mirror of
https://github.com/grafana/grafana.git
synced 2025-12-18 22:16:21 -05:00
Provisioning: Remove migration from legacy storage (#112505)
* Deprecate Legacy Storage Migration in Backend
* Change the messaging around legacy storage
* Disable cards to connect
* Commit import changes
* Block repository creation if resources are in legacy storage
* Update error message
* Prettify
* chore: uncomment unified migration
* chore: adapt and fix tests
* Remove legacy storage migration from frontend
* Refactor provisioning job options by removing legacy storage and history fields
  - Removed the `History` field from `MigrateJobOptions` and related references in the codebase.
  - Eliminated the `LegacyStorage` field from `RepositoryViewList` and its associated comments.
  - Updated tests and the generated OpenAPI schema to reflect these changes.
  - Simplified the `MigrationWorker` by removing dependencies on legacy storage checks.
* Refactor OpenAPI schema and tests to remove deprecated fields
  - Removed the `history` field from `MigrateJobOptions` and updated the OpenAPI schema accordingly.
  - Eliminated the `legacyStorage` field from `RepositoryViewList` and its associated comments in the schema.
  - Updated integration tests to reflect the removal of these fields.
* Fix TypeScript errors
* Refactor provisioning code to remove legacy storage dependencies
  - Eliminated references to `dualwrite.Service` and related legacy storage checks across multiple files.
  - Updated `APIBuilder`, `RepositoryController`, and `SyncWorker` to streamline resource handling without legacy storage considerations.
  - Adjusted tests to reflect the removal of legacy storage mocks and dependencies, ensuring cleaner and more maintainable code.
* Fix unit tests
* Remove more references to legacy storage
* Enhance provisioning wizard with migration options
  - Added a checkbox for migrating existing resources in the BootstrapStep component.
  - Updated the form context to track the new migration option.
  - Adjusted the SynchronizeStep and useCreateSyncJob hook to incorporate the migration logic.
  - Enhanced localization with new descriptions and labels for migration features.
* Remove unused variable and dualwrite reference in provisioning code
  - Eliminated an unused variable declaration in `provisioning_manifest.go`.
  - Removed the `nil` reference for dualwrite in `repo_operator.go`, aligning with the standalone operator's assumption of unified storage.
* Update go.mod and go.sum to include new dependencies
  - Added `github.com/grafana/grafana-app-sdk` version `0.48.5` and several indirect dependencies, including `github.com/getkin/kin-openapi`, `github.com/hashicorp/errwrap`, and others.
  - Updated `go.sum` to reflect the new dependencies and their respective versions.
* Refactor provisioning components for improved readability
  - Simplified the import statement in HomePage.tsx by removing unnecessary line breaks.
  - Consolidated props in the SynchronizeStep component for cleaner code.
  - Enhanced the layout of the ProvisioningWizard component by streamlining the rendering of the SynchronizeStep.
* Deprecate MigrationWorker and clean up related comments
  - Removed the deprecated MigrationWorker implementation and its associated comments from the provisioning code.
  - This change reflects the ongoing effort to eliminate legacy components and improve code maintainability.
* Fix linting issues
* Add explicit comment
* Update useResourceStats hook in BootstrapStep component to accept selected target
  - Modified the BootstrapStep component to pass the selected target to the useResourceStats hook.
  - Updated related tests to reflect the change in expected arguments for the useResourceStats hook.
* fix(provisioning): Update migrate tests to match export-then-sync behavior for all repository types
  Updates test expectations for folder-type repositories to match the implementation changes where both folder and instance repository types now run export followed by sync. Only the namespace cleaner is skipped for folder-type repositories.
  Changes:
  - Update "should run export and sync for folder-type repositories" test to include export mocks
  - Update "should fail when sync job fails for folder-type repositories" test to include export mocks
  - Rename test to clarify that both export and sync run for folder types
  - Add proper mock expectations for SetMessage, StrictMaxErrors, Process, and ResetResults
  All migrate package tests now pass.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Update provisioning wizard text and improve resource counting display
  - Enhanced descriptions for migrating existing resources to clarify that unmanaged resources will also be included.
  - Refactored the BootstrapStepResourceCounting component to simplify the rendering logic and ensure both external storage and unmanaged resources are displayed correctly.
  - Updated alert messages in SynchronizeStep to reflect accurate information regarding resource management during migration.
  - Adjusted localization strings for consistency with the new descriptions.
* Update provisioning wizard alert messages for clarity and accuracy
  - Revised alert points to indicate that resources can still be modified during migration, with a note on potential export issues.
  - Clarified that resources will be marked as managed post-provisioning and that dashboards remain accessible throughout the process.
* Fix issue with triggering the wrong type of job
* Fix export failure when folder already exists in repository
  When exporting resources to a repository, if a folder already exists, the Read() method would fail with a "path component is empty" error. This occurred because:
  1. Folders are identified by a trailing slash (e.g., "Legacy Folder/")
  2. The Read() method passes this path directly to GetTreeByPath()
  3. GetTreeByPath() splits the path by "/", creating empty components
  4. This causes the "path component is empty" error
  The fix strips the trailing slash before calling GetTreeByPath() to avoid empty path components, while still using the trailing-slash convention to identify directories. The Create() method already handles this correctly by appending ".keep" to directory paths, which is why the first export succeeded but subsequent exports failed.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix folder tree not updated when folder already exists in repository
  When exporting resources and a folder already exists in the repository, the folder was not being added to the FolderManager's tree. This caused subsequent dashboard exports to fail with "folder NOT found in tree". The fix adds the folder to fm.tree even when it already exists in the repository, ensuring all folders are available for resource lookups.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Revert "Merge remote-tracking branch 'origin/uncomment-unified-migration-code' into cleanup/deprecate-legacy-storage-migration-in-provisioning"
  This reverts commit 6440fae342, reversing changes made to ec39fb04f2.
* fix: handle empty folder titles in path construction
  - Skip folders with empty titles in dirPath to avoid empty path components
  - Skip folders with empty paths before checking if they exist in the repository
  - Fix unit tests to properly check useResourceStats hook calls with type annotations
* Update workspace
* Fix BootstrapStep tests after reverting unified migration merge
  Updated test expectations to match the current component behavior, where resource counts are displayed for both instance and folder sync options.
  - Changed 'Empty' count expectation from 3 to 4 (2 cards × 2 counts each)
  - Changed '7 resources' test to use findAllByText instead of findByText, since the count appears in multiple cards
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Remove bubbletea deps
* Fix workspace
* provisioning: update error message to reference enableMigration config
  Update the error message shown when provisioning cannot be used due to an incompatible data format to instruct users to enable data migration for folders and dashboards using the enableMigration configuration introduced in PR #114857. Also update the test helper to include EnableMigration: true for both dashboards and folders to match the new configuration pattern.
* provisioning: add comment explaining Mode5 and EnableMigration requirement
  Add a comment in the integration test helper explaining that Provisioning requires Mode5 (unified storage) and EnableMigration (data migration), as it expects resources to be fully migrated to unified storage.
* Remove migrate resources checkbox from folder type provisioning wizard
  - Remove checkbox UI for migrating existing resources in folder type
  - Remove migrateExistingResources from migration logic
  - Simplify migration to only use the requiresMigration flag
  - Remove unused translation keys
  - Update i18n strings
* Fix linting
* Remove unnecessary React Fragment wrapper in BootstrapStep
* Address comments
---------
Co-authored-by: Rafael Paulovic <rafael.paulovic@grafana.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
This commit is contained in:
parent
60298fb02a
commit
7e45a300b9
69 changed files with 392 additions and 4472 deletions
@@ -7,6 +7,7 @@ require (
	github.com/google/go-github/v70 v70.0.0
	github.com/google/uuid v1.6.0
	github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f
	github.com/grafana/grafana-app-sdk v0.48.7
	github.com/grafana/grafana-app-sdk/logging v0.48.7
	github.com/grafana/grafana/apps/secret v0.0.0-20250902093454-b56b7add012f
	github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2
@@ -28,6 +29,7 @@ require (
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/emicklei/go-restful/v3 v3.13.0 // indirect
	github.com/fxamacker/cbor/v2 v2.9.0 // indirect
	github.com/getkin/kin-openapi v0.133.0 // indirect
	github.com/go-jose/go-jose/v3 v3.0.4 // indirect
	github.com/go-jose/go-jose/v4 v4.1.3 // indirect
	github.com/go-logr/logr v1.4.3 // indirect
@@ -54,13 +56,20 @@ require (
	github.com/gorilla/mux v1.8.1 // indirect
	github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
	github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect
	github.com/grafana/grafana-app-sdk v0.48.7 // indirect
	github.com/hashicorp/errwrap v1.1.0 // indirect
	github.com/hashicorp/go-multierror v1.1.1 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/compress v1.18.0 // indirect
	github.com/mailru/easyjson v0.9.1 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 // indirect
	github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 // indirect
	github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
	github.com/perimeterx/marshmallow v1.1.5 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/prometheus/client_golang v1.23.2 // indirect
	github.com/prometheus/client_model v0.6.2 // indirect
@@ -68,6 +77,7 @@ require (
	github.com/prometheus/procfs v0.19.2 // indirect
	github.com/spf13/pflag v1.0.10 // indirect
	github.com/stretchr/objx v0.5.2 // indirect
	github.com/woodsbury/decimal128 v1.4.0 // indirect
	github.com/x448/float16 v0.8.4 // indirect
	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
	go.opentelemetry.io/otel v1.39.0 // indirect
@@ -14,6 +14,8 @@ github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/getkin/kin-openapi v0.133.0 h1:pJdmNohVIJ97r4AUFtEXRXwESr8b0bD721u/Tz6k8PQ=
github.com/getkin/kin-openapi v0.133.0/go.mod h1:boAciF6cXk5FhPqe/NQeBTeenbjqU4LhWBf09ILVvWE=
github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=
github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
@@ -59,6 +61,8 @@ github.com/go-openapi/testify/v2 v2.0.2 h1:X999g3jeLcoY8qctY/c/Z8iBHTbwLz7R2WXd6
github.com/go-openapi/testify/v2 v2.0.2/go.mod h1:HCPmvFFnheKK2BuwSA0TbbdxJ3I16pjwMkYkP4Ywn54=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-test/deep v1.1.1 h1:0r/53hagsehfO4bzD2Pgr/+RgHqhmf+k1Bpse2cTu1U=
github.com/go-test/deep v1.1.1/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
@@ -98,6 +102,13 @@ github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2 h
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2/go.mod h1:RRvSjHH12/PnQaXraMO65jUhVu8n59mzvhfIMBETnV4=
github.com/grafana/nanogit v0.3.0 h1:XNEef+4Vi+465ZITJs/g/xgnDRJbWhhJ7iQrAnWZ0oQ=
github.com/grafana/nanogit v0.3.0/go.mod h1:6s6CCTpyMOHPpcUZaLGI+rgBEKdmxVbhqSGgCK13j7Y=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
@@ -110,6 +121,8 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
github.com/mailru/easyjson v0.9.1/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/migueleliasweb/go-github-mock v1.1.0 h1:GKaOBPsrPGkAKgtfuWY8MclS1xR6MInkx1SexJucMwE=
github.com/migueleliasweb/go-github-mock v1.1.0/go.mod h1:pYe/XlGs4BGMfRY4vmeixVsODHnVDDhJ9zoi0qzSMHc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -118,14 +131,22 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 h1:G7ERwszslrBzRxj//JalHPu/3yz+De2J+4aLtSRlHiY=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037/go.mod h1:2bpvgLBZEtENV5scfDFEtB/5+1M4hkQhDQrccEJ/qGw=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 h1:bQx3WeLcUWy+RletIKwUIt4x3t8n2SxavmoclizMb8c=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/onsi/ginkgo/v2 v2.22.2 h1:/3X8Panh8/WwhU/3Ssa6rCKqPLuAkVY2I0RoyDLySlU=
github.com/onsi/ginkgo/v2 v2.22.2/go.mod h1:oeMosUL+8LtarXBHu/c0bx2D/K9zyQ6uX3cTyztHwsk=
github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -150,6 +171,10 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/woodsbury/decimal128 v1.4.0 h1:xJATj7lLu4f2oObouMt2tgGiElE5gO6mSWUjQsBgUlc=
github.com/woodsbury/decimal128 v1.4.0/go.mod h1:BP46FUrVjVhdTbKT+XuQh2xfQaGki9LMIRJSFuh6THU=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -1,9 +1,10 @@
 package repository
 
 manifest: {
-	appName: "provisioning"
-	groupOverride: "provisioning.grafana.app"
-	kinds: [
+	appName: "provisioning"
+	groupOverride: "provisioning.grafana.app"
+	preferredVersion: "v0alpha1"
+	kinds: [
 		repository,
 		connection
 	]
@@ -80,7 +80,7 @@ repository: {
 	// Enabled must be saved as true before any sync job will run
 	enabled: bool
 	// Where values should be saved
-	target: "unified" | "legacy"
+	target: "instance" | "folder"
 	// When non-zero, the sync will run periodically
 	intervalSeconds?: int
 }
@@ -0,0 +1,92 @@
//
// This file is generated by grafana-app-sdk
// DO NOT EDIT
//

package manifestdata

import (
	"fmt"
	"strings"

	"github.com/grafana/grafana-app-sdk/app"
	"github.com/grafana/grafana-app-sdk/resource"
	"k8s.io/apimachinery/pkg/runtime"
)

var appManifestData = app.ManifestData{
	AppName:          "provisioning",
	Group:            "provisioning.grafana.app",
	PreferredVersion: "v0alpha1",
	Versions:         []app.ManifestVersion{},
}

func LocalManifest() app.Manifest {
	return app.NewEmbeddedManifest(appManifestData)
}

func RemoteManifest() app.Manifest {
	return app.NewAPIServerManifest("provisioning")
}

var kindVersionToGoType = map[string]resource.Kind{}

// ManifestGoTypeAssociator returns the associated resource.Kind instance for a given Kind and Version, if one exists.
// If there is no association for the provided Kind and Version, exists will return false.
func ManifestGoTypeAssociator(kind, version string) (goType resource.Kind, exists bool) {
	goType, exists = kindVersionToGoType[fmt.Sprintf("%s/%s", kind, version)]
	return goType, exists
}

var customRouteToGoResponseType = map[string]any{}

// ManifestCustomRouteResponsesAssociator returns the associated response go type for a given kind, version, custom route path, and method, if one exists.
// kind may be empty for custom routes which are not kind subroutes. Leading slashes are removed from subroute paths.
// If there is no association for the provided kind, version, custom route path, and method, exists will return false.
// Resource routes (those without a kind) should prefix their route with "<namespace>/" if the route is namespaced (otherwise the route is assumed to be cluster-scope)
func ManifestCustomRouteResponsesAssociator(kind, version, path, verb string) (goType any, exists bool) {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	goType, exists = customRouteToGoResponseType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
	return goType, exists
}

var customRouteToGoParamsType = map[string]runtime.Object{}

func ManifestCustomRouteQueryAssociator(kind, version, path, verb string) (goType runtime.Object, exists bool) {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	goType, exists = customRouteToGoParamsType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
	return goType, exists
}

var customRouteToGoRequestBodyType = map[string]any{}

func ManifestCustomRouteRequestBodyAssociator(kind, version, path, verb string) (goType any, exists bool) {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	goType, exists = customRouteToGoRequestBodyType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
	return goType, exists
}

type GoTypeAssociator struct{}

func NewGoTypeAssociator() *GoTypeAssociator {
	return &GoTypeAssociator{}
}

func (g *GoTypeAssociator) KindToGoType(kind, version string) (goType resource.Kind, exists bool) {
	return ManifestGoTypeAssociator(kind, version)
}
func (g *GoTypeAssociator) CustomRouteReturnGoType(kind, version, path, verb string) (goType any, exists bool) {
	return ManifestCustomRouteResponsesAssociator(kind, version, path, verb)
}
func (g *GoTypeAssociator) CustomRouteQueryGoType(kind, version, path, verb string) (goType runtime.Object, exists bool) {
	return ManifestCustomRouteQueryAssociator(kind, version, path, verb)
}
func (g *GoTypeAssociator) CustomRouteRequestBodyGoType(kind, version, path, verb string) (goType any, exists bool) {
	return ManifestCustomRouteRequestBodyAssociator(kind, version, path, verb)
}
@@ -136,9 +136,6 @@ type ExportJobOptions struct {
 }
 
 type MigrateJobOptions struct {
-	// Preserve history (if possible)
-	History bool `json:"history,omitempty"`
-
 	// Message to use when committing the changes in a single commit
 	Message string `json:"message,omitempty"`
 }
@@ -9,11 +9,6 @@ import (
 type RepositoryViewList struct {
 	metav1.TypeMeta `json:",inline"`
 
-	// The backend is using legacy storage
-	// FIXME: Not sure where this should be exposed... but we need it somewhere
-	// The UI should force the onboarding workflow when this is true
-	LegacyStorage bool `json:"legacyStorage,omitempty"`
-
 	// The valid targets (can disable instance or folder types)
 	AllowedTargets []SyncTargetType `json:"allowedTargets,omitempty"`
 
@@ -1495,13 +1495,6 @@ func schema_pkg_apis_provisioning_v0alpha1_MigrateJobOptions(ref common.Referenc
 		SchemaProps: spec.SchemaProps{
 			Type: []string{"object"},
 			Properties: map[string]spec.Schema{
-				"history": {
-					SchemaProps: spec.SchemaProps{
-						Description: "Preserve history (if possible)",
-						Type:        []string{"boolean"},
-						Format:      "",
-					},
-				},
 				"message": {
 					SchemaProps: spec.SchemaProps{
 						Description: "Message to use when committing the changes in a single commit",
@@ -2119,13 +2112,6 @@ func schema_pkg_apis_provisioning_v0alpha1_RepositoryViewList(ref common.Referen
 					Format: "",
 				},
 			},
-			"legacyStorage": {
-				SchemaProps: spec.SchemaProps{
-					Description: "The backend is using legacy storage FIXME: Not sure where this should be exposed... but we need it somewhere The UI should force the onboarding workflow when this is true",
-					Type:        []string{"boolean"},
-					Format:      "",
-				},
-			},
 			"allowedTargets": {
 				SchemaProps: spec.SchemaProps{
 					Description: "The valid targets (can disable instance or folder types)",
@@ -7,7 +7,6 @@ package v0alpha1
 // MigrateJobOptionsApplyConfiguration represents a declarative configuration of the MigrateJobOptions type for use
 // with apply.
 type MigrateJobOptionsApplyConfiguration struct {
-	History *bool `json:"history,omitempty"`
 	Message *string `json:"message,omitempty"`
 }
 
@@ -17,14 +16,6 @@ func MigrateJobOptions() *MigrateJobOptionsApplyConfiguration {
 	return &MigrateJobOptionsApplyConfiguration{}
 }
 
-// WithHistory sets the History field in the declarative configuration to the given value
-// and returns the receiver, so that objects can be built by chaining "With" function invocations.
-// If called multiple times, the History field is set to the value of the last call.
-func (b *MigrateJobOptionsApplyConfiguration) WithHistory(value bool) *MigrateJobOptionsApplyConfiguration {
-	b.History = &value
-	return b
-}
-
 // WithMessage sets the Message field in the declarative configuration to the given value
 // and returns the receiver, so that objects can be built by chaining "With" function invocations.
 // If called multiple times, the Message field is set to the value of the last call.
@@ -384,8 +384,7 @@ func TestValidateJob(t *testing.T) {
 				Action:     provisioning.JobActionMigrate,
 				Repository: "test-repo",
 				Migrate: &provisioning.MigrateJobOptions{
-					History: true,
-					Message: "Migrate from legacy",
+					Message: "Migrate from unified",
 				},
 			},
 		},
@@ -238,6 +238,8 @@ func (r *gitRepository) Read(ctx context.Context, filePath, ref string) (*reposi
 
 	// Check if the path represents a directory
 	if safepath.IsDir(filePath) {
+		// Strip trailing slash for git tree lookup to avoid empty path components
+		finalPath = strings.TrimSuffix(finalPath, "/")
 		tree, err := r.client.GetTreeByPath(ctx, commit.Tree, finalPath)
 		if err != nil {
 			if errors.Is(err, nanogit.ErrObjectNotFound) {
go.work.sum (13 changes)
@@ -13,6 +13,7 @@ cel.dev/expr v0.16.0/go.mod h1:TRSuuV7DlVCE/uwv5QbAiW/v8l5O8C4eEPHeu7gf7Sg=
cel.dev/expr v0.19.0/go.mod h1:MrpN08Q+lEBs+bGYdLxxHkZoUSsCp0nSKTs0nTymJgw=
cel.dev/expr v0.23.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.23.1/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.82.0/go.mod h1:vlKccHJGuFBFufnAnuB08dfEH9Y3H7dzDzRECFdC2TA=
cloud.google.com/go v0.121.0/go.mod h1:rS7Kytwheu/y9buoDmu5EIpMMCI4Mb8ND4aeN4Vwj7Q=
cloud.google.com/go v0.121.1/go.mod h1:nRFlrHq39MNVWu+zESP2PosMWA0ryJw8KUBZ2iZpxbw=
@@ -329,6 +330,8 @@ github.com/KimMachineGun/automemlimit v0.7.1 h1:QcG/0iCOLChjfUweIMC3YL5Xy9C3VBeN
github.com/KimMachineGun/automemlimit v0.7.1/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=
github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=
github.com/KyleBanks/depth v1.2.1/go.mod h1:jzSb9d0L43HxTQfT+oSA1EEp2q+ne2uh6XgeJcm8brE=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
github.com/MarvinJWendt/testza v0.1.0/go.mod h1:7AxNvlfeHP7Z/hDQ5JtE3OKYT3XFUeLCDE2DQninSqs=
github.com/MarvinJWendt/testza v0.2.1/go.mod h1:God7bhG8n6uQxwdScay+gjm9/LnO4D3kkcZX4hv9Rp8=
github.com/MarvinJWendt/testza v0.2.8/go.mod h1:nwIcjmr0Zz+Rcwfh3/4UhBp7ePKVhuBExvZqnKYWlII=
@@ -490,6 +493,8 @@ github.com/aws/smithy-go v1.22.5/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp
github.com/aws/smithy-go v1.23.0/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2 h1:CJyGEyO1CIwOnXTU40urf0mchf6t3voxpvUDikOU9LY=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2/go.mod h1:vxxjwBHe/KbgFeNlAP/Tvp4SsVRL3WQamcWRxqVh0z0=
github.com/aymanbagabas/go-udiff v0.2.0 h1:TK0fH4MteXUDspT88n8CKzvK0X9O2xu9yQjWpi6yML8=
github.com/aymanbagabas/go-udiff v0.2.0/go.mod h1:RE4Ex0qsGkTAJoQdQQCA0uG+nAzJO/pI/QwceO5fgrA=
github.com/aymerick/douceur v0.2.0 h1:Mv+mAeH1Q+n9Fr+oyamOlAkUNPWPlA8PPGR0QAaYuPk=
github.com/aymerick/douceur v0.2.0/go.mod h1:wlT5vV2O3h55X9m7iVYN0TBM0NH/MmbLnd30/FjWUq4=
github.com/baidubce/bce-sdk-go v0.9.188 h1:8MA7ewe4VpX01uYl7Kic6ZvfIReUFdSKbY46ZqlQM7U=
@@ -528,6 +533,10 @@ github.com/campoy/embedmd v1.0.0 h1:V4kI2qTJJLf4J29RzI/MAt2c3Bl4dQSYPuflzwFH2hY=
github.com/campoy/embedmd v1.0.0/go.mod h1:oxyr9RCiSXg0M3VJ3ks0UGfp98BpSSGr0kpiX3MzVl8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/census-instrumentation/opencensus-proto v0.4.1 h1:iKLQ0xPNFxR/2hzXZMrBo8f1j86j5WHzznCCQxV/b8g=
github.com/charmbracelet/harmonica v0.2.0 h1:8NxJWRWg/bzKqqEaaeFNipOu77YR5t8aSwG4pgaUBiQ=
github.com/charmbracelet/harmonica v0.2.0/go.mod h1:KSri/1RMQOZLbw7AHqgcBycp8pgJnQMYYT8QZRqZ1Ao=
github.com/charmbracelet/x/exp/golden v0.0.0-20241011142426-46044092ad91 h1:payRxjMjKgx2PaCWLZ4p3ro9y97+TVLZNaRZgJwSVDQ=
github.com/charmbracelet/x/exp/golden v0.0.0-20241011142426-46044092ad91/go.mod h1:wDlXFlCrmJ8J+swcL/MnGUuYnqgQdW9rhSD61oNMb6U=
github.com/centrifugal/centrifuge v0.37.2/go.mod h1:aj4iRJGhzi3SlL8iUtVezxway1Xf8g+hmNQkLLO7sS8=
github.com/centrifugal/protocol v0.16.2/go.mod h1:Q7OpS/8HMXDnL7f9DpNx24IhG96MP88WPpVTTCdrokI=
github.com/chenzhuoyu/base64x v0.0.0-20230717121745-296ad89f973d h1:77cEq6EriyTZ0g/qfRdp61a3Uu/AWrgIq2s0ClJV1g0=
@ -719,6 +728,7 @@ github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731 h1:R/ZjJpjQK
|
|||
github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731/go.mod h1:M9R1FoZ3y//hwwnJtO51ypFGwm8ZfpxPT/ZLtO1mcgQ=
|
||||
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
|
||||
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
|
||||
github.com/expr-lang/expr v1.17.6/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
|
||||
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
|
||||
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
|
||||
github.com/fatih/color v1.17.0/go.mod h1:YZ7TlrGPkiz6ku9fK3TLD/pl3CpsiFyu8N92HLgmosI=
|
||||
|
|
@ -1406,6 +1416,8 @@ github.com/sagikazarmark/crypt v0.6.0 h1:REOEXCs/NFY/1jOCEouMuT4zEniE5YoXbvpC5X/
|
|||
github.com/sagikazarmark/locafero v0.9.0/go.mod h1:UBUyz37V+EdMS3hDF3QWIiVr/2dPrx49OMO0Bn0hJqk=
|
||||
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
|
||||
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
|
||||
github.com/sahilm/fuzzy v0.1.1 h1:ceu5RHF8DGgoi+/dR5PsECjCDH1BE3Fnmpo7aVXOdRA=
|
||||
github.com/sahilm/fuzzy v0.1.1/go.mod h1:VFvziUEIMCrT6A6tw2RFIXPXXmzXbOsSHF0DOI8ZK9Y=
|
||||
github.com/samber/lo v1.47.0 h1:z7RynLwP5nbyRscyvcD043DWYoOcYRv3mV8lBeqOCLc=
|
||||
github.com/samber/lo v1.47.0/go.mod h1:RmDH9Ct32Qy3gduHQuKJ3gW1fMHAnE/fAzQuf6He5cU=
|
||||
github.com/samber/slog-common v0.18.1 h1:c0EipD/nVY9HG5shgm/XAs67mgpWDMF+MmtptdJNCkQ=
|
||||
|
|
@ -1956,6 +1968,7 @@ golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
|
|||
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
|
||||
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
|
||||
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
|
||||
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
|
||||
golang.org/x/net v0.0.0-20190921015927-1a5e07d1ff72/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
|
||||
|
|
|
|||
|
|
@@ -1585,8 +1585,6 @@ export type DeleteJobOptions = {
  resources?: ResourceRef[];
};
export type MigrateJobOptions = {
  /** Preserve history (if possible) */
  history?: boolean;
  /** Message to use when committing the changes in a single commit */
  message?: string;
};

@@ -2047,8 +2045,6 @@ export type RepositoryViewList = {
  items: RepositoryView[];
  /** Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds */
  kind?: string;
  /** The backend is using legacy storage FIXME: Not sure where this should be exposed... but we need it somewhere The UI should force the onboarding workflow when this is true */
  legacyStorage?: boolean;
};
export type ManagerStats = {
  /** Manager identity */
@@ -85,7 +85,6 @@ func RunRepoController(deps server.OperatorDependencies) error {
		resourceLister,
		controllerCfg.clients,
		jobs,
		nil, // dualwrite -- standalone operator assumes it is backed by unified storage
		healthChecker,
		statusPatcher,
		deps.Registerer,
@@ -26,7 +26,6 @@ import (
	"github.com/grafana/grafana/pkg/infra/tracing"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
	"github.com/prometheus/client_golang/prometheus"
)

@@ -53,7 +52,6 @@ type RepositoryController struct {
	repoLister listers.RepositoryLister
	repoSynced cache.InformerSynced
	logger     logging.Logger
	dualwrite  dualwrite.Service

	jobs interface {
		jobs.Queue

@@ -86,7 +84,6 @@ func NewRepositoryController(
		jobs.Queue
		jobs.Store
	},
	dualwrite dualwrite.Service,
	healthChecker *HealthChecker,
	statusPatcher StatusPatcher,
	registry prometheus.Registerer,

@@ -114,11 +111,10 @@
			metrics:    &finalizerMetrics,
			maxWorkers: parallelOperations,
		},
		jobs:      jobs,
		logger:    logging.DefaultLogger.With("logger", loggerName),
		dualwrite: dualwrite,
		registry:  registry,
		tracer:    tracer,
		jobs:     jobs,
		logger:   logging.DefaultLogger.With("logger", loggerName),
		registry: registry,
		tracer:   tracer,
	}

	_, err := repoInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{

@@ -356,9 +352,6 @@ func (rc *RepositoryController) determineSyncStrategy(ctx context.Context, obj *
	case !healthStatus.Healthy:
		logger.Info("skip sync for unhealthy repository")
		return nil
	case rc.dualwrite != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, rc.dualwrite):
		logger.Info("skip sync as we are reading from legacy storage")
		return nil
	case healthStatus.Healthy != obj.Status.Health.Healthy:
		logger.Info("repository became healthy, full resync")
		return &provisioning.SyncJobOptions{}
@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.

package export

@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.

package export
@@ -49,7 +49,6 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
				Path: "/exported",
			},
			Migrate: &provisioning.MigrateJobOptions{
				History: true,
				Message: "Migration test",
			},
			Delete: &provisioning.DeleteJobOptions{

@@ -101,7 +100,7 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
	}

	t.Run("jobToStream creates correct stream with all fields", func(t *testing.T) {
		history := createTestLokiJobHistory(t)
		history := createTestLokiJobHistory()
		// Clean job copy like WriteJob does
		jobCopy := job.DeepCopy()
		delete(jobCopy.Labels, LabelJobClaim)

@@ -147,7 +146,6 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
		assert.Equal(t, "main", deserializedJob.Spec.Push.Branch)
		assert.Equal(t, "/exported", deserializedJob.Spec.Push.Path)
		require.NotNil(t, deserializedJob.Spec.Migrate)
		assert.True(t, deserializedJob.Spec.Migrate.History)
		assert.Equal(t, "Migration test", deserializedJob.Spec.Migrate.Message)
		require.NotNil(t, deserializedJob.Spec.Delete)
		assert.Equal(t, "main", deserializedJob.Spec.Delete.Ref)

@@ -190,7 +188,7 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
	})

	t.Run("buildJobQuery creates correct LogQL", func(t *testing.T) {
		history := createTestLokiJobHistory(t)
		history := createTestLokiJobHistory()

		query := history.buildJobQuery("test-ns", "test-repo")

@@ -199,7 +197,7 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
	})

	t.Run("getJobTimestamp returns correct timestamp", func(t *testing.T) {
		history := createTestLokiJobHistory(t)
		history := createTestLokiJobHistory()

		// Test finished time priority
		jobWithFinished := &provisioning.Job{

@@ -584,7 +582,7 @@ func TestLokiJobHistory_GetJob(t *testing.T) {
}

// createTestLokiJobHistory creates a LokiJobHistory for testing
func createTestLokiJobHistory(t *testing.T) *LokiJobHistory {
func createTestLokiJobHistory() *LokiJobHistory {
	// Create test URLs
	readURL, _ := url.Parse("http://localhost:3100")
	writeURL, _ := url.Parse("http://localhost:3100")
@@ -1,95 +0,0 @@
package migrate

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/grafana/grafana-app-sdk/logging"
	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
)

type LegacyMigrator struct {
	legacyMigrator  LegacyResourcesMigrator
	storageSwapper  StorageSwapper
	syncWorker      jobs.Worker
	wrapWithStageFn WrapWithStageFn
}

func NewLegacyMigrator(
	legacyMigrator LegacyResourcesMigrator,
	storageSwapper StorageSwapper,
	syncWorker jobs.Worker,
	wrapWithStageFn WrapWithStageFn,
) *LegacyMigrator {
	return &LegacyMigrator{
		legacyMigrator:  legacyMigrator,
		storageSwapper:  storageSwapper,
		syncWorker:      syncWorker,
		wrapWithStageFn: wrapWithStageFn,
	}
}

func (m *LegacyMigrator) Migrate(ctx context.Context, rw repository.ReaderWriter, options provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
	namespace := rw.Config().Namespace
	var stageMode repository.StageMode
	if options.History {
		// When History is true, we want to commit and push each file (previous PushOnWrites: true)
		stageMode = repository.StageModeCommitAndPushOnEach
	} else {
		// When History is false, we want to commit only once (previous CommitOnlyOnce: true)
		stageMode = repository.StageModeCommitOnlyOnce
	}

	stageOptions := repository.StageOptions{
		Mode:                  stageMode,
		CommitOnlyOnceMessage: options.Message,
		// TODO: make this configurable
		Timeout: 10 * time.Minute,
	}

	// Fail if migrating at least one
	progress.StrictMaxErrors(1)
	progress.SetMessage(ctx, "migrating legacy resources")

	if err := m.wrapWithStageFn(ctx, rw, stageOptions, func(repo repository.Repository, staged bool) error {
		rw, ok := repo.(repository.ReaderWriter)
		if !ok {
			return errors.New("migration job submitted targeting repository that is not a ReaderWriter")
		}

		return m.legacyMigrator.Migrate(ctx, rw, namespace, options, progress)
	}); err != nil {
		return fmt.Errorf("migrate from SQL: %w", err)
	}

	progress.SetMessage(ctx, "resetting unified storage")
	if err := m.storageSwapper.WipeUnifiedAndSetMigratedFlag(ctx, namespace); err != nil {
		return fmt.Errorf("unable to reset unified storage %w", err)
	}

	// Reset the results after the export as pull will operate on the same resources
	progress.ResetResults()

	// Delegate the import to a sync (from the already checked out go-git repository!)
	progress.SetMessage(ctx, "pulling resources")
	if err := m.syncWorker.Process(ctx, rw, provisioning.Job{
		Spec: provisioning.JobSpec{
			Pull: &provisioning.SyncJobOptions{
				Incremental: false,
			},
		},
	}, progress); err != nil { // this will have an error when too many errors exist
		progress.SetMessage(ctx, "error importing resources, reverting")
		if e2 := m.storageSwapper.StopReadingUnifiedStorage(ctx); e2 != nil {
			logger := logging.FromContext(ctx)
			logger.Warn("error trying to revert dual write settings after an error", "err", err)
		}
		return err
	}

	return nil
}
@@ -1,265 +0,0 @@
package migrate

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"

	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
	"github.com/grafana/grafana/pkg/apimachinery/utils"
	"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/export"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources/signature"
	unifiedmigrations "github.com/grafana/grafana/pkg/storage/unified/migrations"
	"github.com/grafana/grafana/pkg/storage/unified/parquet"
	"github.com/grafana/grafana/pkg/storage/unified/resource"
	"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)

var _ resource.BulkResourceWriter = (*legacyResourceResourceMigrator)(nil)

//go:generate mockery --name LegacyResourcesMigrator --structname MockLegacyResourcesMigrator --inpackage --filename mock_legacy_resources_migrator.go --with-expecter
type LegacyResourcesMigrator interface {
	Migrate(ctx context.Context, rw repository.ReaderWriter, namespace string, opts provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error
}

type legacyResourcesMigrator struct {
	repositoryResources resources.RepositoryResourcesFactory
	parsers             resources.ParserFactory
	dashboardAccess     legacy.MigrationDashboardAccessor
	signerFactory       signature.SignerFactory
	clients             resources.ClientFactory
	exportFn            export.ExportFn
}

func NewLegacyResourcesMigrator(
	repositoryResources resources.RepositoryResourcesFactory,
	parsers resources.ParserFactory,
	dashboardAccess legacy.MigrationDashboardAccessor,
	signerFactory signature.SignerFactory,
	clients resources.ClientFactory,
	exportFn export.ExportFn,
) LegacyResourcesMigrator {
	return &legacyResourcesMigrator{
		repositoryResources: repositoryResources,
		parsers:             parsers,
		dashboardAccess:     dashboardAccess,
		signerFactory:       signerFactory,
		clients:             clients,
		exportFn:            exportFn,
	}
}

func (m *legacyResourcesMigrator) Migrate(ctx context.Context, rw repository.ReaderWriter, namespace string, opts provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
	parser, err := m.parsers.GetParser(ctx, rw)
	if err != nil {
		return fmt.Errorf("get parser: %w", err)
	}

	repositoryResources, err := m.repositoryResources.Client(ctx, rw)
	if err != nil {
		return fmt.Errorf("get repository resources: %w", err)
	}

	// FIXME: signature is only relevant for repositories which support signature
	// Not all repositories support history
	signer, err := m.signerFactory.New(ctx, signature.SignOptions{
		Namespace: namespace,
		History:   opts.History,
	})
	if err != nil {
		return fmt.Errorf("get signer: %w", err)
	}

	progress.SetMessage(ctx, "migrate folders from SQL")
	clients, err := m.clients.Clients(ctx, namespace)
	if err != nil {
		return err
	}

	// nothing special for the export for now
	exportOpts := provisioning.ExportJobOptions{}
	if err = m.exportFn(ctx, rw.Config().Name, exportOpts, clients, repositoryResources, progress); err != nil {
		return fmt.Errorf("migrate folders from SQL: %w", err)
	}

	progress.SetMessage(ctx, "migrate resources from SQL")
	for _, kind := range resources.SupportedProvisioningResources {
		if kind == resources.FolderResource {
			continue // folders have special handling
		}

		reader := newLegacyResourceMigrator(
			rw,
			m.dashboardAccess,
			parser,
			repositoryResources,
			progress,
			opts,
			namespace,
			kind.GroupResource(),
			signer,
		)

		if err := reader.Migrate(ctx); err != nil {
			return fmt.Errorf("migrate resource %s: %w", kind, err)
		}
	}

	return nil
}

type legacyResourceResourceMigrator struct {
	repo            repository.ReaderWriter
	dashboardAccess legacy.MigrationDashboardAccessor
	parser          resources.Parser
	progress        jobs.JobProgressRecorder
	namespace       string
	kind            schema.GroupResource
	options         provisioning.MigrateJobOptions
	resources       resources.RepositoryResources
	signer          signature.Signer
	history         map[string]string // UID >> file path
}

func newLegacyResourceMigrator(
	repo repository.ReaderWriter,
	dashboardAccess legacy.MigrationDashboardAccessor,
	parser resources.Parser,
	resources resources.RepositoryResources,
	progress jobs.JobProgressRecorder,
	options provisioning.MigrateJobOptions,
	namespace string,
	kind schema.GroupResource,
	signer signature.Signer,
) *legacyResourceResourceMigrator {
	var history map[string]string
	if options.History {
		history = make(map[string]string)
	}
	return &legacyResourceResourceMigrator{
		repo:            repo,
		dashboardAccess: dashboardAccess,
		parser:          parser,
		progress:        progress,
		options:         options,
		namespace:       namespace,
		kind:            kind,
		resources:       resources,
		signer:          signer,
		history:         history,
	}
}

// Close implements resource.BulkResourceWriter.
func (r *legacyResourceResourceMigrator) Close() error {
	return nil
}

// CloseWithResults implements resource.BulkResourceWriter.
func (r *legacyResourceResourceMigrator) CloseWithResults() (*resourcepb.BulkResponse, error) {
	return &resourcepb.BulkResponse{}, nil
}

// Write implements resource.BulkResourceWriter.
func (r *legacyResourceResourceMigrator) Write(ctx context.Context, key *resourcepb.ResourceKey, value []byte) error {
	// Reuse the same parse+cleanup logic
	parsed, err := r.parser.Parse(ctx, &repository.FileInfo{
		Path: "", // empty path to ignore file system
		Data: value,
	})
	if err != nil {
		return fmt.Errorf("unmarshal unstructured: %w", err)
	}

	// clear anything so it will get written
	parsed.Meta.SetManagerProperties(utils.ManagerProperties{})
	parsed.Meta.SetSourceProperties(utils.SourceProperties{})

	// Add author signature to the context
	ctx, err = r.signer.Sign(ctx, parsed.Meta)
	if err != nil {
		return fmt.Errorf("add author signature: %w", err)
	}

	// TODO: this seems to be same logic as the export job
	// TODO: we should use a kind safe manager here
	fileName, err := r.resources.WriteResourceFileFromObject(ctx, parsed.Obj, resources.WriteOptions{
		Path: "",
		Ref:  "",
	})

	// When replaying history, the path to the file may change over time
	// This happens when the title or folder change
	if r.history != nil && err == nil {
		name := parsed.Meta.GetName()
		previous := r.history[name]
		if previous != "" && previous != fileName {
			err = r.repo.Delete(ctx, previous, "", fmt.Sprintf("moved to: %s", fileName))
		}
		r.history[name] = fileName
	}

	result := jobs.JobResourceResult{
		Name:   parsed.Meta.GetName(),
		Group:  r.kind.Group,
		Kind:   parsed.GVK.Kind,
		Action: repository.FileActionCreated,
		Path:   fileName,
	}

	if err != nil {
		result.Error = fmt.Errorf("writing resource %s/%s %s to file %s: %w", r.kind.Group, r.kind.Resource, parsed.Meta.GetName(), fileName, err)
	}

	r.progress.Record(ctx, result)
	if err := r.progress.TooManyErrors(); err != nil {
		return err
	}

	return nil
}

func (r *legacyResourceResourceMigrator) Migrate(ctx context.Context) error {
	r.progress.SetMessage(ctx, fmt.Sprintf("migrate %s resource", r.kind.Resource))

	// Create a parquet migrator with this instance as the BulkResourceWriter
	parquetClient := parquet.NewBulkResourceWriterClient(r)
	migrator := unifiedmigrations.ProvideUnifiedMigratorParquet(
		r.dashboardAccess,
		parquetClient,
	)

	opts := legacy.MigrateOptions{
		Namespace:   r.namespace,
		WithHistory: r.options.History,
		Resources:   []schema.GroupResource{r.kind},
		OnlyCount:   true, // first get the count
	}
	stats, err := migrator.Migrate(ctx, opts)
	if err != nil {
		return fmt.Errorf("unable to count legacy items %w", err)
	}

	// FIXME: explain why we calculate it in this way
	if len(stats.Summary) > 0 {
		count := stats.Summary[0].Count //
		history := stats.Summary[0].History
		if history > count {
			count = history // the number of items we will process
		}
		r.progress.SetTotal(ctx, int(count))
	}

	opts.OnlyCount = false // this time actually write
	_, err = migrator.Migrate(ctx, opts)
	if err != nil {
		return fmt.Errorf("migrate legacy %s: %w", r.kind.Resource, err)
	}

	return nil
}
@ -1,934 +0,0 @@
|
|||
package migrate
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
|
||||
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
|
||||
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
|
||||
"github.com/grafana/grafana/pkg/apimachinery/utils"
|
||||
"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
|
||||
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
|
||||
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/export"
|
||||
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
|
||||
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources/signature"
|
||||
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
|
||||
)
|
||||
|
||||
func TestLegacyResourcesMigrator_Migrate(t *testing.T) {
|
||||
t.Run("should fail when parser factory fails", func(t *testing.T) {
|
||||
mockParserFactory := resources.NewMockParserFactory(t)
|
||||
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
|
||||
Return(nil, errors.New("parser factory error"))
|
||||
|
||||
signerFactory := signature.NewMockSignerFactory(t)
|
||||
mockClientFactory := resources.NewMockClientFactory(t)
|
||||
mockExportFn := export.NewMockExportFn(t)
|
||||
|
||||
migrator := NewLegacyResourcesMigrator(
|
||||
nil,
|
||||
mockParserFactory,
|
||||
nil,
|
||||
signerFactory,
|
||||
mockClientFactory,
|
||||
mockExportFn.Execute,
|
||||
)
|
||||
|
||||
err := migrator.Migrate(context.Background(), nil, "test-namespace", provisioning.MigrateJobOptions{}, jobs.NewMockJobProgressRecorder(t))
|
||||
require.Error(t, err)
|
||||
require.EqualError(t, err, "get parser: parser factory error")
|
||||
|
||||
mockParserFactory.AssertExpectations(t)
|
||||
mockExportFn.AssertExpectations(t)
|
||||
mockClientFactory.AssertExpectations(t)
|
||||
})
|
||||
|
||||
t.Run("should fail when repository resources factory fails", func(t *testing.T) {
|
||||
mockParserFactory := resources.NewMockParserFactory(t)
|
||||
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
|
||||
Return(resources.NewMockParser(t), nil)
|
||||
|
||||
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
|
||||
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
|
||||
Return(nil, errors.New("repo resources factory error"))
|
||||
signerFactory := signature.NewMockSignerFactory(t)
|
||||
mockClientFactory := resources.NewMockClientFactory(t)
|
||||
mockExportFn := export.NewMockExportFn(t)
|
||||
|
||||
migrator := NewLegacyResourcesMigrator(
|
||||
mockRepoResourcesFactory,
|
||||
mockParserFactory,
|
||||
nil,
|
||||
signerFactory,
|
||||
mockClientFactory,
|
||||
mockExportFn.Execute,
|
||||
)
|
||||
|
||||
err := migrator.Migrate(context.Background(), nil, "test-namespace", provisioning.MigrateJobOptions{}, jobs.NewMockJobProgressRecorder(t))
|
||||
require.Error(t, err)
|
||||
require.EqualError(t, err, "get repository resources: repo resources factory error")
|
||||
|
||||
mockParserFactory.AssertExpectations(t)
|
||||
mockRepoResourcesFactory.AssertExpectations(t)
|
||||
mockExportFn.AssertExpectations(t)
|
||||
mockClientFactory.AssertExpectations(t)
|
||||
})
|
||||
|
||||
t.Run("should fail when resource migration fails", func(t *testing.T) {
|
||||
mockParserFactory := resources.NewMockParserFactory(t)
|
||||
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
|
||||
Return(resources.NewMockParser(t), nil)
|
||||
|
||||
mockRepoResources := resources.NewMockRepositoryResources(t)
|
||||
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
|
||||
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
|
||||
Return(mockRepoResources, nil)
|
||||
|
||||
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
|
||||
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
|
||||
return opts.OnlyCount && opts.Namespace == "test-namespace"
|
||||
})).Return(&resourcepb.BulkResponse{}, errors.New("legacy migrator error"))
|
||||
|
||||
progress := jobs.NewMockJobProgressRecorder(t)
|
||||
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
|
||||
|
||||
signer := signature.NewMockSigner(t)
|
||||
signerFactory := signature.NewMockSignerFactory(t)
|
||||
signerFactory.On("New", mock.Anything, mock.Anything).
|
||||
Return(signer, nil)
|
||||
|
||||
mockClients := resources.NewMockResourceClients(t)
|
||||
mockClientFactory := resources.NewMockClientFactory(t)
|
||||
mockClientFactory.On("Clients", mock.Anything, "test-namespace").
|
||||
Return(mockClients, nil)
|
||||
mockExportFn := export.NewMockExportFn(t)
|
||||
|
||||
migrator := NewLegacyResourcesMigrator(
|
||||
mockRepoResourcesFactory,
|
||||
mockParserFactory,
|
||||
mockDashboardAccess,
|
||||
signerFactory,
|
||||
mockClientFactory,
|
||||
mockExportFn.Execute,
|
||||
)
|
||||
repo := repository.NewMockRepository(t)
|
||||
repo.On("Config").Return(&provisioning.Repository{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "test-namespace",
|
||||
Name: "test-repo",
|
||||
},
|
||||
})
|
||||
mockExportFn.On("Execute", mock.Anything, mock.Anything, provisioning.ExportJobOptions{}, mockClients, mockRepoResources, mock.Anything).
|
||||
Return(nil)
|
||||
|
||||
err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{}, progress)
|
||||
require.Error(t, err)
|
||||
require.Contains(t, err.Error(), "migrate resource")
|
||||
|
||||
mockParserFactory.AssertExpectations(t)
|
||||
mockRepoResourcesFactory.AssertExpectations(t)
|
||||
mockDashboardAccess.AssertExpectations(t)
|
||||
progress.AssertExpectations(t)
|
||||
mockExportFn.AssertExpectations(t)
|
||||
mockClientFactory.AssertExpectations(t)
|
||||
mockClients.AssertExpectations(t)
|
||||
repo.AssertExpectations(t)
|
||||
})
|
||||
t.Run("should fail when client creation fails", func(t *testing.T) {
|
||||
mockParserFactory := resources.NewMockParserFactory(t)
|
||||
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
|
||||
Return(resources.NewMockParser(t), nil)
|
||||
|
||||
mockRepoResources := resources.NewMockRepositoryResources(t)
|
||||
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
|
||||
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
|
||||
Return(mockRepoResources, nil)
|
||||
|
||||
mockSigner := signature.NewMockSigner(t)
|
||||
mockSignerFactory := signature.NewMockSignerFactory(t)
|
||||
mockSignerFactory.On("New", mock.Anything, mock.Anything).
|
||||
Return(mockSigner, nil)
|
||||
|
||||
mockClientFactory := resources.NewMockClientFactory(t)
|
||||
mockClientFactory.On("Clients", mock.Anything, "test-namespace").
|
||||
Return(nil, errors.New("client creation error"))
|
||||
|
||||
mockExportFn := export.NewMockExportFn(t)
|
||||
|
||||
progress := jobs.NewMockJobProgressRecorder(t)
|
||||
progress.On("SetMessage", mock.Anything, "migrate folders from SQL").Return()
|
||||
|
||||
migrator := NewLegacyResourcesMigrator(
|
||||
mockRepoResourcesFactory,
|
||||
mockParserFactory,
|
||||
nil,
|
||||
mockSignerFactory,
|
||||
mockClientFactory,
|
||||
mockExportFn.Execute,
|
||||
)
|
||||
|
||||
repo := repository.NewMockRepository(t)
|
||||
err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{}, progress)
|
||||
require.Error(t, err)
|
||||
require.EqualError(t, err, "client creation error")
|
||||
|
||||
mockParserFactory.AssertExpectations(t)
|
||||
mockRepoResourcesFactory.AssertExpectations(t)
|
||||
mockSignerFactory.AssertExpectations(t)
|
||||
mockClientFactory.AssertExpectations(t)
|
||||
progress.AssertExpectations(t)
|
||||
mockExportFn.AssertExpectations(t)
|
||||
repo.AssertExpectations(t)
|
||||
})

	t.Run("should fail when signer factory fails", func(t *testing.T) {
		mockParserFactory := resources.NewMockParserFactory(t)
		mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
			Return(resources.NewMockParser(t), nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
		mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
			Return(mockRepoResources, nil)

		mockSignerFactory := signature.NewMockSignerFactory(t)
		mockSignerFactory.On("New", mock.Anything, signature.SignOptions{
			Namespace: "test-namespace",
			History:   true,
		}).Return(nil, fmt.Errorf("signer factory error"))

		mockClientFactory := resources.NewMockClientFactory(t)
		mockExportFn := export.NewMockExportFn(t)

		progress := jobs.NewMockJobProgressRecorder(t)
		migrator := NewLegacyResourcesMigrator(
			mockRepoResourcesFactory,
			mockParserFactory,
			nil,
			mockSignerFactory,
			mockClientFactory,
			mockExportFn.Execute,
		)

		err := migrator.Migrate(context.Background(), nil, "test-namespace", provisioning.MigrateJobOptions{
			History: true,
		}, progress)
		require.Error(t, err)
		require.EqualError(t, err, "get signer: signer factory error")

		mockParserFactory.AssertExpectations(t)
		mockRepoResourcesFactory.AssertExpectations(t)
		mockSignerFactory.AssertExpectations(t)
		mockClientFactory.AssertExpectations(t)
		progress.AssertExpectations(t)
		mockExportFn.AssertExpectations(t)
	})
	t.Run("should fail when folder export fails", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		mockParserFactory := resources.NewMockParserFactory(t)
		mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
			Return(mockParser, nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
		mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
			Return(mockRepoResources, nil)

		mockSigner := signature.NewMockSigner(t)
		mockSignerFactory := signature.NewMockSignerFactory(t)
		mockSignerFactory.On("New", mock.Anything, signature.SignOptions{
			Namespace: "test-namespace",
			History:   false,
		}).Return(mockSigner, nil)

		mockClients := resources.NewMockResourceClients(t)
		mockClientFactory := resources.NewMockClientFactory(t)
		mockClientFactory.On("Clients", mock.Anything, "test-namespace").
			Return(mockClients, nil)

		mockExportFn := export.NewMockExportFn(t)
		mockExportFn.On("Execute", mock.Anything, mock.Anything, provisioning.ExportJobOptions{}, mockClients, mockRepoResources, mock.Anything).
			Return(fmt.Errorf("export error"))

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, "migrate folders from SQL").Return()

		migrator := NewLegacyResourcesMigrator(
			mockRepoResourcesFactory,
			mockParserFactory,
			nil,
			mockSignerFactory,
			mockClientFactory,
			mockExportFn.Execute,
		)
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
				Name:      "test-repo",
			},
		})

		err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{}, progress)
		require.Error(t, err)
		require.Contains(t, err.Error(), "migrate folders from SQL: export error")

		mockParserFactory.AssertExpectations(t)
		mockRepoResourcesFactory.AssertExpectations(t)
		mockSignerFactory.AssertExpectations(t)
		mockClientFactory.AssertExpectations(t)
		mockExportFn.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should successfully migrate all resources", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		mockParserFactory := resources.NewMockParserFactory(t)
		mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
			Return(mockParser, nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
		mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
			Return(mockRepoResources, nil)

		mockSigner := signature.NewMockSigner(t)
		mockSignerFactory := signature.NewMockSignerFactory(t)
		mockSignerFactory.On("New", mock.Anything, signature.SignOptions{
			Namespace: "test-namespace",
			History:   true,
		}).Return(mockSigner, nil)

		mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
		// Mock CountResources for the count phase
		mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return opts.OnlyCount && opts.Namespace == "test-namespace"
		})).Return(&resourcepb.BulkResponse{}, nil).Once()
		// Mock MigrateDashboards for the actual migration phase (dashboards resource)
		mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return !opts.OnlyCount && opts.Namespace == "test-namespace"
		}), mock.Anything).Return(&legacy.BlobStoreInfo{
			Count: 10,
			Size:  5,
		}, nil).Once()

		mockClients := resources.NewMockResourceClients(t)
		mockClientFactory := resources.NewMockClientFactory(t)
		mockClientFactory.On("Clients", mock.Anything, "test-namespace").
			Return(mockClients, nil)
		mockExportFn := export.NewMockExportFn(t)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, "migrate folders from SQL").Return()
		progress.On("SetMessage", mock.Anything, "migrate resources from SQL").Return()
		progress.On("SetMessage", mock.Anything, "migrate dashboards resource").Return()

		migrator := NewLegacyResourcesMigrator(
			mockRepoResourcesFactory,
			mockParserFactory,
			mockDashboardAccess,
			mockSignerFactory,
			mockClientFactory,
			mockExportFn.Execute,
		)

		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
				Name:      "test-repo",
			},
		})
		mockExportFn.On("Execute", mock.Anything, mock.Anything, provisioning.ExportJobOptions{}, mockClients, mockRepoResources, mock.Anything).
			Return(nil)

		err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{
			History: true,
		}, progress)
		require.NoError(t, err)

		mockParserFactory.AssertExpectations(t)
		mockRepoResourcesFactory.AssertExpectations(t)
		mockDashboardAccess.AssertExpectations(t)
		mockClientFactory.AssertExpectations(t)
		mockExportFn.AssertExpectations(t)
		progress.AssertExpectations(t)
		mockClients.AssertExpectations(t)
	})
}

func TestLegacyResourceResourceMigrator_Write(t *testing.T) {
	t.Run("should fail when parser fails", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		mockParser.On("Parse", mock.Anything, mock.Anything).
			Return(nil, errors.New("parser error"))

		progress := jobs.NewMockJobProgressRecorder(t)

		migrator := newLegacyResourceMigrator(
			nil,
			nil,
			mockParser,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			signature.NewGrafanaSigner(),
		)

		err := migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
		require.Error(t, err)
		require.Contains(t, err.Error(), "unmarshal unstructured")

		mockParser.AssertExpectations(t)
	})

	t.Run("records error when create resource file fails", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		obj := &unstructured.Unstructured{
			Object: map[string]any{
				"metadata": map[string]any{
					"name": "test",
				},
			},
		}
		meta, err := utils.MetaAccessor(obj)
		require.NoError(t, err)

		mockParser.On("Parse", mock.Anything, mock.Anything).
			Return(&resources.ParsedResource{
				Meta: meta,
				Obj:  obj,
			}, nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.Anything, mock.Anything).
			Return("", errors.New("create file error"))

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
			return result.Action == repository.FileActionCreated &&
				result.Name == "test" &&
				result.Error != nil &&
				result.Error.Error() == "writing resource test.grafana.app/tests test to file : create file error"
		})).Return()
		progress.On("TooManyErrors").Return(nil)

		migrator := newLegacyResourceMigrator(
			nil,
			nil,
			mockParser,
			mockRepoResources,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			signature.NewGrafanaSigner(),
		)

		err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
		require.NoError(t, err) // Error is recorded but not returned

		mockParser.AssertExpectations(t)
		mockRepoResources.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should fail when signer fails", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		obj := &unstructured.Unstructured{
			Object: map[string]any{
				"metadata": map[string]any{
					"name": "test",
				},
			},
		}
		meta, err := utils.MetaAccessor(obj)
		require.NoError(t, err)

		mockParser.On("Parse", mock.Anything, mock.Anything).
			Return(&resources.ParsedResource{
				Meta: meta,
				Obj:  obj,
			}, nil)

		mockSigner := signature.NewMockSigner(t)
		mockSigner.On("Sign", mock.Anything, meta).
			Return(nil, errors.New("signing error"))

		progress := jobs.NewMockJobProgressRecorder(t)
		migrator := newLegacyResourceMigrator(
			nil,
			nil,
			mockParser,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			mockSigner,
		)

		err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
		require.Error(t, err)
		require.EqualError(t, err, "add author signature: signing error")

		mockParser.AssertExpectations(t)
		mockSigner.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should successfully add author signature", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		obj := &unstructured.Unstructured{
			Object: map[string]any{
				"metadata": map[string]any{
					"name": "test",
				},
			},
		}
		meta, err := utils.MetaAccessor(obj)
		require.NoError(t, err)

		mockParser.On("Parse", mock.Anything, mock.Anything).
			Return(&resources.ParsedResource{
				Meta: meta,
				Obj:  obj,
			}, nil)

		mockSigner := signature.NewMockSigner(t)
		signedCtx := repository.WithAuthorSignature(context.Background(), repository.CommitSignature{
			Name:  "test-user",
			Email: "test@example.com",
		})
		mockSigner.On("Sign", mock.Anything, meta).
			Return(signedCtx, nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResources.On("WriteResourceFileFromObject", signedCtx, mock.Anything, mock.Anything).
			Return("test/path", nil)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
			return result.Action == repository.FileActionCreated &&
				result.Name == "test" &&
				result.Error == nil &&
				result.Path == "test/path"
		})).Return()
		progress.On("TooManyErrors").Return(nil)

		migrator := newLegacyResourceMigrator(
			nil,
			nil,
			mockParser,
			mockRepoResources,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			mockSigner,
		)

		err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
		require.NoError(t, err)

		mockParser.AssertExpectations(t)
		mockSigner.AssertExpectations(t)
		mockRepoResources.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should maintain history", func(t *testing.T) {
		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("Record", mock.Anything, mock.Anything).Return()
		progress.On("TooManyErrors").Return(nil)

		mockParser := resources.NewMockParser(t)
		obj := &unstructured.Unstructured{
			Object: map[string]any{
				"metadata": map[string]any{
					"name": "test",
				},
			},
		}
		meta, _ := utils.MetaAccessor(obj)
		mockParser.On("Parse", mock.Anything, mock.Anything).
			Return(&resources.ParsedResource{
				Meta: meta,
				Obj:  obj,
			}, nil)

		mockRepo := repository.NewMockRepository(t)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		writeResourceFileFromObject := mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.Anything, mock.Anything)

		migrator := newLegacyResourceMigrator(
			mockRepo,
			nil,
			mockParser,
			mockRepoResources,
			progress,
			provisioning.MigrateJobOptions{
				History: true,
			},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			signature.NewGrafanaSigner(),
		)

		writeResourceFileFromObject.Return("aaaa.json", nil)
		err := migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte(""))
		require.NoError(t, err)
		require.Equal(t, "aaaa.json", migrator.history["test"], "kept track of the old files")

		// Change the result file name
		writeResourceFileFromObject.Return("bbbb.json", nil)
		mockRepo.On("Delete", mock.Anything, "aaaa.json", "", "moved to: bbbb.json").
			Return(nil).Once()

		err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte(""))
		require.NoError(t, err)
		require.Equal(t, "bbbb.json", migrator.history["test"], "kept track of the old files")

		mockParser.AssertExpectations(t)
		mockRepoResources.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should successfully write resource", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		obj := &unstructured.Unstructured{
			Object: map[string]any{
				"metadata": map[string]any{
					"name": "test",
				},
			},
		}
		meta, err := utils.MetaAccessor(obj)
		require.NoError(t, err)
		meta.SetManagerProperties(utils.ManagerProperties{
			Kind:        utils.ManagerKindRepo,
			Identity:    "test",
			AllowsEdits: true,
			Suspended:   false,
		})
		meta.SetSourceProperties(utils.SourceProperties{
			Path:            "test",
			Checksum:        "test",
			TimestampMillis: 1234567890,
		})

		mockParser.On("Parse", mock.Anything, mock.MatchedBy(func(info *repository.FileInfo) bool {
			return info != nil && info.Path == "" && string(info.Data) == "test"
		})).
			Return(&resources.ParsedResource{
				Meta: meta,
				Obj:  obj,
			}, nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.MatchedBy(func(obj *unstructured.Unstructured) bool {
			if obj == nil {
				return false
			}
			if obj.GetName() != "test" {
				return false
			}

			meta, err := utils.MetaAccessor(obj)
			require.NoError(t, err)
			managerProps, _ := meta.GetManagerProperties()
			sourceProps, _ := meta.GetSourceProperties()

			return assert.Zero(t, sourceProps) && assert.Zero(t, managerProps)
		}), resources.WriteOptions{
			Path: "",
			Ref:  "",
		}).
			Return("test/path", nil)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
			return result.Action == repository.FileActionCreated &&
				result.Name == "test" &&
				result.Error == nil &&
				result.Kind == "" && // empty kind
				result.Group == "test.grafana.app" &&
				result.Path == "test/path"
		})).Return()
		progress.On("TooManyErrors").Return(nil)

		migrator := newLegacyResourceMigrator(
			nil,
			nil,
			mockParser,
			mockRepoResources,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			signature.NewGrafanaSigner(),
		)

		err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
		require.NoError(t, err)

		mockParser.AssertExpectations(t)
		mockRepoResources.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should fail when too many errors", func(t *testing.T) {
		mockParser := resources.NewMockParser(t)
		obj := &unstructured.Unstructured{
			Object: map[string]any{
				"metadata": map[string]any{
					"name": "test",
				},
			},
		}
		meta, err := utils.MetaAccessor(obj)
		require.NoError(t, err)

		mockParser.On("Parse", mock.Anything, mock.Anything).
			Return(&resources.ParsedResource{
				Meta: meta,
				Obj:  obj,
			}, nil)

		mockRepoResources := resources.NewMockRepositoryResources(t)
		mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.Anything, resources.WriteOptions{}).
			Return("test/path", nil)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("Record", mock.Anything, mock.Anything).Return()
		progress.On("TooManyErrors").Return(errors.New("too many errors"))

		migrator := newLegacyResourceMigrator(
			nil,
			nil,
			mockParser,
			mockRepoResources,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			signature.NewGrafanaSigner(),
		)

		err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
		require.EqualError(t, err, "too many errors")

		mockParser.AssertExpectations(t)
		mockRepoResources.AssertExpectations(t)
		progress.AssertExpectations(t)
	})
}

func TestLegacyResourceResourceMigrator_Migrate(t *testing.T) {
	t.Run("should fail when legacy migrate count fails", func(t *testing.T) {
		mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
		mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return opts.OnlyCount && opts.Namespace == "test-namespace"
		})).Return(&resourcepb.BulkResponse{}, errors.New("count error"))

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, mock.Anything).Return()

		migrator := newLegacyResourceMigrator(
			nil,
			mockDashboardAccess,
			nil,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
			signature.NewGrafanaSigner(),
		)

		err := migrator.Migrate(context.Background())
		require.Error(t, err)
		require.Contains(t, err.Error(), "unable to count legacy items")

		mockDashboardAccess.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should fail when legacy migrate write fails", func(t *testing.T) {
		mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
		mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return opts.OnlyCount && opts.Namespace == "test-namespace"
		})).Return(&resourcepb.BulkResponse{}, nil).Once() // Count phase
		// For test-resources GroupResource, we don't know which method it will call, but since it's not dashboards/folders/librarypanels,
		// the Migrate will fail trying to map the resource type. Let's make it dashboards for this test.
		mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return !opts.OnlyCount && opts.Namespace == "test-namespace"
		}), mock.Anything).Return(nil, errors.New("write error")).Once() // Write phase

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, mock.Anything).Return()

		migrator := newLegacyResourceMigrator(
			nil,
			mockDashboardAccess,
			nil,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
			signature.NewGrafanaSigner(),
		)

		err := migrator.Migrate(context.Background())
		require.Error(t, err)
		require.Contains(t, err.Error(), "migrate legacy dashboards: write error")

		mockDashboardAccess.AssertExpectations(t)
		progress.AssertExpectations(t)
	})

	t.Run("should successfully migrate resource", func(t *testing.T) {
		mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
		mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return opts.OnlyCount && opts.Namespace == "test-namespace"
		})).Return(&resourcepb.BulkResponse{}, nil).Once() // Count phase
		mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return !opts.OnlyCount && opts.Namespace == "test-namespace"
		}), mock.Anything).Return(&legacy.BlobStoreInfo{}, nil).Once() // Write phase

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, mock.Anything).Return()

		migrator := newLegacyResourceMigrator(
			nil,
			mockDashboardAccess,
			nil,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
			signature.NewGrafanaSigner(),
		)

		err := migrator.Migrate(context.Background())
		require.NoError(t, err)

		mockDashboardAccess.AssertExpectations(t)
		progress.AssertExpectations(t)
	})
	t.Run("should set total to history if history is greater than count", func(t *testing.T) {
		mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
		mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return opts.OnlyCount && opts.Namespace == "test-namespace"
		})).Return(&resourcepb.BulkResponse{
			Summary: []*resourcepb.BulkResponse_Summary{
				{
					Group:    "dashboard.grafana.app",
					Resource: "dashboards",
					Count:    1,
					History:  100,
				},
			},
		}, nil).Once() // Count phase
		mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return !opts.OnlyCount && opts.Namespace == "test-namespace"
		}), mock.Anything).Return(&legacy.BlobStoreInfo{}, nil).Once() // Write phase

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, mock.Anything).Return()
		progress.On("SetTotal", mock.Anything, 100).Return()

		migrator := newLegacyResourceMigrator(
			nil,
			mockDashboardAccess,
			nil,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
			signature.NewGrafanaSigner(),
		)

		err := migrator.Migrate(context.Background())
		require.NoError(t, err)

		mockDashboardAccess.AssertExpectations(t)
		progress.AssertExpectations(t)
	})
	t.Run("should set total to count if history is less than count", func(t *testing.T) {
		mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
		mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return opts.OnlyCount && opts.Namespace == "test-namespace"
		})).Return(&resourcepb.BulkResponse{
			Summary: []*resourcepb.BulkResponse_Summary{
				{
					Group:    "dashboard.grafana.app",
					Resource: "dashboards",
					Count:    200,
					History:  1,
				},
			},
		}, nil).Once() // Count phase
		mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
			return !opts.OnlyCount && opts.Namespace == "test-namespace"
		}), mock.Anything).Return(&legacy.BlobStoreInfo{}, nil).Once() // Write phase

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("SetMessage", mock.Anything, mock.Anything).Return()
		progress.On("SetTotal", mock.Anything, 200).Return()
		signer := signature.NewMockSigner(t)

		migrator := newLegacyResourceMigrator(
			nil,
			mockDashboardAccess,
			nil,
			nil,
			progress,
			provisioning.MigrateJobOptions{},
			"test-namespace",
			schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
			signer,
		)

		err := migrator.Migrate(context.Background())
		require.NoError(t, err)

		mockDashboardAccess.AssertExpectations(t)
		progress.AssertExpectations(t)
	})
}

func TestLegacyResourceResourceMigrator_Close(t *testing.T) {
	t.Run("should return nil error", func(t *testing.T) {
		migrator := &legacyResourceResourceMigrator{}
		err := migrator.Close()
		require.NoError(t, err)
	})
}

func TestLegacyResourceResourceMigrator_CloseWithResults(t *testing.T) {
	t.Run("should return empty bulk response and nil error", func(t *testing.T) {
		migrator := &legacyResourceResourceMigrator{}
		response, err := migrator.CloseWithResults()

		require.NoError(t, err)
		require.NotNil(t, response)
		require.IsType(t, &resourcepb.BulkResponse{}, response)
		require.Empty(t, response.Summary)
	})
}
|
||||
|
|
@ -1,440 +0,0 @@
|
|||
package migrate
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
mock "github.com/stretchr/testify/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
|
||||
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
|
||||
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
|
||||
)
|
||||
|
||||
func TestWrapWithStageFn(t *testing.T) {
|
||||
t.Run("should return error when repository is not a ReaderWriter", func(t *testing.T) {
|
||||
// Setup
|
||||
ctx := context.Background()
|
||||
// Create the wrapper function that matches WrapWithCloneFn signature
|
||||
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
|
||||
// pass a reader to function call
|
||||
repo := repository.NewMockReader(t)
|
||||
return fn(repo, true)
|
||||
}
|
||||
|
||||
legacyFoldersMigrator := NewLegacyMigrator(
|
||||
NewMockLegacyResourcesMigrator(t),
|
||||
NewMockStorageSwapper(t),
|
||||
jobs.NewMockWorker(t),
|
||||
wrapFn,
|
||||
)
|
||||
|
||||
progress := jobs.NewMockJobProgressRecorder(t)
|
||||
progress.On("StrictMaxErrors", 1).Return()
|
||||
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
|
||||
|
||||
// Execute
|
||||
repo := repository.NewMockRepository(t)
|
||||
repo.On("Config").Return(&provisioning.Repository{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "test-namespace",
|
||||
},
|
||||
})
|
||||
err := legacyFoldersMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
|
||||
// Assert
|
||||
require.Error(t, err)
|
||||
require.Contains(t, err.Error(), "migration job submitted targeting repository that is not a ReaderWriter")
|
||||
})
|
||||
}
|
||||
func TestWrapWithCloneFn_Error(t *testing.T) {
|
||||
t.Run("should return error when wrapFn fails", func(t *testing.T) {
|
||||
// Setup
|
||||
ctx := context.Background()
|
||||
expectedErr := errors.New("clone failed")
|
||||
|
||||
// Create the wrapper function that returns an error
|
||||
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
|
||||
return expectedErr
|
||||
}
|
||||
|
||||
legacyMigrator := NewLegacyMigrator(
|
||||
NewMockLegacyResourcesMigrator(t),
|
||||
NewMockStorageSwapper(t),
|
||||
jobs.NewMockWorker(t),
|
||||
wrapFn,
|
||||
)
|
||||
|
||||
progress := jobs.NewMockJobProgressRecorder(t)
|
||||
progress.On("StrictMaxErrors", 1).Return()
|
||||
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
|
||||
// Execute
|
||||
repo := repository.NewMockRepository(t)
|
||||
repo.On("Config").Return(&provisioning.Repository{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "test-namespace",
|
||||
},
|
||||
})
|
||||
|
||||
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
|
||||
|
||||
// Assert
|
||||
require.Error(t, err)
|
||||
require.Contains(t, err.Error(), "migrate from SQL: clone failed")
|
||||
})
|
||||
}

func TestLegacyMigrator_MigrateFails(t *testing.T) {
	t.Run("should return error when legacyMigrator.Migrate fails", func(t *testing.T) {
		// Setup
		ctx := context.Background()
		expectedErr := errors.New("migration failed")

		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
			Return(expectedErr)

		mockStorageSwapper := NewMockStorageSwapper(t)
		mockWorker := jobs.NewMockWorker(t)

		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return fn(rw, true)
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()

		// Execute
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)

		// Assert
		require.Error(t, err)
		require.Contains(t, err.Error(), "migrate from SQL: migration failed")

		// Storage swapper should not be called when migration fails
		mockStorageSwapper.AssertNotCalled(t, "WipeUnifiedAndSetMigratedFlag")
	})
}

func TestLegacyMigrator_ResetUnifiedStorageFails(t *testing.T) {
	t.Run("should return error when storage reset fails", func(t *testing.T) {
		// Setup
		ctx := context.Background()
		expectedErr := errors.New("reset failed")

		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
			Return(nil)

		mockStorageSwapper := NewMockStorageSwapper(t)
		mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
			Return(expectedErr)

		mockWorker := jobs.NewMockWorker(t)

		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return fn(rw, true)
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
		progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()

		// Execute
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)

		// Assert
		require.Error(t, err)
		require.Contains(t, err.Error(), "unable to reset unified storage")

		// Sync worker should not be called when reset fails
		mockWorker.AssertNotCalled(t, "Process")
	})
}

func TestLegacyMigrator_SyncFails(t *testing.T) {
	t.Run("should revert storage settings when sync fails", func(t *testing.T) {
		// Setup
		ctx := context.Background()
		expectedErr := errors.New("sync failed")

		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
			Return(nil)

		mockStorageSwapper := NewMockStorageSwapper(t)
		mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
			Return(nil)
		mockStorageSwapper.On("StopReadingUnifiedStorage", mock.Anything).
			Return(nil)

		mockWorker := jobs.NewMockWorker(t)
		mockWorker.On("Process", mock.Anything, mock.Anything, mock.MatchedBy(func(job provisioning.Job) bool {
			return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
		}), mock.Anything).Return(expectedErr)

		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return fn(rw, true)
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
		progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
		progress.On("ResetResults").Return()
		progress.On("SetMessage", mock.Anything, "pulling resources").Return()
		progress.On("SetMessage", mock.Anything, "error importing resources, reverting").Return()

		// Execute
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)

		// Assert
		require.Error(t, err)
		require.Contains(t, err.Error(), "sync failed")

		// Verify storage settings were reverted
		mockStorageSwapper.AssertCalled(t, "StopReadingUnifiedStorage", mock.Anything)
	})

	t.Run("should handle revert failure after sync failure", func(t *testing.T) {
		// Setup
		ctx := context.Background()
		syncErr := errors.New("sync failed")
		revertErr := errors.New("revert failed")

		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
			Return(nil)

		mockStorageSwapper := NewMockStorageSwapper(t)
		mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
			Return(nil)
		mockStorageSwapper.On("StopReadingUnifiedStorage", mock.Anything).
			Return(revertErr)

		mockWorker := jobs.NewMockWorker(t)
		mockWorker.On("Process", mock.Anything, mock.Anything, mock.MatchedBy(func(job provisioning.Job) bool {
			return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
		}), mock.Anything).Return(syncErr)

		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return fn(rw, true)
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
		progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
		progress.On("ResetResults").Return()
		progress.On("SetMessage", mock.Anything, "pulling resources").Return()
		progress.On("SetMessage", mock.Anything, "error importing resources, reverting").Return()

		// Execute
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)

		// Assert
		require.Error(t, err)
		require.Contains(t, err.Error(), "sync failed")

		// Verify both errors occurred
		mockStorageSwapper.AssertCalled(t, "StopReadingUnifiedStorage", mock.Anything)
	})
}

func TestLegacyMigrator_Success(t *testing.T) {
	t.Run("should complete migration successfully", func(t *testing.T) {
		// Setup
		ctx := context.Background()

		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
			Return(nil)

		mockStorageSwapper := NewMockStorageSwapper(t)
		mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
			Return(nil)

		mockWorker := jobs.NewMockWorker(t)
		mockWorker.On("Process", mock.Anything, mock.Anything, mock.MatchedBy(func(job provisioning.Job) bool {
			return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
		}), mock.Anything).Return(nil)

		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return fn(rw, true)
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
		progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
		progress.On("ResetResults").Return()
		progress.On("SetMessage", mock.Anything, "pulling resources").Return()

		// Execute
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)

		// Assert
		require.NoError(t, err)

		// Verify all expected operations were called in order
		mockLegacyMigrator.AssertCalled(t, "Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything)
		mockStorageSwapper.AssertCalled(t, "WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace")
		mockWorker.AssertCalled(t, "Process", mock.Anything, mock.Anything, mock.Anything, mock.Anything)
	})
}

func TestLegacyMigrator_BeforeFnExecution(t *testing.T) {
	t.Run("should execute beforeFn functions", func(t *testing.T) {
		// Setup
		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockStorageSwapper := NewMockStorageSwapper(t)
		mockWorker := jobs.NewMockWorker(t)
		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return errors.New("abort test here")
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		// No progress messages expected in current staging implementation
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()

		// Execute
		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(context.Background(), repo, provisioning.MigrateJobOptions{}, progress)
		require.EqualError(t, err, "migrate from SQL: abort test here")
	})
}

func TestLegacyMigrator_ProgressScanner(t *testing.T) {
	t.Run("should update progress with scanner", func(t *testing.T) {
		mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
		mockStorageSwapper := NewMockStorageSwapper(t)
		mockWorker := jobs.NewMockWorker(t)

		// Create a wrapper function that calls the provided function
		wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
			return errors.New("abort test here")
		}

		legacyMigrator := NewLegacyMigrator(
			mockLegacyMigrator,
			mockStorageSwapper,
			mockWorker,
			wrapFn,
		)

		progress := jobs.NewMockJobProgressRecorder(t)
		// No progress messages expected in current staging implementation
		progress.On("StrictMaxErrors", 1).Return()
		progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()

		repo := repository.NewMockRepository(t)
		repo.On("Config").Return(&provisioning.Repository{
			ObjectMeta: metav1.ObjectMeta{
				Namespace: "test-namespace",
			},
		})

		err := legacyMigrator.Migrate(context.Background(), repo, provisioning.MigrateJobOptions{}, progress)
		require.EqualError(t, err, "migrate from SQL: abort test here")

		require.Eventually(t, func() bool {
			// No progress message calls expected in current staging implementation
			return progress.AssertExpectations(t)
		}, time.Second, 10*time.Millisecond)
	})
}

@@ -1,430 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.

package migrate

import (
	context "context"

	mock "github.com/stretchr/testify/mock"
	metadata "google.golang.org/grpc/metadata"

	"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)

// BulkStore_BulkProcessClient is an autogenerated mock type for the BulkStore_BulkProcessClient type
type BulkStore_BulkProcessClient struct {
	mock.Mock
}

type BulkStore_BulkProcessClient_Expecter struct {
	mock *mock.Mock
}

func (_m *BulkStore_BulkProcessClient) EXPECT() *BulkStore_BulkProcessClient_Expecter {
	return &BulkStore_BulkProcessClient_Expecter{mock: &_m.Mock}
}

// CloseAndRecv provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) CloseAndRecv() (*resourcepb.BulkResponse, error) {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for CloseAndRecv")
	}

	var r0 *resourcepb.BulkResponse
	var r1 error
	if rf, ok := ret.Get(0).(func() (*resourcepb.BulkResponse, error)); ok {
		return rf()
	}
	if rf, ok := ret.Get(0).(func() *resourcepb.BulkResponse); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*resourcepb.BulkResponse)
		}
	}

	if rf, ok := ret.Get(1).(func() error); ok {
		r1 = rf()
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// BulkStore_BulkProcessClient_CloseAndRecv_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'CloseAndRecv'
type BulkStore_BulkProcessClient_CloseAndRecv_Call struct {
	*mock.Call
}

// CloseAndRecv is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) CloseAndRecv() *BulkStore_BulkProcessClient_CloseAndRecv_Call {
	return &BulkStore_BulkProcessClient_CloseAndRecv_Call{Call: _e.mock.On("CloseAndRecv")}
}

func (_c *BulkStore_BulkProcessClient_CloseAndRecv_Call) Run(run func()) *BulkStore_BulkProcessClient_CloseAndRecv_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_CloseAndRecv_Call) Return(_a0 *resourcepb.BulkResponse, _a1 error) *BulkStore_BulkProcessClient_CloseAndRecv_Call {
	_c.Call.Return(_a0, _a1)
	return _c
}

func (_c *BulkStore_BulkProcessClient_CloseAndRecv_Call) RunAndReturn(run func() (*resourcepb.BulkResponse, error)) *BulkStore_BulkProcessClient_CloseAndRecv_Call {
	_c.Call.Return(run)
	return _c
}

// CloseSend provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) CloseSend() error {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for CloseSend")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func() error); ok {
		r0 = rf()
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// BulkStore_BulkProcessClient_CloseSend_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'CloseSend'
type BulkStore_BulkProcessClient_CloseSend_Call struct {
	*mock.Call
}

// CloseSend is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) CloseSend() *BulkStore_BulkProcessClient_CloseSend_Call {
	return &BulkStore_BulkProcessClient_CloseSend_Call{Call: _e.mock.On("CloseSend")}
}

func (_c *BulkStore_BulkProcessClient_CloseSend_Call) Run(run func()) *BulkStore_BulkProcessClient_CloseSend_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_CloseSend_Call) Return(_a0 error) *BulkStore_BulkProcessClient_CloseSend_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *BulkStore_BulkProcessClient_CloseSend_Call) RunAndReturn(run func() error) *BulkStore_BulkProcessClient_CloseSend_Call {
	_c.Call.Return(run)
	return _c
}

// Context provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) Context() context.Context {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for Context")
	}

	var r0 context.Context
	if rf, ok := ret.Get(0).(func() context.Context); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	return r0
}

// BulkStore_BulkProcessClient_Context_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Context'
type BulkStore_BulkProcessClient_Context_Call struct {
	*mock.Call
}

// Context is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) Context() *BulkStore_BulkProcessClient_Context_Call {
	return &BulkStore_BulkProcessClient_Context_Call{Call: _e.mock.On("Context")}
}

func (_c *BulkStore_BulkProcessClient_Context_Call) Run(run func()) *BulkStore_BulkProcessClient_Context_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_Context_Call) Return(_a0 context.Context) *BulkStore_BulkProcessClient_Context_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *BulkStore_BulkProcessClient_Context_Call) RunAndReturn(run func() context.Context) *BulkStore_BulkProcessClient_Context_Call {
	_c.Call.Return(run)
	return _c
}

// Header provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) Header() (metadata.MD, error) {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for Header")
	}

	var r0 metadata.MD
	var r1 error
	if rf, ok := ret.Get(0).(func() (metadata.MD, error)); ok {
		return rf()
	}
	if rf, ok := ret.Get(0).(func() metadata.MD); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(metadata.MD)
		}
	}

	if rf, ok := ret.Get(1).(func() error); ok {
		r1 = rf()
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// BulkStore_BulkProcessClient_Header_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Header'
type BulkStore_BulkProcessClient_Header_Call struct {
	*mock.Call
}

// Header is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) Header() *BulkStore_BulkProcessClient_Header_Call {
	return &BulkStore_BulkProcessClient_Header_Call{Call: _e.mock.On("Header")}
}

func (_c *BulkStore_BulkProcessClient_Header_Call) Run(run func()) *BulkStore_BulkProcessClient_Header_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_Header_Call) Return(_a0 metadata.MD, _a1 error) *BulkStore_BulkProcessClient_Header_Call {
	_c.Call.Return(_a0, _a1)
	return _c
}

func (_c *BulkStore_BulkProcessClient_Header_Call) RunAndReturn(run func() (metadata.MD, error)) *BulkStore_BulkProcessClient_Header_Call {
	_c.Call.Return(run)
	return _c
}

// RecvMsg provides a mock function with given fields: m
func (_m *BulkStore_BulkProcessClient) RecvMsg(m interface{}) error {
	ret := _m.Called(m)

	if len(ret) == 0 {
		panic("no return value specified for RecvMsg")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(interface{}) error); ok {
		r0 = rf(m)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// BulkStore_BulkProcessClient_RecvMsg_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'RecvMsg'
type BulkStore_BulkProcessClient_RecvMsg_Call struct {
	*mock.Call
}

// RecvMsg is a helper method to define mock.On call
// - m interface{}
func (_e *BulkStore_BulkProcessClient_Expecter) RecvMsg(m interface{}) *BulkStore_BulkProcessClient_RecvMsg_Call {
	return &BulkStore_BulkProcessClient_RecvMsg_Call{Call: _e.mock.On("RecvMsg", m)}
}

func (_c *BulkStore_BulkProcessClient_RecvMsg_Call) Run(run func(m interface{})) *BulkStore_BulkProcessClient_RecvMsg_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(interface{}))
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_RecvMsg_Call) Return(_a0 error) *BulkStore_BulkProcessClient_RecvMsg_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *BulkStore_BulkProcessClient_RecvMsg_Call) RunAndReturn(run func(interface{}) error) *BulkStore_BulkProcessClient_RecvMsg_Call {
	_c.Call.Return(run)
	return _c
}

// Send provides a mock function with given fields: _a0
func (_m *BulkStore_BulkProcessClient) Send(_a0 *resourcepb.BulkRequest) error {
	ret := _m.Called(_a0)

	if len(ret) == 0 {
		panic("no return value specified for Send")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(*resourcepb.BulkRequest) error); ok {
		r0 = rf(_a0)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// BulkStore_BulkProcessClient_Send_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Send'
type BulkStore_BulkProcessClient_Send_Call struct {
	*mock.Call
}

// Send is a helper method to define mock.On call
// - _a0 *resource.BulkRequest
func (_e *BulkStore_BulkProcessClient_Expecter) Send(_a0 interface{}) *BulkStore_BulkProcessClient_Send_Call {
	return &BulkStore_BulkProcessClient_Send_Call{Call: _e.mock.On("Send", _a0)}
}

func (_c *BulkStore_BulkProcessClient_Send_Call) Run(run func(_a0 *resourcepb.BulkRequest)) *BulkStore_BulkProcessClient_Send_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(*resourcepb.BulkRequest))
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_Send_Call) Return(_a0 error) *BulkStore_BulkProcessClient_Send_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *BulkStore_BulkProcessClient_Send_Call) RunAndReturn(run func(*resourcepb.BulkRequest) error) *BulkStore_BulkProcessClient_Send_Call {
	_c.Call.Return(run)
	return _c
}

// SendMsg provides a mock function with given fields: m
func (_m *BulkStore_BulkProcessClient) SendMsg(m interface{}) error {
	ret := _m.Called(m)

	if len(ret) == 0 {
		panic("no return value specified for SendMsg")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(interface{}) error); ok {
		r0 = rf(m)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// BulkStore_BulkProcessClient_SendMsg_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SendMsg'
type BulkStore_BulkProcessClient_SendMsg_Call struct {
	*mock.Call
}

// SendMsg is a helper method to define mock.On call
// - m interface{}
func (_e *BulkStore_BulkProcessClient_Expecter) SendMsg(m interface{}) *BulkStore_BulkProcessClient_SendMsg_Call {
	return &BulkStore_BulkProcessClient_SendMsg_Call{Call: _e.mock.On("SendMsg", m)}
}

func (_c *BulkStore_BulkProcessClient_SendMsg_Call) Run(run func(m interface{})) *BulkStore_BulkProcessClient_SendMsg_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(interface{}))
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_SendMsg_Call) Return(_a0 error) *BulkStore_BulkProcessClient_SendMsg_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *BulkStore_BulkProcessClient_SendMsg_Call) RunAndReturn(run func(interface{}) error) *BulkStore_BulkProcessClient_SendMsg_Call {
	_c.Call.Return(run)
	return _c
}

// Trailer provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) Trailer() metadata.MD {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for Trailer")
	}

	var r0 metadata.MD
	if rf, ok := ret.Get(0).(func() metadata.MD); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(metadata.MD)
		}
	}

	return r0
}

// BulkStore_BulkProcessClient_Trailer_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Trailer'
type BulkStore_BulkProcessClient_Trailer_Call struct {
	*mock.Call
}

// Trailer is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) Trailer() *BulkStore_BulkProcessClient_Trailer_Call {
	return &BulkStore_BulkProcessClient_Trailer_Call{Call: _e.mock.On("Trailer")}
}

func (_c *BulkStore_BulkProcessClient_Trailer_Call) Run(run func()) *BulkStore_BulkProcessClient_Trailer_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *BulkStore_BulkProcessClient_Trailer_Call) Return(_a0 metadata.MD) *BulkStore_BulkProcessClient_Trailer_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *BulkStore_BulkProcessClient_Trailer_Call) RunAndReturn(run func() metadata.MD) *BulkStore_BulkProcessClient_Trailer_Call {
	_c.Call.Return(run)
	return _c
}

// NewBulkStore_BulkProcessClient creates a new instance of BulkStore_BulkProcessClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewBulkStore_BulkProcessClient(t interface {
	mock.TestingT
	Cleanup(func())
}) *BulkStore_BulkProcessClient {
	mock := &BulkStore_BulkProcessClient{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}

@@ -1,113 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.

package migrate

import (
	context "context"

	grpc "google.golang.org/grpc"

	mock "github.com/stretchr/testify/mock"

	resourcepb "github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)

// MockBulkStoreClient is an autogenerated mock type for the BulkStoreClient type
type MockBulkStoreClient struct {
	mock.Mock
}

type MockBulkStoreClient_Expecter struct {
	mock *mock.Mock
}

func (_m *MockBulkStoreClient) EXPECT() *MockBulkStoreClient_Expecter {
	return &MockBulkStoreClient_Expecter{mock: &_m.Mock}
}

// BulkProcess provides a mock function with given fields: ctx, opts
func (_m *MockBulkStoreClient) BulkProcess(ctx context.Context, opts ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error) {
	_va := make([]interface{}, len(opts))
	for _i := range opts {
		_va[_i] = opts[_i]
	}
	var _ca []interface{}
	_ca = append(_ca, ctx)
	_ca = append(_ca, _va...)
	ret := _m.Called(_ca...)

	if len(ret) == 0 {
		panic("no return value specified for BulkProcess")
	}

	var r0 resourcepb.BulkStore_BulkProcessClient
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error)); ok {
		return rf(ctx, opts...)
	}
	if rf, ok := ret.Get(0).(func(context.Context, ...grpc.CallOption) resourcepb.BulkStore_BulkProcessClient); ok {
		r0 = rf(ctx, opts...)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(resourcepb.BulkStore_BulkProcessClient)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, ...grpc.CallOption) error); ok {
		r1 = rf(ctx, opts...)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// MockBulkStoreClient_BulkProcess_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'BulkProcess'
type MockBulkStoreClient_BulkProcess_Call struct {
	*mock.Call
}

// BulkProcess is a helper method to define mock.On call
// - ctx context.Context
// - opts ...grpc.CallOption
func (_e *MockBulkStoreClient_Expecter) BulkProcess(ctx interface{}, opts ...interface{}) *MockBulkStoreClient_BulkProcess_Call {
	return &MockBulkStoreClient_BulkProcess_Call{Call: _e.mock.On("BulkProcess",
		append([]interface{}{ctx}, opts...)...)}
}

func (_c *MockBulkStoreClient_BulkProcess_Call) Run(run func(ctx context.Context, opts ...grpc.CallOption)) *MockBulkStoreClient_BulkProcess_Call {
	_c.Call.Run(func(args mock.Arguments) {
		variadicArgs := make([]grpc.CallOption, len(args)-1)
		for i, a := range args[1:] {
			if a != nil {
				variadicArgs[i] = a.(grpc.CallOption)
			}
		}
		run(args[0].(context.Context), variadicArgs...)
	})
	return _c
}

func (_c *MockBulkStoreClient_BulkProcess_Call) Return(_a0 resourcepb.BulkStore_BulkProcessClient, _a1 error) *MockBulkStoreClient_BulkProcess_Call {
	_c.Call.Return(_a0, _a1)
	return _c
}

func (_c *MockBulkStoreClient_BulkProcess_Call) RunAndReturn(run func(context.Context, ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error)) *MockBulkStoreClient_BulkProcess_Call {
	_c.Call.Return(run)
	return _c
}

// NewMockBulkStoreClient creates a new instance of MockBulkStoreClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockBulkStoreClient(t interface {
	mock.TestingT
	Cleanup(func())
}) *MockBulkStoreClient {
	mock := &MockBulkStoreClient{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}
|
||||
|
|
@@ -1,91 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.

package migrate

import (
	context "context"

	jobs "github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
	mock "github.com/stretchr/testify/mock"

	repository "github.com/grafana/grafana/apps/provisioning/pkg/repository"

	v0alpha1 "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
)

// MockLegacyResourcesMigrator is an autogenerated mock type for the LegacyResourcesMigrator type
type MockLegacyResourcesMigrator struct {
	mock.Mock
}

type MockLegacyResourcesMigrator_Expecter struct {
	mock *mock.Mock
}

func (_m *MockLegacyResourcesMigrator) EXPECT() *MockLegacyResourcesMigrator_Expecter {
	return &MockLegacyResourcesMigrator_Expecter{mock: &_m.Mock}
}

// Migrate provides a mock function with given fields: ctx, rw, namespace, opts, progress
func (_m *MockLegacyResourcesMigrator) Migrate(ctx context.Context, rw repository.ReaderWriter, namespace string, opts v0alpha1.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
	ret := _m.Called(ctx, rw, namespace, opts, progress)

	if len(ret) == 0 {
		panic("no return value specified for Migrate")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context, repository.ReaderWriter, string, v0alpha1.MigrateJobOptions, jobs.JobProgressRecorder) error); ok {
		r0 = rf(ctx, rw, namespace, opts, progress)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// MockLegacyResourcesMigrator_Migrate_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Migrate'
type MockLegacyResourcesMigrator_Migrate_Call struct {
	*mock.Call
}

// Migrate is a helper method to define mock.On call
//   - ctx context.Context
//   - rw repository.ReaderWriter
//   - namespace string
//   - opts v0alpha1.MigrateJobOptions
//   - progress jobs.JobProgressRecorder
func (_e *MockLegacyResourcesMigrator_Expecter) Migrate(ctx interface{}, rw interface{}, namespace interface{}, opts interface{}, progress interface{}) *MockLegacyResourcesMigrator_Migrate_Call {
	return &MockLegacyResourcesMigrator_Migrate_Call{Call: _e.mock.On("Migrate", ctx, rw, namespace, opts, progress)}
}

func (_c *MockLegacyResourcesMigrator_Migrate_Call) Run(run func(ctx context.Context, rw repository.ReaderWriter, namespace string, opts v0alpha1.MigrateJobOptions, progress jobs.JobProgressRecorder)) *MockLegacyResourcesMigrator_Migrate_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(context.Context), args[1].(repository.ReaderWriter), args[2].(string), args[3].(v0alpha1.MigrateJobOptions), args[4].(jobs.JobProgressRecorder))
	})
	return _c
}

func (_c *MockLegacyResourcesMigrator_Migrate_Call) Return(_a0 error) *MockLegacyResourcesMigrator_Migrate_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *MockLegacyResourcesMigrator_Migrate_Call) RunAndReturn(run func(context.Context, repository.ReaderWriter, string, v0alpha1.MigrateJobOptions, jobs.JobProgressRecorder) error) *MockLegacyResourcesMigrator_Migrate_Call {
	_c.Call.Return(run)
	return _c
}

// NewMockLegacyResourcesMigrator creates a new instance of MockLegacyResourcesMigrator. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockLegacyResourcesMigrator(t interface {
	mock.TestingT
	Cleanup(func())
}) *MockLegacyResourcesMigrator {
	mock := &MockLegacyResourcesMigrator{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}
@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.

package migrate
@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.

package migrate
@@ -1,129 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.

package migrate

import (
	context "context"

	mock "github.com/stretchr/testify/mock"
)

// MockStorageSwapper is an autogenerated mock type for the StorageSwapper type
type MockStorageSwapper struct {
	mock.Mock
}

type MockStorageSwapper_Expecter struct {
	mock *mock.Mock
}

func (_m *MockStorageSwapper) EXPECT() *MockStorageSwapper_Expecter {
	return &MockStorageSwapper_Expecter{mock: &_m.Mock}
}

// StopReadingUnifiedStorage provides a mock function with given fields: ctx
func (_m *MockStorageSwapper) StopReadingUnifiedStorage(ctx context.Context) error {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for StopReadingUnifiedStorage")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = rf(ctx)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// MockStorageSwapper_StopReadingUnifiedStorage_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'StopReadingUnifiedStorage'
type MockStorageSwapper_StopReadingUnifiedStorage_Call struct {
	*mock.Call
}

// StopReadingUnifiedStorage is a helper method to define mock.On call
//   - ctx context.Context
func (_e *MockStorageSwapper_Expecter) StopReadingUnifiedStorage(ctx interface{}) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
	return &MockStorageSwapper_StopReadingUnifiedStorage_Call{Call: _e.mock.On("StopReadingUnifiedStorage", ctx)}
}

func (_c *MockStorageSwapper_StopReadingUnifiedStorage_Call) Run(run func(ctx context.Context)) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(context.Context))
	})
	return _c
}

func (_c *MockStorageSwapper_StopReadingUnifiedStorage_Call) Return(_a0 error) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *MockStorageSwapper_StopReadingUnifiedStorage_Call) RunAndReturn(run func(context.Context) error) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
	_c.Call.Return(run)
	return _c
}

// WipeUnifiedAndSetMigratedFlag provides a mock function with given fields: ctx, namespace
func (_m *MockStorageSwapper) WipeUnifiedAndSetMigratedFlag(ctx context.Context, namespace string) error {
	ret := _m.Called(ctx, namespace)

	if len(ret) == 0 {
		panic("no return value specified for WipeUnifiedAndSetMigratedFlag")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context, string) error); ok {
		r0 = rf(ctx, namespace)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'WipeUnifiedAndSetMigratedFlag'
type MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call struct {
	*mock.Call
}

// WipeUnifiedAndSetMigratedFlag is a helper method to define mock.On call
//   - ctx context.Context
//   - namespace string
func (_e *MockStorageSwapper_Expecter) WipeUnifiedAndSetMigratedFlag(ctx interface{}, namespace interface{}) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
	return &MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call{Call: _e.mock.On("WipeUnifiedAndSetMigratedFlag", ctx, namespace)}
}

func (_c *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call) Run(run func(ctx context.Context, namespace string)) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(context.Context), args[1].(string))
	})
	return _c
}

func (_c *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call) Return(_a0 error) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call) RunAndReturn(run func(context.Context, string) error) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
	_c.Call.Return(run)
	return _c
}

// NewMockStorageSwapper creates a new instance of MockStorageSwapper. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockStorageSwapper(t interface {
	mock.TestingT
	Cleanup(func())
}) *MockStorageSwapper {
	mock := &MockStorageSwapper{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}
@@ -0,0 +1,86 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.

package migrate

import (
	context "context"

	repository "github.com/grafana/grafana/apps/provisioning/pkg/repository"
	mock "github.com/stretchr/testify/mock"
)

// MockWrapWithStageFn is an autogenerated mock type for the WrapWithStageFn type
type MockWrapWithStageFn struct {
	mock.Mock
}

type MockWrapWithStageFn_Expecter struct {
	mock *mock.Mock
}

func (_m *MockWrapWithStageFn) EXPECT() *MockWrapWithStageFn_Expecter {
	return &MockWrapWithStageFn_Expecter{mock: &_m.Mock}
}

// Execute provides a mock function with given fields: ctx, repo, stageOptions, fn
func (_m *MockWrapWithStageFn) Execute(ctx context.Context, repo repository.Repository, stageOptions repository.StageOptions, fn func(repository.Repository, bool) error) error {
	ret := _m.Called(ctx, repo, stageOptions, fn)

	if len(ret) == 0 {
		panic("no return value specified for Execute")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context, repository.Repository, repository.StageOptions, func(repository.Repository, bool) error) error); ok {
		r0 = rf(ctx, repo, stageOptions, fn)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// MockWrapWithStageFn_Execute_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Execute'
type MockWrapWithStageFn_Execute_Call struct {
	*mock.Call
}

// Execute is a helper method to define mock.On call
//   - ctx context.Context
//   - repo repository.Repository
//   - stageOptions repository.StageOptions
//   - fn func(repository.Repository , bool) error
func (_e *MockWrapWithStageFn_Expecter) Execute(ctx interface{}, repo interface{}, stageOptions interface{}, fn interface{}) *MockWrapWithStageFn_Execute_Call {
	return &MockWrapWithStageFn_Execute_Call{Call: _e.mock.On("Execute", ctx, repo, stageOptions, fn)}
}

func (_c *MockWrapWithStageFn_Execute_Call) Run(run func(ctx context.Context, repo repository.Repository, stageOptions repository.StageOptions, fn func(repository.Repository, bool) error)) *MockWrapWithStageFn_Execute_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run(args[0].(context.Context), args[1].(repository.Repository), args[2].(repository.StageOptions), args[3].(func(repository.Repository, bool) error))
	})
	return _c
}

func (_c *MockWrapWithStageFn_Execute_Call) Return(_a0 error) *MockWrapWithStageFn_Execute_Call {
	_c.Call.Return(_a0)
	return _c
}

func (_c *MockWrapWithStageFn_Execute_Call) RunAndReturn(run func(context.Context, repository.Repository, repository.StageOptions, func(repository.Repository, bool) error) error) *MockWrapWithStageFn_Execute_Call {
	_c.Call.Return(run)
	return _c
}

// NewMockWrapWithStageFn creates a new instance of MockWrapWithStageFn. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockWrapWithStageFn(t interface {
	mock.TestingT
	Cleanup(func())
}) *MockWrapWithStageFn {
	mock := &MockWrapWithStageFn{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}
@@ -1,102 +0,0 @@
package migrate

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	"github.com/grafana/grafana-app-sdk/logging"

	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
	"github.com/grafana/grafana/pkg/storage/unified/resource"
	"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)

//go:generate mockery --name BulkStoreClient --structname MockBulkStoreClient --inpackage --filename mock_bulk_store_client.go --with-expecter
//go:generate mockery --name=BulkStore_BulkProcessClient --srcpkg=github.com/grafana/grafana/pkg/storage/unified/resource --output=. --outpkg=migrate --filename=mock_bulk_process_client.go --with-expecter
type BulkStoreClient interface {
	BulkProcess(ctx context.Context, opts ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error)
}

//go:generate mockery --name StorageSwapper --structname MockStorageSwapper --inpackage --filename mock_storage_swapper.go --with-expecter
type StorageSwapper interface {
	StopReadingUnifiedStorage(ctx context.Context) error
	WipeUnifiedAndSetMigratedFlag(ctx context.Context, namespace string) error
}

type storageSwapper struct {
	// Direct access to unified storage... use carefully!
	bulk BulkStoreClient
	dual dualwrite.Service
}

func NewStorageSwapper(bulk BulkStoreClient, dual dualwrite.Service) StorageSwapper {
	return &storageSwapper{
		bulk: bulk,
		dual: dual,
	}
}

func (s *storageSwapper) StopReadingUnifiedStorage(ctx context.Context) error {
	// FIXME: dual writer is not namespaced which means that we would consider all namespaces migrated
	// after one migrates
	for _, gr := range resources.SupportedProvisioningResources {
		status, _ := s.dual.Status(ctx, gr.GroupResource())
		status.ReadUnified = false
		status.Migrated = 0
		status.Migrating = 0
		_, err := s.dual.Update(ctx, status)
		if err != nil {
			return err
		}
	}

	return nil
}

func (s *storageSwapper) WipeUnifiedAndSetMigratedFlag(ctx context.Context, namespace string) error {
	for _, gr := range resources.SupportedProvisioningResources {
		status, _ := s.dual.Status(ctx, gr.GroupResource())
		if status.ReadUnified {
			return fmt.Errorf("unexpected state - already using unified storage for: %s", gr)
		}
		if status.Migrating > 0 {
			if time.Since(time.UnixMilli(status.Migrating)) < time.Second*30 {
				return fmt.Errorf("another migration job is running for: %s", gr)
			}
		}
		settings := resource.BulkSettings{
			RebuildCollection: true, // wipes everything in the collection
			Collection: []*resourcepb.ResourceKey{{
				Namespace: namespace,
				Group:     gr.Group,
				Resource:  gr.Resource,
			}},
		}
		ctx = metadata.NewOutgoingContext(ctx, settings.ToMD())
		stream, err := s.bulk.BulkProcess(ctx)
		if err != nil {
			return fmt.Errorf("error clearing unified %s / %w", gr, err)
		}
		stats, err := stream.CloseAndRecv()
		if err != nil {
			return fmt.Errorf("error clearing unified %s / %w", gr, err)
		}
		logger := logging.FromContext(ctx)
		logger.Error("cleared unified storage", "stats", stats)

		status.Migrated = time.Now().UnixMilli() // but not really... since the sync is starting
		status.ReadUnified = true
		status.WriteLegacy = false // keep legacy "clean"
		_, err = s.dual.Update(ctx, status)
		if err != nil {
			return err
		}
	}

	return nil
}
@@ -1,204 +0,0 @@
package migrate

import (
	"context"
	"errors"
	"testing"
	"time"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
	"github.com/grafana/grafana/pkg/storage/unified/resource"
	"github.com/grafana/grafana/pkg/storage/unified/resourcepb"

	"google.golang.org/grpc/metadata"
)

func TestStorageSwapper_StopReadingUnifiedStorage(t *testing.T) {
	tests := []struct {
		name          string
		setupMocks    func(*MockBulkStoreClient, *dualwrite.MockService)
		expectedError string
	}{
		{
			name: "should update status for all resources",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				for _, gr := range resources.SupportedProvisioningResources {
					status := dualwrite.StorageStatus{
						ReadUnified: true,
						Migrated:    123,
						Migrating:   456,
					}
					dual.On("Status", mock.Anything, gr.GroupResource()).Return(status, nil)
					dual.On("Update", mock.Anything, mock.MatchedBy(func(status dualwrite.StorageStatus) bool {
						return !status.ReadUnified && status.Migrated == 0 && status.Migrating == 0
					})).Return(dualwrite.StorageStatus{}, nil)
				}
			},
		},
		{
			name: "should fail if status update fails",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				gr := resources.SupportedProvisioningResources[0]
				dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
				dual.On("Update", mock.Anything, mock.Anything).Return(dualwrite.StorageStatus{}, errors.New("update failed"))
			},
			expectedError: "update failed",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			bulk := NewMockBulkStoreClient(t)
			dual := dualwrite.NewMockService(t)

			if tt.setupMocks != nil {
				tt.setupMocks(bulk, dual)
			}

			swapper := NewStorageSwapper(bulk, dual)
			err := swapper.StopReadingUnifiedStorage(context.Background())

			if tt.expectedError != "" {
				require.Error(t, err)
				require.Contains(t, err.Error(), tt.expectedError)
			} else {
				require.NoError(t, err)
			}
		})
	}
}

func TestStorageSwapper_WipeUnifiedAndSetMigratedFlag(t *testing.T) {
	tests := []struct {
		name          string
		setupMocks    func(*MockBulkStoreClient, *dualwrite.MockService)
		expectedError string
	}{
		{
			name: "should fail if already using unified storage",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				gr := resources.SupportedProvisioningResources[0]
				status := dualwrite.StorageStatus{
					ReadUnified: true,
				}
				dual.On("Status", mock.Anything, gr.GroupResource()).Return(status, nil)
			},
			expectedError: "unexpected state - already using unified storage",
		},
		{
			name: "should fail if migration is in progress",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				gr := resources.SupportedProvisioningResources[0]
				status := dualwrite.StorageStatus{
					ReadUnified: false,
					Migrating:   time.Now().UnixMilli(),
				}
				dual.On("Status", mock.Anything, gr.GroupResource()).Return(status, nil)
			},
			expectedError: "another migration job is running",
		},
		{
			name: "should fail if bulk process fails",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				gr := resources.SupportedProvisioningResources[0]
				dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
				bulk.On("BulkProcess", mock.Anything, mock.Anything).Return(nil, errors.New("bulk process failed"))
			},
			expectedError: "error clearing unified",
		},
		{
			name: "should fail if status update fails after bulk process",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				gr := resources.SupportedProvisioningResources[0]
				dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)

				mockStream := NewBulkStore_BulkProcessClient(t)
				mockStream.On("CloseAndRecv").Return(&resourcepb.BulkResponse{}, nil)
				bulk.On("BulkProcess", mock.Anything, mock.Anything).Return(mockStream, nil)

				dual.On("Update", mock.Anything, mock.MatchedBy(func(status dualwrite.StorageStatus) bool {
					return status.ReadUnified && !status.WriteLegacy && status.Migrated > 0
				})).Return(dualwrite.StorageStatus{}, errors.New("update failed"))
			},
			expectedError: "update failed",
		},
		{
			name: "should fail if bulk process stream close fails",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				gr := resources.SupportedProvisioningResources[0]
				dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)

				mockStream := NewBulkStore_BulkProcessClient(t)
				mockStream.On("CloseAndRecv").Return(nil, errors.New("stream close failed"))
				bulk.On("BulkProcess", mock.Anything, mock.Anything).Return(mockStream, nil)
			},
			expectedError: "error clearing unified",
		},
		{
			name: "should succeed with complete workflow",
			setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
				for _, gr := range resources.SupportedProvisioningResources {
					dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)

					mockStream := NewBulkStore_BulkProcessClient(t)
					mockStream.On("CloseAndRecv").Return(&resourcepb.BulkResponse{}, nil)
					bulk.On("BulkProcess", mock.MatchedBy(func(ctx context.Context) bool {
						md, ok := metadata.FromOutgoingContext(ctx)
						if !ok {
							return false
						}

						//nolint:errcheck // hits the err != nil gotcha
						settings, _ := resource.NewBulkSettings(md)
						if !settings.RebuildCollection {
							return false
						}
						if len(settings.Collection) != 1 {
							return false
						}
						if settings.Collection[0].Namespace != "test-namespace" {
							return false
						}
						if settings.Collection[0].Group != gr.Group {
							return false
						}
						if settings.Collection[0].Resource != gr.Resource {
							return false
						}

						return true
					}), mock.Anything).Return(mockStream, nil)

					dual.On("Update", mock.Anything, mock.MatchedBy(func(status dualwrite.StorageStatus) bool {
						return status.ReadUnified && !status.WriteLegacy && status.Migrated > 0
					})).Return(dualwrite.StorageStatus{}, nil)
				}
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			bulk := NewMockBulkStoreClient(t)
			dual := dualwrite.NewMockService(t)

			if tt.setupMocks != nil {
				tt.setupMocks(bulk, dual)
			}

			swapper := NewStorageSwapper(bulk, dual)
			err := swapper.WipeUnifiedAndSetMigratedFlag(context.Background(), "test-namespace")

			if tt.expectedError != "" {
				require.Error(t, err)
				require.Contains(t, err.Error(), tt.expectedError)
			} else {
				require.NoError(t, err)
			}
		})
	}
}
@@ -33,23 +33,7 @@ func NewUnifiedStorageMigrator(
func (m *UnifiedStorageMigrator) Migrate(ctx context.Context, repo repository.ReaderWriter, options provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
	namespace := repo.Config().GetNamespace()

	// For folder-type repositories, only run sync (skip export and cleaner)
	if repo.Config().Spec.Sync.Target == provisioning.SyncTargetTypeFolder {
		progress.SetMessage(ctx, "pull resources")
		syncJob := provisioning.Job{
			Spec: provisioning.JobSpec{
				Pull: &provisioning.SyncJobOptions{
					Incremental: false,
				},
			},
		}
		if err := m.syncWorker.Process(ctx, repo, syncJob, progress); err != nil {
			return fmt.Errorf("pull resources: %w", err)
		}
		return nil
	}

	// For instance-type repositories, run the full workflow: export -> sync -> clean
	// Export resources first (for both folder and instance sync)
	progress.SetMessage(ctx, "export resources")
	progress.StrictMaxErrors(1) // strict as we want the entire instance to be managed

@@ -67,6 +51,7 @@ func (m *UnifiedStorageMigrator) Migrate(ctx context.Context, repo repository.Re
	// Reset the results after the export as pull will operate on the same resources
	progress.ResetResults()

	// Pull resources from the repository
	progress.SetMessage(ctx, "pull resources")
	syncJob := provisioning.Job{
		Spec: provisioning.JobSpec{
@@ -79,9 +64,12 @@ func (m *UnifiedStorageMigrator) Migrate(ctx context.Context, repo repository.Re
		return fmt.Errorf("pull resources: %w", err)
	}

	progress.SetMessage(ctx, "clean namespace")
	if err := m.namespaceCleaner.Clean(ctx, namespace, progress); err != nil {
		return fmt.Errorf("clean namespace: %w", err)
	// For instance-type repositories, also clean the namespace
	if repo.Config().Spec.Sync.Target != provisioning.SyncTargetTypeFolder {
		progress.SetMessage(ctx, "clean namespace")
		if err := m.namespaceCleaner.Clean(ctx, namespace, progress); err != nil {
			return fmt.Errorf("clean namespace: %w", err)
		}
	}

	return nil
@@ -134,7 +134,7 @@ func TestUnifiedStorageMigrator_Migrate(t *testing.T) {
			expectedError: "",
		},
		{
			name: "should only run sync for folder-type repositories",
			name: "should run export and sync for folder-type repositories",
			setupMocks: func(nc *MockNamespaceCleaner, ew *jobs.MockWorker, sw *jobs.MockWorker, pr *jobs.MockJobProgressRecorder, rw *repository.MockRepository) {
				rw.On("Config").Return(&provisioning.Repository{
					ObjectMeta: metav1.ObjectMeta{
@@ -147,9 +147,15 @@ func TestUnifiedStorageMigrator_Migrate(t *testing.T) {
						},
					},
				})
				// Export should be skipped - no export-related mocks
				// Cleaner should also be skipped - no cleaner-related mocks
				// Only sync job should run
				// Export should run for folder-type repositories
				pr.On("SetMessage", mock.Anything, "export resources").Return()
				pr.On("StrictMaxErrors", 1).Return()
				ew.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
					return job.Spec.Push != nil
				}), pr).Return(nil)
				pr.On("ResetResults").Return()
				// Cleaner should be skipped - no cleaner-related mocks
				// Sync job should run
				pr.On("SetMessage", mock.Anything, "pull resources").Return()
				sw.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
					return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
@@ -171,7 +177,14 @@ func TestUnifiedStorageMigrator_Migrate(t *testing.T) {
						},
					},
				})
				// Only sync job should run and fail
				// Export should run first
				pr.On("SetMessage", mock.Anything, "export resources").Return()
				pr.On("StrictMaxErrors", 1).Return()
				ew.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
					return job.Spec.Push != nil
				}), pr).Return(nil)
				pr.On("ResetResults").Return()
				// Sync job should run and fail
				pr.On("SetMessage", mock.Anything, "pull resources").Return()
				sw.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
					return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
@@ -7,7 +7,6 @@ import (
	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
)

//go:generate mockery --name Migrator --structname MockMigrator --inpackage --filename mock_migrator.go --with-expecter
@@ -16,8 +15,6 @@ type Migrator interface {
}

type MigrationWorker struct {
	storageStatus   dualwrite.Service
	legacyMigrator  Migrator
	unifiedMigrator Migrator
}

@@ -27,16 +24,9 @@ func NewMigrationWorkerFromUnified(unifiedMigrator Migrator) *MigrationWorker {
	}
}

// HACK: we should decouple the implementation of these two
func NewMigrationWorker(
	legacyMigrator Migrator,
	unifiedMigrator Migrator,
	storageStatus dualwrite.Service,
) *MigrationWorker {
func NewMigrationWorker(unifiedMigrator Migrator) *MigrationWorker {
	return &MigrationWorker{
		unifiedMigrator: unifiedMigrator,
		legacyMigrator:  legacyMigrator,
		storageStatus:   storageStatus,
	}
}

@@ -56,28 +46,5 @@ func (w *MigrationWorker) Process(ctx context.Context, repo repository.Repositor
 		return errors.New("migration job submitted targeting repository that is not a ReaderWriter")
 	}

-	if options.History {
-		if repo.Config().Spec.Type != provisioning.GitHubRepositoryType {
-			return errors.New("history is only supported for github repositories")
-		}
-	}
-
-	// Block migrate for legacy resources if repository type is folder
-	if repo.Config().Spec.Sync.Target == provisioning.SyncTargetTypeFolder {
-		// HACK: we should not have to check for storage existence here
-		if w.storageStatus != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, w.storageStatus) {
-			return errors.New("migration of legacy resources is not supported for folder-type repositories")
-		}
-	}
-
-	// HACK: we should not have to check for storage existence here
-	if w.storageStatus != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, w.storageStatus) {
-		return w.legacyMigrator.Migrate(ctx, rw, *options, progress)
-	}
-
-	if options.History {
-		return errors.New("history is not yet supported in unified storage")
-	}
-
 	return w.unifiedMigrator.Migrate(ctx, rw, *options, progress)
 }
@@ -2,7 +2,6 @@ package migrate

 import (
 	"context"
-	"errors"
 	"testing"

 	"github.com/stretchr/testify/assert"
@@ -11,9 +10,7 @@ import (

 	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
 	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
-	"github.com/grafana/grafana/apps/provisioning/pkg/repository/local"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
-	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
 )

 func TestMigrationWorker_IsSupported(t *testing.T) {
@@ -42,7 +39,7 @@ func TestMigrationWorker_IsSupported(t *testing.T) {
 		},
 	}

-	worker := NewMigrationWorker(nil, nil, nil)
+	worker := NewMigrationWorker(nil)

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
@@ -53,7 +50,7 @@ func TestMigrationWorker_IsSupported(t *testing.T) {
 }

 func TestMigrationWorker_ProcessNotReaderWriter(t *testing.T) {
-	worker := NewMigrationWorker(nil, nil, nil)
+	worker := NewMigrationWorker(NewMockMigrator(t))
 	job := provisioning.Job{
 		Spec: provisioning.JobSpec{
 			Action: provisioning.JobActionMigrate,
@@ -68,56 +65,13 @@ func TestMigrationWorker_ProcessNotReaderWriter(t *testing.T) {
 	require.EqualError(t, err, "migration job submitted targeting repository that is not a ReaderWriter")
 }

-func TestMigrationWorker_WithHistory(t *testing.T) {
-	fakeDualwrite := dualwrite.NewMockService(t)
-	fakeDualwrite.On("ReadFromUnified", mock.Anything, mock.Anything).
-		Maybe().Return(true, nil) // using unified storage
-
-	worker := NewMigrationWorker(nil, nil, fakeDualwrite)
-	job := provisioning.Job{
-		Spec: provisioning.JobSpec{
-			Action: provisioning.JobActionMigrate,
-			Migrate: &provisioning.MigrateJobOptions{
-				History: true,
-			},
-		},
-	}
-
-	t.Run("fail local", func(t *testing.T) {
-		progressRecorder := jobs.NewMockJobProgressRecorder(t)
-		progressRecorder.On("SetTotal", mock.Anything, 10).Return()
-
-		repo := local.NewRepository(&provisioning.Repository{}, nil)
-		err := worker.Process(context.Background(), repo, job, progressRecorder)
-		require.EqualError(t, err, "history is only supported for github repositories")
-	})
-
-	t.Run("fail unified", func(t *testing.T) {
-		progressRecorder := jobs.NewMockJobProgressRecorder(t)
-		progressRecorder.On("SetTotal", mock.Anything, 10).Return()
-
-		repo := repository.NewMockRepository(t)
-		repo.On("Config").Return(&provisioning.Repository{
-			Spec: provisioning.RepositorySpec{
-				Type: provisioning.GitHubRepositoryType,
-				GitHub: &provisioning.GitHubRepositoryConfig{
-					URL: "empty", // not valid
-				},
-			},
-		})
-		err := worker.Process(context.Background(), repo, job, progressRecorder)
-		require.EqualError(t, err, "history is not yet supported in unified storage")
-	})
-}
-
 func TestMigrationWorker_Process(t *testing.T) {
 	tests := []struct {
-		name           string
-		setupMocks     func(*MockMigrator, *MockMigrator, *dualwrite.MockService, *jobs.MockJobProgressRecorder)
-		setupRepo      func(*repository.MockRepository)
-		job            provisioning.Job
-		expectedError  string
-		isLegacyActive bool
+		name          string
+		setupMocks    func(*MockMigrator, *jobs.MockJobProgressRecorder)
+		setupRepo     func(*repository.MockRepository)
+		job           provisioning.Job
+		expectedError string
 	}{
 		{
 			name: "should fail when migrate settings are missing",
@@ -127,7 +81,7 @@ func TestMigrationWorker_Process(t *testing.T) {
 				Migrate: nil,
 				},
 			},
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(um *MockMigrator, pr *jobs.MockJobProgressRecorder) {
 			},
 			setupRepo: func(repo *repository.MockRepository) {
 				// No Config() call expected since we fail before that
@@ -135,150 +89,35 @@ func TestMigrationWorker_Process(t *testing.T) {
 			expectedError: "missing migrate settings",
 		},
 		{
-			name: "should use legacy migrator when legacy storage is active",
+			name: "should use unified storage migrator for instance-type repositories",
 			job: provisioning.Job{
 				Spec: provisioning.JobSpec{
 					Action:  provisioning.JobActionMigrate,
 					Migrate: &provisioning.MigrateJobOptions{},
 				},
 			},
-			isLegacyActive: true,
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(um *MockMigrator, pr *jobs.MockJobProgressRecorder) {
 				pr.On("SetTotal", mock.Anything, 10).Return()
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
-				lm.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
+				um.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
 			},
 			setupRepo: func(repo *repository.MockRepository) {
 				repo.On("Config").Return(&provisioning.Repository{
 					Spec: provisioning.RepositorySpec{
 						Sync: provisioning.SyncOptions{
 							Target: provisioning.SyncTargetTypeInstance,
 						},
 					},
 				})
 			},
 		},
 		{
-			name: "should use unified storage migrator when legacy storage is not active",
-			job: provisioning.Job{
-				Spec: provisioning.JobSpec{
-					Action:  provisioning.JobActionMigrate,
-					Migrate: &provisioning.MigrateJobOptions{},
-				},
-			},
-			isLegacyActive: false,
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
-				pr.On("SetTotal", mock.Anything, 10).Return()
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil)
-				um.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
-			},
-			setupRepo: func(repo *repository.MockRepository) {
-				repo.On("Config").Return(&provisioning.Repository{
-					Spec: provisioning.RepositorySpec{
-						Sync: provisioning.SyncOptions{
-							Target: provisioning.SyncTargetTypeInstance,
-						},
-					},
-				})
-				// No Config() call needed anymore
-			},
-		},
-		{
-			name: "should propagate migrator errors",
+			name: "should allow migration for folder-type repositories",
 			job: provisioning.Job{
 				Spec: provisioning.JobSpec{
 					Action:  provisioning.JobActionMigrate,
 					Migrate: &provisioning.MigrateJobOptions{},
 				},
 			},
-			isLegacyActive: true,
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(um *MockMigrator, pr *jobs.MockJobProgressRecorder) {
 				pr.On("SetTotal", mock.Anything, 10).Return()
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
-				lm.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(errors.New("migration failed"))
+				um.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
 			},
 			setupRepo: func(repo *repository.MockRepository) {
 				repo.On("Config").Return(&provisioning.Repository{
 					Spec: provisioning.RepositorySpec{
 						Sync: provisioning.SyncOptions{
-							Target: provisioning.SyncTargetTypeInstance,
+							Target: provisioning.SyncTargetTypeFolder,
 						},
 					},
 				})
 			},
-			expectedError: "migration failed",
-		},
-		{
-			name: "should block migration of legacy resources for folder-type repositories",
-			job: provisioning.Job{
-				Spec: provisioning.JobSpec{
-					Action:  provisioning.JobActionMigrate,
-					Migrate: &provisioning.MigrateJobOptions{},
-				},
-			},
-			isLegacyActive: true,
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
-				pr.On("SetTotal", mock.Anything, 10).Return()
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
-				// legacyMigrator should not be called as we block before reaching it
-			},
-			setupRepo: func(repo *repository.MockRepository) {
-				repo.On("Config").Return(&provisioning.Repository{
-					Spec: provisioning.RepositorySpec{
-						Sync: provisioning.SyncOptions{
-							Target: provisioning.SyncTargetTypeFolder,
-						},
-					},
-				})
-			},
-			expectedError: "migration of legacy resources is not supported for folder-type repositories",
-		},
-		{
-			name: "should allow migration of legacy resources for instance-type repositories",
-			job: provisioning.Job{
-				Spec: provisioning.JobSpec{
-					Action:  provisioning.JobActionMigrate,
-					Migrate: &provisioning.MigrateJobOptions{},
-				},
-			},
-			isLegacyActive: true,
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
-				pr.On("SetTotal", mock.Anything, 10).Return()
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
-				lm.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
-			},
-			setupRepo: func(repo *repository.MockRepository) {
-				repo.On("Config").Return(&provisioning.Repository{
-					Spec: provisioning.RepositorySpec{
-						Sync: provisioning.SyncOptions{
-							Target: provisioning.SyncTargetTypeInstance,
-						},
-					},
-				})
-			},
 			expectedError: "",
 		},
-		{
-			name: "should allow migration for folder-type repositories when legacy storage is not active",
-			job: provisioning.Job{
-				Spec: provisioning.JobSpec{
-					Action:  provisioning.JobActionMigrate,
-					Migrate: &provisioning.MigrateJobOptions{},
-				},
-			},
-			isLegacyActive: false,
-			setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
-				pr.On("SetTotal", mock.Anything, 10).Return()
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil)
-				um.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
-			},
-			setupRepo: func(repo *repository.MockRepository) {
-				repo.On("Config").Return(&provisioning.Repository{
-					Spec: provisioning.RepositorySpec{
-						Sync: provisioning.SyncOptions{
-							Target: provisioning.SyncTargetTypeFolder,
-						},
-					},
-				})
-				// No Config() call needed anymore
-			},
-			expectedError: "",
-		},
@@ -286,15 +125,13 @@ func TestMigrationWorker_Process(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			legacyMigrator := NewMockMigrator(t)
 			unifiedMigrator := NewMockMigrator(t)
-			dualWriteService := dualwrite.NewMockService(t)
 			progressRecorder := jobs.NewMockJobProgressRecorder(t)

-			worker := NewMigrationWorker(legacyMigrator, unifiedMigrator, dualWriteService)
+			worker := NewMigrationWorker(unifiedMigrator)

 			if tt.setupMocks != nil {
-				tt.setupMocks(legacyMigrator, unifiedMigrator, dualWriteService, progressRecorder)
+				tt.setupMocks(unifiedMigrator, progressRecorder)
 			}

 			rw := repository.NewMockRepository(t)
@@ -310,7 +147,7 @@ func TestMigrationWorker_Process(t *testing.T) {
 				require.NoError(t, err)
 			}

-			mock.AssertExpectationsForObjects(t, legacyMigrator, unifiedMigrator, dualWriteService, progressRecorder, rw)
+			mock.AssertExpectationsForObjects(t, unifiedMigrator, progressRecorder, rw)
 		})
 	}
 }
@@ -1,4 +1,4 @@
-// Code generated by mockery v2.52.4. DO NOT EDIT.
+// Code generated by mockery v2.53.4. DO NOT EDIT.

 package jobs

@@ -14,7 +14,7 @@ import (
 //go:generate mockery --name FullSyncFn --structname MockFullSyncFn --inpackage --filename full_sync_fn_mock.go --with-expecter
 type FullSyncFn func(ctx context.Context, repo repository.Reader, compare CompareFn, clients resources.ResourceClients, currentRef string, repositoryResources resources.RepositoryResources, progress jobs.JobProgressRecorder, tracer tracing.Tracer, maxSyncWorkers int, metrics jobs.JobMetrics) error

-//go:generate mockery --name CompareFn --structname MockCompareFn --inpackage --filename compare_fn_mock.go --with-expecter
+//go:generate mockery -name CompareFn --structname MockCompareFn --inpackage --filename compare_fn_mock.go --with-expecter
 type CompareFn func(ctx context.Context, repo repository.Reader, repositoryResources resources.RepositoryResources, ref string) ([]ResourceFileChange, error)

 //go:generate mockery --name IncrementalSyncFn --structname MockIncrementalSyncFn --inpackage --filename incremental_sync_fn_mock.go --with-expecter
@@ -12,7 +12,6 @@ import (
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/utils"
-	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
 	"go.opentelemetry.io/otel/attribute"
 	"go.opentelemetry.io/otel/trace"
 )
@@ -29,9 +28,6 @@ type SyncWorker struct {
 	// ResourceClients for the repository
 	repositoryResources resources.RepositoryResourcesFactory

-	// Check if the system is using unified storage
-	storageStatus dualwrite.Service
-
 	// Patch status for the repository
 	patchStatus RepositoryPatchFn

@@ -48,7 +44,6 @@ type SyncWorker struct {
 func NewSyncWorker(
 	clients resources.ClientFactory,
 	repositoryResources resources.RepositoryResourcesFactory,
-	storageStatus dualwrite.Service,
 	patchStatus RepositoryPatchFn,
 	syncer Syncer,
 	metrics jobs.JobMetrics,
@@ -59,7 +54,6 @@ func NewSyncWorker(
 		clients:             clients,
 		repositoryResources: repositoryResources,
 		patchStatus:         patchStatus,
-		storageStatus:       storageStatus,
 		syncer:              syncer,
 		metrics:             metrics,
 		tracer:              tracer,
@@ -96,13 +90,6 @@ func (r *SyncWorker) Process(ctx context.Context, repo repository.Repository, jo
 		)
 	}()

-	// Check if we are onboarding from legacy storage
-	// HACK -- this should be handled outside of this worker
-	if r.storageStatus != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, r.storageStatus) {
-		err := fmt.Errorf("sync not supported until storage has migrated")
-		return tracing.Error(span, err)
-	}
-
 	rw, ok := repo.(repository.ReaderWriter)
 	if !ok {
 		err := fmt.Errorf("sync job submitted for repository that does not support read-write")
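With the dualwrite onboarding check gone, the only early guard left in `SyncWorker.Process` is the `ReaderWriter` type assertion. A minimal sketch of that guard, using illustrative stand-in interfaces rather than Grafana's real `repository` package:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the repository interfaces referenced in the
// diff; the real types carry many more methods.
type Repository interface{ Name() string }

type ReaderWriter interface {
	Repository
	Write(path string, data []byte) error
}

// readOnlyRepo implements Repository but not ReaderWriter.
type readOnlyRepo struct{}

func (readOnlyRepo) Name() string { return "read-only" }

// process sketches the surviving early check: reject repositories that do
// not support read-write, then continue with the sync.
func process(repo Repository) error {
	if _, ok := repo.(ReaderWriter); !ok {
		return errors.New("sync job submitted for repository that does not support read-write")
	}
	return nil
}

func main() {
	fmt.Println(process(readOnlyRepo{}))
}
```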
@@ -10,7 +10,6 @@ import (
 	"github.com/grafana/grafana/pkg/infra/tracing"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
-	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/stretchr/testify/mock"
 	"github.com/stretchr/testify/require"
@@ -46,7 +45,7 @@ func TestSyncWorker_IsSupported(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			worker := NewSyncWorker(nil, nil, nil, nil, nil, metrics, tracing.NewNoopTracerService(), 10)
+			worker := NewSyncWorker(nil, nil, nil, nil, metrics, tracing.NewNoopTracerService(), 10)
 			result := worker.IsSupported(context.Background(), tt.job)
 			require.Equal(t, tt.expected, result)
 		})
@@ -63,9 +62,7 @@ func TestSyncWorker_ProcessNotReaderWriter(t *testing.T) {
 			Title: "test-repo",
 		},
 	})
-	fakeDualwrite := dualwrite.NewMockService(t)
-	fakeDualwrite.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
-	worker := NewSyncWorker(nil, nil, fakeDualwrite, nil, nil, jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()), tracing.NewNoopTracerService(), 10)
+	worker := NewSyncWorker(nil, nil, nil, nil, jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()), tracing.NewNoopTracerService(), 10)
 	err := worker.Process(context.Background(), repo, provisioning.Job{}, jobs.NewMockJobProgressRecorder(t))
 	require.EqualError(t, err, "sync job submitted for repository that does not support read-write")
 }
@@ -73,31 +70,13 @@ func TestSyncWorker_ProcessNotReaderWriter(t *testing.T) {
 func TestSyncWorker_Process(t *testing.T) {
 	tests := []struct {
 		name           string
-		setupMocks     func(*resources.MockClientFactory, *resources.MockRepositoryResourcesFactory, *dualwrite.MockService, *MockRepositoryPatchFn, *MockSyncer, *mockReaderWriter, *jobs.MockJobProgressRecorder)
+		setupMocks     func(*resources.MockClientFactory, *resources.MockRepositoryResourcesFactory, *MockRepositoryPatchFn, *MockSyncer, *mockReaderWriter, *jobs.MockJobProgressRecorder)
 		expectedError  string
 		expectedStatus *provisioning.SyncStatus
 	}{
-		{
-			name: "legacy storage not migrated",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
-				rw.MockRepository.On("Config").Return(&provisioning.Repository{
-					ObjectMeta: metav1.ObjectMeta{
-						Name: "test-repo",
-					},
-					Spec: provisioning.RepositorySpec{
-						Title: "test-repo",
-					},
-				})
-
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil).Twice()
-			},
-			expectedError: "sync not supported until storage has migrated",
-		},
 		{
 			name: "failed initial status patching",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
-
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				// Setup repository config with existing LastRef
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
@@ -132,7 +111,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "failed getting repository resources",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				// Setup repository config
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
@@ -149,9 +128,6 @@ func TestSyncWorker_Process(t *testing.T) {
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)

-				// Storage is migrated
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
-
 				// Initial status update succeeds - expect granular patches
 				pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
 				rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -168,7 +144,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "failed getting clients for namespace",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				// Setup repository config
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
@@ -186,9 +162,6 @@ func TestSyncWorker_Process(t *testing.T) {
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)

-				// Storage is migrated
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
-
 				// Initial status update succeeds - expect granular patches
 				pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
 				rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -208,7 +181,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "successful sync",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -222,9 +195,6 @@ func TestSyncWorker_Process(t *testing.T) {
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)

-				// Storage is migrated
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
-
 				// Initial status update - expect granular patches
 				pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
 				rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -261,7 +231,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "failed sync",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -275,9 +245,6 @@ func TestSyncWorker_Process(t *testing.T) {
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)

-				// Storage is migrated
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
-
 				// Initial status update - expect granular patches
 				pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
 				rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -315,7 +282,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "stats call fails",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -323,7 +290,6 @@ func TestSyncWorker_Process(t *testing.T) {
 					},
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()

 				mockRepoResources := resources.NewMockRepositoryResources(t)
 				mockRepoResources.On("Stats", mock.Anything).Return(nil, errors.New("stats error"))
@@ -344,7 +310,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "stats returns nil stats and nil error",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -352,7 +318,6 @@ func TestSyncWorker_Process(t *testing.T) {
 					},
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()

 				mockRepoResources := resources.NewMockRepositoryResources(t)
 				mockRepoResources.On("Stats", mock.Anything).Return(nil, nil)
@@ -378,7 +343,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "stats returns one managed stats",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -386,7 +351,6 @@ func TestSyncWorker_Process(t *testing.T) {
 					},
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
 				// Initial patch with granular updates
 				rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()

@@ -439,7 +403,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "stats returns multiple managed stats",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -447,7 +411,6 @@ func TestSyncWorker_Process(t *testing.T) {
 					},
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()

 				mockRepoResources := resources.NewMockRepositoryResources(t)
 				stats := &provisioning.ResourceStats{
@@ -495,7 +458,7 @@ func TestSyncWorker_Process(t *testing.T) {
 		},
 		{
 			name: "failed final status patch",
-			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
+			setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
 				repoConfig := &provisioning.Repository{
 					ObjectMeta: metav1.ObjectMeta{
 						Name: "test-repo",
@@ -503,7 +466,6 @@ func TestSyncWorker_Process(t *testing.T) {
 					},
 				}
 				rw.MockRepository.On("Config").Return(repoConfig)
-				ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()

 				// Initial status patch succeeds - expect granular patches
 				rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -534,7 +496,6 @@ func TestSyncWorker_Process(t *testing.T) {
 			// Create mocks
 			clientFactory := resources.NewMockClientFactory(t)
 			repoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
-			dualwriteService := dualwrite.NewMockService(t)
 			repositoryPatchFn := NewMockRepositoryPatchFn(t)
 			syncer := NewMockSyncer(t)
 			readerWriter := &mockReaderWriter{
@@ -544,13 +505,12 @@ func TestSyncWorker_Process(t *testing.T) {
 			progressRecorder := jobs.NewMockJobProgressRecorder(t)

 			// Setup mocks
-			tt.setupMocks(clientFactory, repoResourcesFactory, dualwriteService, repositoryPatchFn, syncer, readerWriter, progressRecorder)
+			tt.setupMocks(clientFactory, repoResourcesFactory, repositoryPatchFn, syncer, readerWriter, progressRecorder)

 			// Create worker
 			worker := NewSyncWorker(
 				clientFactory,
 				repoResourcesFactory,
-				dualwriteService,
 				repositoryPatchFn.Execute,
 				syncer,
 				jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()),
@@ -1,4 +1,4 @@
-// Code generated by mockery v2.52.4. DO NOT EDIT.
+// Code generated by mockery v2.53.4. DO NOT EDIT.

 package jobs

@@ -28,8 +28,6 @@ import (
 	authlib "github.com/grafana/authlib/types"
 	"github.com/grafana/grafana-app-sdk/logging"

-	dashboard "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
-	folders "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
 	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
 	connectionvalidation "github.com/grafana/grafana/apps/provisioning/pkg/connection"
 	appcontroller "github.com/grafana/grafana/apps/provisioning/pkg/controller"
@@ -54,14 +52,12 @@ import (
 	movepkg "github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/move"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/sync"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources/signature"
 	"github.com/grafana/grafana/pkg/registry/apis/provisioning/usage"
 	"github.com/grafana/grafana/pkg/services/apiserver"
 	"github.com/grafana/grafana/pkg/services/apiserver/builder"
 	"github.com/grafana/grafana/pkg/services/featuremgmt"
 	"github.com/grafana/grafana/pkg/setting"
-	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
-	"github.com/grafana/grafana/pkg/storage/unified/migrations"
 	"github.com/grafana/grafana/pkg/storage/unified/resource"
 )

@@ -112,7 +108,6 @@ type APIBuilder struct {
 	jobHistoryLoki  *jobs.LokiJobHistory
 	resourceLister  resources.ResourceLister
 	dashboardAccess legacy.MigrationDashboardAccessor
-	storageStatus   dualwrite.Service
 	unified         resource.ResourceClient
 	repoFactory     repository.Factory
 	client          client.ProvisioningV0alpha1Interface
@@ -159,9 +154,9 @@ func NewAPIBuilder(
 	} else {
 		clients = resources.NewClientFactory(configProvider)
 	}

 	parsers := resources.NewParserFactory(clients)
-	legacyMigrator := migrations.ProvideUnifiedMigrator(dashboardAccess, unified)
-	resourceLister := resources.NewResourceListerForMigrations(unified, legacyMigrator, storageStatus)
+	resourceLister := resources.NewResourceListerForMigrations(unified)

 	b := &APIBuilder{
 		onlyApiServer: onlyApiServer,
@@ -174,7 +169,6 @@ func NewAPIBuilder(
 		repositoryResources: resources.NewRepositoryResourcesFactory(parsers, clients, resourceLister),
 		resourceLister:      resourceLister,
 		dashboardAccess:     dashboardAccess,
-		storageStatus:       storageStatus,
 		unified:             unified,
 		access:              access,
 		jobHistoryConfig:    jobHistoryConfig,
@@ -250,6 +244,10 @@ func RegisterAPIService(
 		return nil, nil
 	}

+	if dualwrite.IsReadingLegacyDashboardsAndFolders(context.Background(), storageStatus) {
+		return nil, fmt.Errorf("resources are stored in an incompatible data format to use provisioning. Please enable data migration in settings for folders and dashboards by adding the following configuration:\n[unified_storage.folders.folder.grafana.app]\nenableMigration = true\n\n[unified_storage.dashboards.dashboard.grafana.app]\nenableMigration = true\n\nAlternatively, disable provisioning")
+	}
+
 	allowedTargets := []provisioning.SyncTargetType{}
 	for _, target := range cfg.ProvisioningAllowedTargets {
 		allowedTargets = append(allowedTargets, provisioning.SyncTargetType(target))
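The configuration that the error string above asks operators to add is embedded as `\n`-escaped text; rendered as the INI fragment it describes (comments added here for illustration), it reads:

```ini
; enable migration of folders out of legacy storage
[unified_storage.folders.folder.grafana.app]
enableMigration = true

; enable migration of dashboards out of legacy storage
[unified_storage.dashboards.dashboard.grafana.app]
enableMigration = true
```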
@@ -762,12 +760,6 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
 	go repoInformer.Informer().Run(postStartHookCtx.Done())
 	go jobInformer.Informer().Run(postStartHookCtx.Done())

-	// When starting with an empty instance -- swith to "mode 4+"
-	err = b.tryRunningOnlyUnifiedStorage()
-	if err != nil {
-		return err
-	}
-
 	// Create the repository resources factory
 	repositoryListerWrapper := func(ctx context.Context) ([]provisioning.Repository, error) {
 		return GetRepositoriesInNamespace(ctx, b.store)
@@ -790,29 +782,12 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
 	syncWorker := sync.NewSyncWorker(
 		b.clients,
 		b.repositoryResources,
-		b.storageStatus,
 		b.statusPatcher.Patch,
 		syncer,
 		metrics,
 		b.tracer,
 		10,
 	)
-	signerFactory := signature.NewSignerFactory(b.clients)
-	legacyResources := migrate.NewLegacyResourcesMigrator(
-		b.repositoryResources,
-		b.parsers,
-		b.dashboardAccess,
-		signerFactory,
-		b.clients,
-		export.ExportAll,
-	)
-	storageSwapper := migrate.NewStorageSwapper(b.unified, b.storageStatus)
-	legacyMigrator := migrate.NewLegacyMigrator(
-		legacyResources,
-		storageSwapper,
-		syncWorker,
-		stageIfPossible,
-	)
-
 	cleaner := migrate.NewNamespaceCleaner(b.clients)
 	unifiedStorageMigrator := migrate.NewUnifiedStorageMigrator(
@@ -820,12 +795,7 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
 		exportWorker,
 		syncWorker,
 	)

-	migrationWorker := migrate.NewMigrationWorker(
-		legacyMigrator,
-		unifiedStorageMigrator,
-		b.storageStatus,
-	)
+	migrationWorker := migrate.NewMigrationWorker(unifiedStorageMigrator)

 	deleteWorker := deletepkg.NewWorker(syncWorker, stageIfPossible, b.repositoryResources, metrics)
 	moveWorker := movepkg.NewWorker(syncWorker, stageIfPossible, b.repositoryResources, metrics)
@@ -897,7 +867,6 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
 		b.resourceLister,
 		b.clients,
 		b.jobs,
-		b.storageStatus,
 		b.GetHealthChecker(),
 		b.statusPatcher,
 		b.registry,
@@ -1306,65 +1275,6 @@ spec:
 	return oas, nil
 }

-// FIXME: This logic does not belong in provisioning! (but required for now)
-// When starting an empty instance, we shift so that we never reference legacy storage
-// This should run somewhere else at startup by default (dual writer? dashboards?)
-func (b *APIBuilder) tryRunningOnlyUnifiedStorage() error {
-	ctx := context.Background()
-
-	if !b.storageStatus.ShouldManage(dashboard.DashboardResourceInfo.GroupResource()) {
-		return nil // not enabled
-	}
-
-	if !dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, b.storageStatus) {
-		return nil
-	}
-
-	// Count how many things exist - create a migrator on-demand for this
-	legacyMigrator := migrations.ProvideUnifiedMigrator(b.dashboardAccess, b.unified)
-	rsp, err := legacyMigrator.Migrate(ctx, legacy.MigrateOptions{
-		Namespace: "default", // FIXME! this works for single org, but need to check multi-org
-		Resources: []schema.GroupResource{{
-			Group: dashboard.GROUP, Resource: dashboard.DASHBOARD_RESOURCE,
-		}, {
-			Group: folders.GROUP, Resource: folders.RESOURCE,
-		}},
-		OnlyCount: true,
-	})
-	if err != nil {
-		return fmt.Errorf("error getting legacy count %w", err)
-	}
-	for _, stats := range rsp.Summary {
-		if stats.Count > 0 {
-			return nil // something exists we can not just switch
-		}
-	}
-
-	logger := logging.DefaultLogger.With("logger", "provisioning startup")
-	mode5 := func(gr schema.GroupResource) error {
-		status, _ := b.storageStatus.Status(ctx, gr)
-		if !status.ReadUnified {
-			status.ReadUnified = true
-			status.WriteLegacy = false
-			status.WriteUnified = true
-			status.Runtime = false
-			status.Migrated = time.Now().UnixMilli()
-			_, err = b.storageStatus.Update(ctx, status)
-			logger.Info("set unified storage access", "group", gr.Group, "resource", gr.Resource)
-			return err
-		}
-		return nil // already reading unified
-	}
-
-	if err = mode5(dashboard.DashboardResourceInfo.GroupResource()); err != nil {
-		return err
-	}
-	if err = mode5(folders.FolderResourceInfo.GroupResource()); err != nil {
-		return err
-	}
-	return nil
-}
-
 // Helpers for fetching valid Repository objects
 // TODO: where should the helpers live?

@@ -197,6 +197,8 @@ func (fm *FolderManager) EnsureFolderTreeExists(ctx context.Context, ref, path s
 	if err != nil && (!errors.Is(err, repository.ErrFileNotFound) && !apierrors.IsNotFound(err)) {
 		return fn(folder, false, fmt.Errorf("check if folder exists before writing: %w", err))
 	} else if err == nil {
+		// Folder already exists in repository, add it to tree so resources can find it
+		fm.tree.Add(folder, parent)
 		return fn(folder, false, nil)
 	}

@@ -4,15 +4,9 @@ import (
 	"context"

 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	"k8s.io/apimachinery/pkg/runtime/schema"

-	dashboard "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
-	folders "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
 	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
 	"github.com/grafana/grafana/pkg/apimachinery/utils"
-	"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
-	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
-	"github.com/grafana/grafana/pkg/storage/unified/migrations"
 	"github.com/grafana/grafana/pkg/storage/unified/resource"
 	"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
 )
@@ -31,25 +25,16 @@ type ResourceStore interface {
 }

 type ResourceListerFromSearch struct {
-	store         ResourceStore
-	migrator      migrations.UnifiedMigrator
-	storageStatus dualwrite.Service
+	store ResourceStore
 }

 func NewResourceLister(store ResourceStore) ResourceLister {
 	return &ResourceListerFromSearch{store: store}
 }

-// FIXME: the logic about migration and storage should probably be separated from this
-func NewResourceListerForMigrations(
-	store ResourceStore,
-	migrator migrations.UnifiedMigrator,
-	storageStatus dualwrite.Service,
-) ResourceLister {
+func NewResourceListerForMigrations(store ResourceStore) ResourceLister {
 	return &ResourceListerFromSearch{
-		store:         store,
-		migrator:      migrator,
-		storageStatus: storageStatus,
+		store: store,
 	}
 }

@@ -133,37 +118,6 @@ func (o *ResourceListerFromSearch) Stats(ctx context.Context, namespace, reposit
 		return stats, nil
 	}

-	// Get the stats based on what a migration could support
-	if o.storageStatus != nil && o.migrator != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, o.storageStatus) {
-		rsp, err := o.migrator.Migrate(ctx, legacy.MigrateOptions{
-			Namespace: namespace,
-			Resources: []schema.GroupResource{{
-				Group: dashboard.GROUP, Resource: dashboard.DASHBOARD_RESOURCE,
-			}, {
-				Group: folders.GROUP, Resource: folders.RESOURCE,
-			}},
-			WithHistory: false,
-			OnlyCount:   true,
-		})
-		if err != nil {
-			return nil, err
-		}
-		for _, v := range rsp.Summary {
-			stats.Instance = append(stats.Instance, provisioning.ResourceCount{
-				Group:    v.Group,
-				Resource: v.Resource,
-				Count:    v.Count,
-			})
-			// Everything is unmanaged in legacy storage
-			stats.Unmanaged = append(stats.Unmanaged, provisioning.ResourceCount{
-				Group:    v.Group,
-				Resource: v.Resource,
-				Count:    v.Count,
-			})
-		}
-		return stats, nil
-	}
-
 	// Get full instance stats
 	info, err := o.store.GetStats(ctx, &resourcepb.ResourceStatsRequest{
 		Namespace: namespace,
@@ -180,7 +180,12 @@ func (r *ResourcesManager) WriteResourceFileFromObject(ctx context.Context, obj
 		var ok bool
 		fid, ok = r.folders.Tree().DirPath(folder, rootFolder)
 		if !ok {
-			return "", fmt.Errorf("folder %s NOT found in tree with root: %s", folder, rootFolder)
+			// HACK: this is a hack to get the folder path without the root folder
+			// TODO: should we build the tree in a different way?
+			fid, ok = r.folders.Tree().DirPath(folder, "")
+			if !ok {
+				return "", fmt.Errorf("folder %s NOT found in tree", folder)
+			}
 		}
 	}
 }
@@ -1,33 +0,0 @@
-package signature
-
-import (
-	"context"
-
-	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
-	"github.com/grafana/grafana/pkg/apimachinery/utils"
-)
-
-type grafanaSigner struct{}
-
-// FIXME: where should we use this default signature?
-// NewGrafanaSigner returns a Signer that uses the grafana user as the author
-func NewGrafanaSigner() Signer {
-	return &grafanaSigner{}
-}
-
-func (s *grafanaSigner) Sign(ctx context.Context, item utils.GrafanaMetaAccessor) (context.Context, error) {
-	sig := repository.CommitSignature{
-		Name: "grafana",
-		// TODO: should we add email?
-		// Email: "grafana@grafana.com",
-	}
-
-	t, err := item.GetUpdatedTimestamp()
-	if err == nil && t != nil {
-		sig.When = *t
-	} else {
-		sig.When = item.GetCreationTimestamp().Time
-	}
-
-	return repository.WithAuthorSignature(ctx, sig), nil
-}
@@ -1,90 +0,0 @@
-package signature
-
-import (
-	"context"
-	"errors"
-	"testing"
-	"time"
-
-	"github.com/stretchr/testify/require"
-	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-
-	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
-	"github.com/grafana/grafana/pkg/apimachinery/utils"
-)
-
-func TestNewGrafanaSigner(t *testing.T) {
-	signer := NewGrafanaSigner()
-	require.NotNil(t, signer, "signer should not be nil")
-	require.IsType(t, &grafanaSigner{}, signer, "signer should be of type *grafanaSigner")
-}
-
-func TestGrafanaSigner_Sign(t *testing.T) {
-	tests := []struct {
-		name               string
-		creationTimestamp  time.Time
-		updateTimestampErr error
-		updatedTimestamp   *time.Time
-		expectedTime       time.Time
-		setupMocks         func(meta *utils.MockGrafanaMetaAccessor)
-	}{
-		{
-			name:               "should use creation timestamp when no update timestamp",
-			creationTimestamp:  time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC),
-			updatedTimestamp:   ptr(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)),
-			updateTimestampErr: errors.New("failed"),
-			expectedTime:       time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC),
-			setupMocks: func(meta *utils.MockGrafanaMetaAccessor) {
-				meta.On("GetCreationTimestamp").Return(metav1.Time{Time: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)})
-			},
-		},
-		{
-			name:              "should use creation timestamp when update timestamp is nil",
-			creationTimestamp: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC),
-			updatedTimestamp:  nil,
-			expectedTime:      time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC),
-			setupMocks: func(meta *utils.MockGrafanaMetaAccessor) {
-				meta.On("GetCreationTimestamp").Return(metav1.Time{Time: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)})
-			},
-		},
-		{
-			name:              "should use update timestamp when available",
-			creationTimestamp: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC),
-			updatedTimestamp:  ptr(time.Date(2024, 1, 2, 0, 0, 0, 0, time.UTC)),
-			expectedTime:      time.Date(2024, 1, 2, 0, 0, 0, 0, time.UTC),
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			meta := utils.NewMockGrafanaMetaAccessor(t)
-			var updatedTime *time.Time
-			if tt.updatedTimestamp != nil {
-				updatedTime = tt.updatedTimestamp
-			}
-			meta.On("GetUpdatedTimestamp").Return(updatedTime, tt.updateTimestampErr)
-
-			if tt.setupMocks != nil {
-				tt.setupMocks(meta)
-			}
-
-			signer := NewGrafanaSigner()
-			ctx := context.Background()
-
-			signedCtx, err := signer.Sign(ctx, meta)
-			require.NoError(t, err)
-
-			// Verify the signature in the context
-			sig := repository.GetAuthorSignature(signedCtx)
-			require.NotNil(t, sig, "signature should be present in context")
-			require.Equal(t, "grafana", sig.Name)
-			require.Equal(t, tt.expectedTime, sig.When)
-
-			meta.AssertExpectations(t)
-		})
-	}
-}
-
-func ptr[T any](v T) *T {
-	return &v
-}
@@ -1,52 +0,0 @@
-package signature
-
-import (
-	"context"
-	"fmt"
-
-	"github.com/grafana/grafana/pkg/apimachinery/utils"
-	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
-)
-
-//go:generate mockery --name Signer --structname MockSigner --inpackage --filename signer_mock.go --with-expecter
-type Signer interface {
-	Sign(ctx context.Context, item utils.GrafanaMetaAccessor) (context.Context, error)
-}
-
-type SignOptions struct {
-	Namespace string
-	History   bool
-}
-
-// SignerFactory is a factory for creating Signers
-//
-//go:generate mockery --name SignerFactory --structname MockSignerFactory --inpackage --filename signature_factory_mock.go --with-expecter
-type SignerFactory interface {
-	New(ctx context.Context, opts SignOptions) (Signer, error)
-}
-
-type signerFactory struct {
-	clients resources.ClientFactory
-}
-
-func NewSignerFactory(clients resources.ClientFactory) SignerFactory {
-	return &signerFactory{clients}
-}
-
-func (f *signerFactory) New(ctx context.Context, opts SignOptions) (Signer, error) {
-	if !opts.History {
-		return NewGrafanaSigner(), nil
-	}
-
-	clients, err := f.clients.Clients(ctx, opts.Namespace)
-	if err != nil {
-		return nil, fmt.Errorf("get clients: %w", err)
-	}
-
-	userClient, err := clients.User(ctx)
-	if err != nil {
-		return nil, fmt.Errorf("get user client: %w", err)
-	}
-
-	return NewLoadUsersOnceSigner(userClient), nil
-}
@@ -1,95 +0,0 @@
-// Code generated by mockery v2.53.4. DO NOT EDIT.
-
-package signature
-
-import (
-	context "context"
-
-	mock "github.com/stretchr/testify/mock"
-)
-
-// MockSignerFactory is an autogenerated mock type for the SignerFactory type
-type MockSignerFactory struct {
-	mock.Mock
-}
-
-type MockSignerFactory_Expecter struct {
-	mock *mock.Mock
-}
-
-func (_m *MockSignerFactory) EXPECT() *MockSignerFactory_Expecter {
-	return &MockSignerFactory_Expecter{mock: &_m.Mock}
-}
-
-// New provides a mock function with given fields: ctx, opts
-func (_m *MockSignerFactory) New(ctx context.Context, opts SignOptions) (Signer, error) {
-	ret := _m.Called(ctx, opts)
-
-	if len(ret) == 0 {
-		panic("no return value specified for New")
-	}
-
-	var r0 Signer
-	var r1 error
-	if rf, ok := ret.Get(0).(func(context.Context, SignOptions) (Signer, error)); ok {
-		return rf(ctx, opts)
-	}
-	if rf, ok := ret.Get(0).(func(context.Context, SignOptions) Signer); ok {
-		r0 = rf(ctx, opts)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(Signer)
-		}
-	}
-
-	if rf, ok := ret.Get(1).(func(context.Context, SignOptions) error); ok {
-		r1 = rf(ctx, opts)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
-// MockSignerFactory_New_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'New'
-type MockSignerFactory_New_Call struct {
-	*mock.Call
-}
-
-// New is a helper method to define mock.On call
-//   - ctx context.Context
-//   - opts SignOptions
-func (_e *MockSignerFactory_Expecter) New(ctx interface{}, opts interface{}) *MockSignerFactory_New_Call {
-	return &MockSignerFactory_New_Call{Call: _e.mock.On("New", ctx, opts)}
-}
-
-func (_c *MockSignerFactory_New_Call) Run(run func(ctx context.Context, opts SignOptions)) *MockSignerFactory_New_Call {
-	_c.Call.Run(func(args mock.Arguments) {
-		run(args[0].(context.Context), args[1].(SignOptions))
-	})
-	return _c
-}
-
-func (_c *MockSignerFactory_New_Call) Return(_a0 Signer, _a1 error) *MockSignerFactory_New_Call {
-	_c.Call.Return(_a0, _a1)
-	return _c
-}
-
-func (_c *MockSignerFactory_New_Call) RunAndReturn(run func(context.Context, SignOptions) (Signer, error)) *MockSignerFactory_New_Call {
-	_c.Call.Return(run)
-	return _c
-}
-
-// NewMockSignerFactory creates a new instance of MockSignerFactory. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
-// The first argument is typically a *testing.T value.
-func NewMockSignerFactory(t interface {
-	mock.TestingT
-	Cleanup(func())
-}) *MockSignerFactory {
-	mock := &MockSignerFactory{}
-	mock.Mock.Test(t)
-
-	t.Cleanup(func() { mock.AssertExpectations(t) })
-
-	return mock
-}
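The mockery-generated mocks deleted above all share one dispatch idiom: a recorded return value may be either a literal or a function of the call's arguments, and the mock type-asserts at invocation time to decide which. A reduced, library-free sketch of that idiom (hypothetical `call`/`invoke` names, not mockery's actual types):

```go
package main

import "fmt"

// call holds whatever was registered as the return value: either a literal
// int or a func(string) int that computes the result from the argument.
type call struct {
	ret any
}

// invoke mirrors the generated-mock dispatch: prefer a registered function,
// otherwise fall back to the literal value.
func (c call) invoke(arg string) int {
	if rf, ok := c.ret.(func(string) int); ok {
		return rf(arg)
	}
	return c.ret.(int)
}

func main() {
	literal := call{ret: 42}
	computed := call{ret: func(s string) int { return len(s) }}
	fmt.Println(literal.invoke("ignored")) // prints 42
	fmt.Println(computed.invoke("four"))   // prints 4
}
```

This is why the generated `MockSignerFactory.New` above checks three shapes in order: a combined `(Signer, error)` function, a `Signer`-only function, then the stored literals.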
@@ -1,92 +0,0 @@
-package signature
-
-import (
-	"context"
-	"fmt"
-	"testing"
-
-	"github.com/stretchr/testify/mock"
-	"github.com/stretchr/testify/require"
-
-	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
-)
-
-func TestSignerFactory_New(t *testing.T) {
-	tests := []struct {
-		name          string
-		opts          SignOptions
-		setupMocks    func(t *testing.T, clients *resources.MockClientFactory)
-		expectedType  interface{}
-		expectedError string
-	}{
-		{
-			name: "should return grafana signer when history is false",
-			opts: SignOptions{
-				History: false,
-			},
-			setupMocks: func(t *testing.T, clients *resources.MockClientFactory) {
-				// No mocks needed as we shouldn't call any clients
-			},
-			expectedType: &grafanaSigner{},
-		},
-		{
-			name: "should return load users once signer when history is true",
-			opts: SignOptions{
-				History:   true,
-				Namespace: "test-ns",
-			},
-			setupMocks: func(t *testing.T, clients *resources.MockClientFactory) {
-				mockResourceClients := resources.NewMockResourceClients(t)
-				clients.On("Clients", context.Background(), "test-ns").Return(mockResourceClients, nil)
-				mockResourceClients.On("User", mock.Anything).Return(nil, nil)
-			},
-			expectedType: &loadUsersOnceSigner{},
-		},
-		{
-			name: "should return error when clients factory fails",
-			opts: SignOptions{
-				History:   true,
-				Namespace: "test-ns",
-			},
-			setupMocks: func(t *testing.T, clients *resources.MockClientFactory) {
-				clients.On("Clients", context.Background(), "test-ns").Return(nil, fmt.Errorf("clients error"))
-			},
-			expectedError: "get clients: clients error",
-		},
-		{
-			name: "should return error when user client fails",
-			opts: SignOptions{
-				History:   true,
-				Namespace: "test-ns",
-			},
-			setupMocks: func(t *testing.T, clients *resources.MockClientFactory) {
-				mockResourceClients := resources.NewMockResourceClients(t)
-				clients.On("Clients", context.Background(), "test-ns").Return(mockResourceClients, nil)
-				mockResourceClients.On("User", mock.Anything).Return(nil, fmt.Errorf("user client error"))
-			},
-			expectedError: "get user client: user client error",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			mockClients := resources.NewMockClientFactory(t)
-			tt.setupMocks(t, mockClients)
-
-			factory := NewSignerFactory(mockClients)
-			signer, err := factory.New(context.Background(), tt.opts)
-
-			if tt.expectedError != "" {
-				require.Error(t, err)
-				require.EqualError(t, err, tt.expectedError)
-				require.Nil(t, signer)
-			} else {
-				require.NoError(t, err)
-				require.NotNil(t, signer)
-				require.IsType(t, tt.expectedType, signer, "signer should be of expected type")
-			}
-
-			mockClients.AssertExpectations(t)
-		})
-	}
-}
@@ -1,96 +0,0 @@
-// Code generated by mockery v2.53.4. DO NOT EDIT.
-
-package signature
-
-import (
-	context "context"
-
-	utils "github.com/grafana/grafana/pkg/apimachinery/utils"
-	mock "github.com/stretchr/testify/mock"
-)
-
-// MockSigner is an autogenerated mock type for the Signer type
-type MockSigner struct {
-	mock.Mock
-}
-
-type MockSigner_Expecter struct {
-	mock *mock.Mock
-}
-
-func (_m *MockSigner) EXPECT() *MockSigner_Expecter {
-	return &MockSigner_Expecter{mock: &_m.Mock}
-}
-
-// Sign provides a mock function with given fields: ctx, item
-func (_m *MockSigner) Sign(ctx context.Context, item utils.GrafanaMetaAccessor) (context.Context, error) {
-	ret := _m.Called(ctx, item)
-
-	if len(ret) == 0 {
-		panic("no return value specified for Sign")
-	}
-
-	var r0 context.Context
-	var r1 error
-	if rf, ok := ret.Get(0).(func(context.Context, utils.GrafanaMetaAccessor) (context.Context, error)); ok {
-		return rf(ctx, item)
-	}
-	if rf, ok := ret.Get(0).(func(context.Context, utils.GrafanaMetaAccessor) context.Context); ok {
-		r0 = rf(ctx, item)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(context.Context)
-		}
-	}
-
-	if rf, ok := ret.Get(1).(func(context.Context, utils.GrafanaMetaAccessor) error); ok {
-		r1 = rf(ctx, item)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
-// MockSigner_Sign_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Sign'
-type MockSigner_Sign_Call struct {
-	*mock.Call
-}
-
-// Sign is a helper method to define mock.On call
-//   - ctx context.Context
-//   - item utils.GrafanaMetaAccessor
-func (_e *MockSigner_Expecter) Sign(ctx interface{}, item interface{}) *MockSigner_Sign_Call {
-	return &MockSigner_Sign_Call{Call: _e.mock.On("Sign", ctx, item)}
-}
-
-func (_c *MockSigner_Sign_Call) Run(run func(ctx context.Context, item utils.GrafanaMetaAccessor)) *MockSigner_Sign_Call {
-	_c.Call.Run(func(args mock.Arguments) {
-		run(args[0].(context.Context), args[1].(utils.GrafanaMetaAccessor))
-	})
-	return _c
-}
-
-func (_c *MockSigner_Sign_Call) Return(_a0 context.Context, _a1 error) *MockSigner_Sign_Call {
-	_c.Call.Return(_a0, _a1)
-	return _c
-}
-
-func (_c *MockSigner_Sign_Call) RunAndReturn(run func(context.Context, utils.GrafanaMetaAccessor) (context.Context, error)) *MockSigner_Sign_Call {
-	_c.Call.Return(run)
-	return _c
-}
-
-// NewMockSigner creates a new instance of MockSigner. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
-// The first argument is typically a *testing.T value.
-func NewMockSigner(t interface {
-	mock.TestingT
-	Cleanup(func())
-}) *MockSigner {
-	mock := &MockSigner{}
-	mock.Mock.Test(t)
-
-	t.Cleanup(func() { mock.AssertExpectations(t) })
-
-	return mock
-}
@@ -1,115 +0,0 @@
-package signature
-
-import (
-	"context"
-	"errors"
-	"fmt"
-	"strings"
-	"sync"
-
-	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
-	"k8s.io/client-go/dynamic"
-
-	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
-	"github.com/grafana/grafana/pkg/apimachinery/utils"
-	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
-)
-
-const maxUsers = 10000
-
-type loadUsersOnceSigner struct {
-	signatures map[string]repository.CommitSignature
-	client     dynamic.ResourceInterface
-	once       sync.Once
-	onceErr    error
-}
-
-// NewLoadUsersOnceSigner returns a Signer that loads the signatures from users
-// it will only load the signatures once and cache them
-// if the user is not found, it will use the grafana user as the author
-func NewLoadUsersOnceSigner(client dynamic.ResourceInterface) Signer {
-	return &loadUsersOnceSigner{
-		client:     client,
-		once:       sync.Once{},
-		signatures: map[string]repository.CommitSignature{},
-	}
-}
-
-func (s *loadUsersOnceSigner) Sign(ctx context.Context, item utils.GrafanaMetaAccessor) (context.Context, error) {
-	if s.onceErr != nil {
-		return ctx, fmt.Errorf("load signatures: %w", s.onceErr)
-	}
-
-	var err error
-	s.once.Do(func() {
-		s.signatures, err = s.load(ctx, s.client)
-		s.onceErr = err
-	})
-	if err != nil {
-		return ctx, fmt.Errorf("load signatures: %w", err)
-	}
-
-	id := item.GetUpdatedBy()
-	if id == "" {
-		id = item.GetCreatedBy()
-	}
-	if id == "" {
-		id = "grafana"
-	}
-
-	sig := s.signatures[id] // lookup
-	if sig.Name == "" && sig.Email == "" {
-		sig.Name = id
-	}
-	t, err := item.GetUpdatedTimestamp()
-	if err == nil && t != nil {
-		sig.When = *t
-	} else {
-		sig.When = item.GetCreationTimestamp().Time
-	}
-
-	return repository.WithAuthorSignature(ctx, sig), nil
-}
-
-func (s *loadUsersOnceSigner) load(ctx context.Context, client dynamic.ResourceInterface) (map[string]repository.CommitSignature, error) {
-	userInfo := make(map[string]repository.CommitSignature)
-	var count int
-	err := resources.ForEach(ctx, client, func(item *unstructured.Unstructured) error {
-		count++
-		if count > maxUsers {
-			return errors.New("too many users")
-		}
-
-		sig := repository.CommitSignature{}
-		// FIXME: should we improve logging here?
-		var (
-			ok  bool
-			err error
-		)
-
-		sig.Name, ok, err = unstructured.NestedString(item.Object, "spec", "login")
-		if !ok || err != nil {
-			return nil
-		}
-		sig.Email, ok, err = unstructured.NestedString(item.Object, "spec", "email")
-		if !ok || err != nil {
-			return nil
-		}
-
-		if sig.Name == sig.Email {
-			if sig.Name == "" {
-				sig.Name = item.GetName()
-			} else if strings.Contains(sig.Email, "@") {
-				sig.Email = "" // don't use the same value for name+email
-			}
-		}
-
-		userInfo["user:"+item.GetName()] = sig
-		return nil
-	})
-	if err != nil {
-		return nil, err
-	}
-
-	return userInfo, nil
-}
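The deleted loadUsersOnceSigner above caches an expensive user lookup behind `sync.Once` and remembers the first error so every later call fails fast instead of retrying. A self-contained sketch of that load-once-with-cached-error pattern, using a hypothetical `onceLoader` rather than the Grafana types:

```go
package main

import (
	"fmt"
	"sync"
)

// onceLoader runs loadFn at most once; every caller after the first sees the
// cached map or the cached error, never a second load.
type onceLoader struct {
	once   sync.Once
	data   map[string]string
	err    error
	loadFn func() (map[string]string, error)
}

func (l *onceLoader) get(key string) (string, error) {
	l.once.Do(func() {
		l.data, l.err = l.loadFn()
	})
	if l.err != nil {
		// Wrap the cached error the way the deleted signer did.
		return "", fmt.Errorf("load signatures: %w", l.err)
	}
	return l.data[key], nil
}

func main() {
	loads := 0
	l := &onceLoader{loadFn: func() (map[string]string, error) {
		loads++
		return map[string]string{"user:1": "johndoe"}, nil
	}}

	v, _ := l.get("user:1")
	l.get("user:1") // served from cache; loadFn is not called again
	fmt.Println(v, loads) // prints johndoe 1
}
```

One design consequence, visible in the deleted code as well: a transient failure on the first load is sticky for the lifetime of the signer, which is acceptable there because a signer instance is scoped to a single job.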
@@ -1,357 +0,0 @@
-package signature
-
-import (
-	"context"
-	"errors"
-	"fmt"
-	"testing"
-	"time"
-
-	"github.com/stretchr/testify/require"
-	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
-	"k8s.io/client-go/dynamic"
-
-	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
-	"github.com/grafana/grafana/pkg/apimachinery/utils"
-)
-
-// mockDynamicInterface implements a simplified version of the dynamic.ResourceInterface
-type mockDynamicInterface struct {
-	dynamic.ResourceInterface
-	items []unstructured.Unstructured
-	err   error
-}
-
-func (m *mockDynamicInterface) List(ctx context.Context, opts metav1.ListOptions) (*unstructured.UnstructuredList, error) {
-	if m.err != nil {
-		return nil, m.err
-	}
-	return &unstructured.UnstructuredList{
-		Items: m.items,
-	}, nil
-}
-
-type mockGrafanaMetaAccessor struct {
-	utils.GrafanaMetaAccessor
-	createdBy           string
-	updatedBy           string
-	creationTimestamp   time.Time
-	updatedTimestamp    *time.Time
-	updatedTimestampErr error
-}
-
-func (m *mockGrafanaMetaAccessor) GetCreatedBy() string {
-	return m.createdBy
-}
-
-func (m *mockGrafanaMetaAccessor) GetUpdatedBy() string {
-	return m.updatedBy
-}
-
-func (m *mockGrafanaMetaAccessor) GetCreationTimestamp() metav1.Time {
-	return metav1.Time{Time: m.creationTimestamp}
-}
-
-func (m *mockGrafanaMetaAccessor) GetUpdatedTimestamp() (*time.Time, error) {
-	if m.updatedTimestampErr != nil {
-		return nil, m.updatedTimestampErr
-	}
-	return m.updatedTimestamp, nil
-}
-
-func TestLoadUsersOnceSigner_Sign(t *testing.T) {
-	baseTime := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
-	updateTime := time.Date(2024, 1, 2, 0, 0, 0, 0, time.UTC)
-
-	tests := []struct {
-		name          string
-		items         []unstructured.Unstructured
-		meta          *mockGrafanaMetaAccessor
-		clientErr     error
-		expectedSig   repository.CommitSignature
-		expectedError string
-	}{
-		{
-			name: "should sign with user info when user exists",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							"login": "johndoe",
-							"email": "john@example.com",
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-				updatedTimestamp:  &updateTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name:  "johndoe",
-				Email: "john@example.com",
-				When:  updateTime,
-			},
-		},
-		{
-			name: "should fallback to created by when updated by is empty",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							"login": "johndoe",
-							"email": "john@example.com",
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				createdBy:         "user:user1",
-				creationTimestamp: baseTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name:  "johndoe",
-				Email: "john@example.com",
-				When:  baseTime,
-			},
-		},
-		{
-			name: "should use grafana when no user info available",
-			meta: &mockGrafanaMetaAccessor{
-				creationTimestamp: baseTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name: "grafana",
-				When: baseTime,
-			},
-		},
-		{
-			name: "should handle user with same login and email",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							"login": "john@example.com",
-							"email": "john@example.com",
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-				updatedTimestamp:  &updateTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name:  "john@example.com",
-				Email: "",
-				When:  updateTime,
-			},
-		},
-		{
-			name: "should handle empty login and email",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							"login": "",
-							"email": "",
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-				updatedTimestamp:  &updateTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name:  "user1",
-				Email: "",
-				When:  updateTime,
-			},
-		},
-		{
-			name: "should handle empty email",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							"login": "johndoe",
-							"email": "",
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-				updatedTimestamp:  &updateTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name:  "johndoe",
-				Email: "",
-				When:  updateTime,
-			},
-		},
-		{
-			name: "should fail when too many users",
-			items: func() []unstructured.Unstructured {
-				items := make([]unstructured.Unstructured, maxUsers+1)
-				for i := 0; i < maxUsers+1; i++ {
-					items[i] = unstructured.Unstructured{
-						Object: map[string]interface{}{
-							"metadata": map[string]interface{}{
-								"name": "user1",
-							},
-							"spec": map[string]interface{}{
-								"login": "johndoe",
-								"email": "john@example.com",
-							},
-						},
-					}
-				}
-				return items
-			}(),
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-			},
-			expectedError: "load signatures: too many users",
-		},
-		{
-			name: "should handle missing user fields gracefully",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							// missing login and email
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-			},
-			expectedSig: repository.CommitSignature{
-				Name: "user:user1",
-				When: baseTime,
-			},
-		},
-		{
-			name: "should use creation timestamp when update timestamp has error",
-			items: []unstructured.Unstructured{
-				{
-					Object: map[string]interface{}{
-						"metadata": map[string]interface{}{
-							"name": "user1",
-						},
-						"spec": map[string]interface{}{
-							"login": "johndoe",
-							"email": "john@example.com",
-						},
-					},
-				},
-			},
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:           "user:user1",
-				creationTimestamp:   baseTime,
-				updatedTimestampErr: errors.New("update timestamp error"),
-			},
-			expectedSig: repository.CommitSignature{
-				Name:  "johndoe",
-				Email: "john@example.com",
-				When:  baseTime,
-			},
-		},
-		{
-			name: "should fail when listing users fails",
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-			},
-			clientErr:     fmt.Errorf("failed to list users"),
-			expectedError: "load signatures: error executing list: failed to list users",
-		},
-		{
-			name: "should handle empty user list",
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-			},
-			items: []unstructured.Unstructured{},
-			expectedSig: repository.CommitSignature{
-				Name: "user:user1",
-				When: baseTime,
-			},
-		},
-		{
-			name: "should handle multiple calls with error",
-			meta: &mockGrafanaMetaAccessor{
-				updatedBy:         "user:user1",
-				creationTimestamp: baseTime,
-			},
-			clientErr:     fmt.Errorf("failed to list users"),
-			expectedError: "load signatures: error executing list: failed to list users",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			client := &mockDynamicInterface{
-				items: tt.items,
-				err:   tt.clientErr,
-			}
-
-			signer := NewLoadUsersOnceSigner(client)
-			ctx := context.Background()
-
-			signedCtx, err := signer.Sign(ctx, tt.meta)
-
-			if tt.expectedError != "" {
-				require.Error(t, err)
-				require.Contains(t, err.Error(), tt.expectedError)
-
-				// Test that subsequent calls also fail with the same error
-				_, err2 := signer.Sign(ctx, tt.meta)
-				require.Error(t, err2)
-				require.Contains(t, err2.Error(), tt.expectedError)
-				return
-			}
-
-			require.NoError(t, err)
-			sig := repository.GetAuthorSignature(signedCtx)
-			require.NotNil(t, sig)
-			require.Equal(t, tt.expectedSig.Name, sig.Name)
-			require.Equal(t, tt.expectedSig.Email, sig.Email)
-			require.Equal(t, tt.expectedSig.When, sig.When)
-
-			// Test that subsequent calls use cached data
-			signedCtx2, err := signer.Sign(ctx, tt.meta)
-			require.NoError(t, err)
-			sig2 := repository.GetAuthorSignature(signedCtx2)
-			require.Equal(t, sig, sig2)
-		})
-	}
-}
@@ -14,7 +14,6 @@ import (
 	authlib "github.com/grafana/authlib/types"
 	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
 	"github.com/grafana/grafana/pkg/services/apiserver/builder"
-	"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
 	"github.com/grafana/grafana/pkg/util/errhttp"
 )

@@ -156,16 +155,9 @@ func (b *APIBuilder) handleSettings(w http.ResponseWriter, r *http.Request) {
 		return
 	}

-	legacyStorage := false
-	if b.storageStatus != nil {
-		legacyStorage = dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, b.storageStatus)
-	}
-
 	settings := provisioning.RepositoryViewList{
-		Items:          make([]provisioning.RepositoryView, len(all)),
-		AllowedTargets: b.allowedTargets,
-		// FIXME: this shouldn't be here in provisioning but at the dual writer or something about the storage
-		LegacyStorage: legacyStorage,
+		Items:                    make([]provisioning.RepositoryView, len(all)),
+		AllowedTargets:           b.allowedTargets,
 		AvailableRepositoryTypes: b.repoFactory.Types(),
 		AllowImageRendering:      b.allowImageRendering,
 	}

@@ -5295,10 +5295,6 @@
       "com.github.grafana.grafana.apps.provisioning.pkg.apis.provisioning.v0alpha1.MigrateJobOptions": {
         "type": "object",
         "properties": {
-          "history": {
-            "description": "Preserve history (if possible)",
-            "type": "boolean"
-          },
           "message": {
             "description": "Message to use when committing the changes in a single commit",
             "type": "string"
@@ -5828,10 +5824,6 @@
           "kind": {
             "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
             "type": "string"
           },
-          "legacyStorage": {
-            "description": "The backend is using legacy storage FIXME: Not sure where this should be exposed... but we need it somewhere The UI should force the onboarding workflow when this is true",
-            "type": "boolean"
-          }
         }
       },

@@ -10,7 +10,9 @@ import (
 	"k8s.io/apimachinery/pkg/util/version"
 	"k8s.io/client-go/kubernetes"

+	"github.com/grafana/grafana/pkg/apiserver/rest"
 	"github.com/grafana/grafana/pkg/services/featuremgmt"
+	"github.com/grafana/grafana/pkg/setting"
 	"github.com/grafana/grafana/pkg/tests/testinfra"
 	"github.com/grafana/grafana/pkg/tests/testsuite"
 	"github.com/grafana/grafana/pkg/util/testutil"
@@ -37,6 +39,11 @@ func TestIntegrationOpenAPIs(t *testing.T) {
 			featuremgmt.FlagKubernetesAlertingHistorian,
 			featuremgmt.FlagKubernetesLogsDrilldown,
 		},
+		// Explicitly configure with mode 5 the resources supported by provisioning.
+		UnifiedStorageConfig: map[string]setting.UnifiedStorageConfig{
+			"dashboards.dashboard.grafana.app": {DualWriterMode: rest.Mode5},
+			"folders.folder.grafana.app":       {DualWriterMode: rest.Mode5},
+		},
 	})

 	t.Run("check valid version response", func(t *testing.T) {
@@ -681,12 +681,17 @@ func runGrafana(t *testing.T, options ...grafanaOption) *provisioningTestHelper
 		EnableFeatureToggles: []string{
 			featuremgmt.FlagProvisioning,
 		},
+		// Provisioning requires resources to be fully migrated to unified storage.
+		// Mode5 ensures reads/writes go to unified storage, and EnableMigration
+		// enables the data migration at startup to migrate legacy data.
 		UnifiedStorageConfig: map[string]setting.UnifiedStorageConfig{
 			"dashboards.dashboard.grafana.app": {
-				DualWriterMode: grafanarest.Mode5,
+				DualWriterMode:  grafanarest.Mode5,
+				EnableMigration: true,
 			},
 			"folders.folder.grafana.app": {
-				DualWriterMode: grafanarest.Mode5,
+				DualWriterMode:  grafanarest.Mode5,
+				EnableMigration: true,
 			},
 		},
 		PermittedProvisioningPaths: ".|" + provisioningPath,
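The comment added above is the key invariant: Mode5 routes all reads and writes to unified storage, and `EnableMigration` migrates legacy data at startup. A toy model of just that property — `dualWriterMode`, `storageConfig`, and `readsUnifiedOnly` are invented names for illustration, not Grafana's actual `grafanarest` API:

```go
package main

import "fmt"

// dualWriterMode models the dual-writer modes only to the extent this test
// relies on them: mode5 means unified storage is the sole backend.
type dualWriterMode int

const (
	mode1 dualWriterMode = iota + 1 // legacy primary (illustrative)
	mode2
	mode3
	mode4
	mode5 // unified storage only
)

// storageConfig mirrors the shape of the per-resource config above.
type storageConfig struct {
	DualWriterMode  dualWriterMode
	EnableMigration bool // migrate legacy data at startup
}

// readsUnifiedOnly reports whether the config routes all reads (and writes)
// to unified storage, which is what provisioning requires.
func readsUnifiedOnly(c storageConfig) bool {
	return c.DualWriterMode == mode5
}

func main() {
	cfg := map[string]storageConfig{
		"dashboards.dashboard.grafana.app": {DualWriterMode: mode5, EnableMigration: true},
		"folders.folder.grafana.app":       {DualWriterMode: mode5, EnableMigration: true},
	}
	for res, c := range cfg {
		fmt.Printf("%s unifiedOnly=%v migrate=%v\n", res, readsUnifiedOnly(c), c.EnableMigration)
	}
}
```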
@@ -3,9 +3,9 @@ import { skipToken } from '@reduxjs/toolkit/query';
 import { useState } from 'react';

 import { GrafanaTheme2 } from '@grafana/data';
-import { Trans, t } from '@grafana/i18n';
+import { t } from '@grafana/i18n';
 import { config } from '@grafana/runtime';
-import { Alert, Stack, useStyles2 } from '@grafana/ui';
+import { Stack, useStyles2 } from '@grafana/ui';
 import { Repository, useGetFrontendSettingsQuery } from 'app/api/clients/provisioning/v0alpha1';

 import provisioningSvg from '../img/provisioning.svg';
@@ -127,7 +127,6 @@ export default function GettingStarted({ items }: Props) {
   const settingsQuery = useGetFrontendSettingsQuery(settingsArg, {
     refetchOnMountOrArgChange: true,
   });
-  const legacyStorage = settingsQuery.data?.legacyStorage;
   const hasItems = Boolean(settingsQuery.data?.items?.length);
   const { hasPublicAccess, hasImageRenderer, hasRequiredFeatures } = getConfigurationStatus();
   const [showInstructionsModal, setShowModal] = useState(false);
@@ -135,20 +134,6 @@ export default function GettingStarted({ items }: Props) {

   return (
     <>
-      {legacyStorage && (
-        <Alert
-          severity="info"
-          title={t(
-            'provisioning.getting-started.title-setting-connection-could-cause-temporary-outage',
-            'Setting up this connection could cause a temporary outage'
-          )}
-        >
-          <Trans i18nKey="provisioning.getting-started.alert-temporary-outage">
-            When you connect your whole instance, dashboards will be unavailable while running the migration. We
-            recommend warning your users before starting the process.
-          </Trans>
-        </Alert>
-      )}
       <Stack direction="column" gap={6} wrap="wrap">
         <Stack gap={10} alignItems="center">
           <div className={styles.imageContainer}>

@@ -1,11 +1,8 @@
 import { useMemo, useState } from 'react';

-import { Trans, t } from '@grafana/i18n';
-import { Alert, ConfirmModal, Stack, Tab, TabContent, TabsBar } from '@grafana/ui';
-import {
-  useDeletecollectionRepositoryMutation,
-  useGetFrontendSettingsQuery,
-} from 'app/api/clients/provisioning/v0alpha1';
+import { t } from '@grafana/i18n';
+import { ConfirmModal, Stack, Tab, TabContent, TabsBar } from '@grafana/ui';
+import { useDeletecollectionRepositoryMutation } from 'app/api/clients/provisioning/v0alpha1';
 import { Page } from 'app/core/components/Page/Page';

 import GettingStarted from './GettingStarted/GettingStarted';
@@ -22,7 +19,6 @@ enum TabSelection {

 export default function HomePage() {
   const [items, isLoading] = useRepositoryList({ watch: true });
-  const settings = useGetFrontendSettingsQuery();
   const [deleteAll] = useDeletecollectionRepositoryMutation();
   const [showDeleteModal, setShowDeleteModal] = useState(false);
   const [activeTab, setActiveTab] = useState<TabSelection>(TabSelection.Repositories);
@@ -71,24 +67,6 @@ export default function HomePage() {
       actions={activeTab === TabSelection.Repositories && <ConnectRepositoryButton items={items} />}
     >
       <Page.Contents isLoading={isLoading}>
-        {settings.data?.legacyStorage && (
-          <Alert
-            title={t('provisioning.home-page.title-legacy-storage-detected', 'Legacy storage detected')}
-            severity="error"
-            buttonContent={
-              <Trans i18nKey="provisioning.home-page.remove-all-configured-repositories">
-                Remove all configured repositories
-              </Trans>
-            }
-            onRemove={() => {
-              setShowDeleteModal(true);
-            }}
-          >
-            <Trans i18nKey="provisioning.home-page.configured-repositories-while-running-legacy-storage">
-              Configured repositories will not work while running legacy storage.
-            </Trans>
-          </Alert>
-        )}
         <InlineSecureValueWarning items={items} />
         <ConfirmModal
           isOpen={showDeleteModal}

@@ -5,7 +5,7 @@ import { useParams } from 'react-router-dom-v5-compat';
 import { SelectableValue, urlUtil } from '@grafana/data';
 import { Trans, t } from '@grafana/i18n';
 import { Alert, EmptyState, Spinner, Stack, Tab, TabContent, TabsBar, Text, TextLink } from '@grafana/ui';
-import { useGetFrontendSettingsQuery, useListRepositoryQuery } from 'app/api/clients/provisioning/v0alpha1';
+import { useListRepositoryQuery } from 'app/api/clients/provisioning/v0alpha1';
 import { Page } from 'app/core/components/Page/Page';
 import { useQueryParams } from 'app/core/hooks/useQueryParams';
 import { isNotFoundError } from 'app/features/alerting/unified/api/util';
@@ -32,7 +32,6 @@ export default function RepositoryStatusPage() {
   const data = query.data?.items?.[0];
   const location = useLocation();
   const [queryParams] = useQueryParams();
-  const settings = useGetFrontendSettingsQuery();

   const tab = queryParams['tab'] ?? TabSelection.Overview;

@@ -64,16 +63,6 @@ export default function RepositoryStatusPage() {
       actions={data && <RepositoryActions repository={data} />}
     >
       <Page.Contents isLoading={query.isLoading}>
-        {settings.data?.legacyStorage && (
-          <Alert
-            title={t('provisioning.repository-status-page.title-legacy-storage', 'Legacy Storage')}
-            severity="error"
-          >
-            <Trans i18nKey="provisioning.repository-status-page.legacy-storage-message">
-              Instance is not yet running unified storage -- requires migration wizard
-            </Trans>
-          </Alert>
-        )}
         <InlineSecureValueWarning repo={data} />
         {notFound ? (
           <EmptyState

@@ -143,7 +143,7 @@ describe('BootstrapStep', () => {
   it('should render correct info for GitHub repository type', async () => {
     setup();
     expect(screen.getAllByText('External storage')).toHaveLength(2);
-    expect(screen.getAllByText('Empty')).toHaveLength(3); // Three elements should have the role "Empty" (2 external + 1 unmanaged)
+    expect(screen.getAllByText('Empty')).toHaveLength(4); // Four elements should show "Empty" (2 external + 2 unmanaged, one per card)
   });

   it('should render correct info for local file repository type', async () => {
@@ -198,7 +198,8 @@ describe('BootstrapStep', () => {

     setup();

-    expect(await screen.findByText('7 resources')).toBeInTheDocument();
+    // Two elements display "7 resources": one in the external storage card and one in unmanaged resources card
+    expect(await screen.findAllByText('7 resources')).toHaveLength(2);
   });
 });

@@ -207,21 +208,7 @@ describe('BootstrapStep', () => {
     setup();

     const mockUseResourceStats = require('./hooks/useResourceStats').useResourceStats;
-    expect(mockUseResourceStats).toHaveBeenCalledWith('test-repo', undefined);
-  });
-
-  it('should use useResourceStats hook with legacy storage flag', async () => {
-    setup({
-      settingsData: {
-        legacyStorage: true,
-        allowImageRendering: true,
-        items: [],
-        availableRepositoryTypes: [],
-      },
-    });
-
-    const mockUseResourceStats = require('./hooks/useResourceStats').useResourceStats;
-    expect(mockUseResourceStats).toHaveBeenCalledWith('test-repo', true);
+    expect(mockUseResourceStats).toHaveBeenCalledWith('test-repo', 'instance');
   });
 });

@@ -255,7 +242,6 @@ describe('BootstrapStep', () => {

     setup({
       settingsData: {
-        legacyStorage: true,
         allowImageRendering: true,
         items: [],
         availableRepositoryTypes: [],
@@ -37,7 +37,7 @@ export const BootstrapStep = memo(function BootstrapStep({ settingsData, repoNam
   const repositoryType = watch('repository.type');
   const { enabledOptions, disabledOptions } = useModeOptions(repoName, settingsData);
   const { target } = enabledOptions?.[0];
-  const { resourceCountString, fileCountString, isLoading } = useResourceStats(repoName, settingsData?.legacyStorage);
+  const { resourceCountString, fileCountString, isLoading } = useResourceStats(repoName, selectedTarget);
   const styles = useStyles2(getStyles);

   useEffect(() => {
@@ -103,7 +103,6 @@ export const BootstrapStep = memo(function BootstrapStep({ settingsData, repoNam
           <div className={styles.divider} />

           <BootstrapStepResourceCounting
-            target={action.target}
             fileCountString={fileCountString}
             resourceCountString={resourceCountString}
           />
@@ -1,40 +1,23 @@
 import { Trans } from '@grafana/i18n';
 import { Stack, Text } from '@grafana/ui';

-import { Target } from './types';
-
 export function BootstrapStepResourceCounting({
-  target,
   fileCountString,
   resourceCountString,
 }: {
-  target: Target;
   fileCountString: string;
   resourceCountString: string;
 }) {
-  if (target === 'instance') {
-    return (
-      <Stack direction="row" gap={3}>
-        <Stack gap={1}>
-          <Trans i18nKey="provisioning.bootstrap-step.external-storage-label">External storage</Trans>
-          <Text color="primary">{fileCountString}</Text>
-        </Stack>
-        <Stack gap={1}>
-          <Trans i18nKey="provisioning.bootstrap-step.unmanaged-resources-label">Unmanaged resources</Trans>{' '}
-          <Text color="primary">{resourceCountString}</Text>
-        </Stack>
-      </Stack>
-    );
-  }
-
-  if (target === 'folder') {
-    return (
+  return (
+    <Stack direction="row" gap={3}>
       <Stack gap={1}>
-        <Trans i18nKey="provisioning.bootstrap-step.external-storage-label">External storage</Trans>{' '}
+        <Trans i18nKey="provisioning.bootstrap-step.external-storage-label">External storage</Trans>
         <Text color="primary">{fileCountString}</Text>
       </Stack>
-    );
-  }
-
-  return null;
+      <Stack gap={1}>
+        <Trans i18nKey="provisioning.bootstrap-step.unmanaged-resources-label">Unmanaged resources</Trans>{' '}
+        <Text color="primary">{resourceCountString}</Text>
+      </Stack>
+    </Stack>
+  );
 }

@@ -121,7 +121,6 @@ describe('ProvisioningWizard', () => {
     mockUseGetFrontendSettingsQuery.mockReturnValue({
       data: {
         items: [],
-        legacyStorage: false,
         allowImageRendering: true,
         availableRepositoryTypes: ['github', 'gitlab', 'bitbucket', 'git', 'local'],
       },
@@ -89,7 +89,6 @@ export const ProvisioningWizard = memo(function ProvisioningWizard({
     activeStep === 'finish' && (isStepSuccess || completedSteps.includes('synchronize'));
   const shouldUseCancelBehavior = activeStep === 'connection' || isSyncCompleted || isFinishWithSyncCompleted;

-  const isLegacyStorage = Boolean(settingsData?.legacyStorage);
   const navigate = useNavigate();

   const steps = getSteps();
@@ -122,12 +121,10 @@ export const ProvisioningWizard = memo(function ProvisioningWizard({
     shouldSkipSync,
     requiresMigration,
     isLoading: isResourceStatsLoading,
-  } = useResourceStats(repoName, isLegacyStorage, syncTarget);
+  } = useResourceStats(repoName, syncTarget);
   const { createSyncJob, isLoading: isCreatingSkipJob } = useCreateSyncJob({
     repoName: repoName,
     requiresMigration,
-    repoType,
-    isLegacyStorage,
     setStepStatusInfo,
   });

@@ -412,11 +409,7 @@ export const ProvisioningWizard = memo(function ProvisioningWizard({
           {activeStep === 'connection' && <ConnectStep />}
           {activeStep === 'bootstrap' && <BootstrapStep settingsData={settingsData} repoName={repoName} />}
           {activeStep === 'synchronize' && (
-            <SynchronizeStep
-              isLegacyStorage={isLegacyStorage}
-              onCancel={handleRepositoryDeletion}
-              isCancelling={isCancelling}
-            />
+            <SynchronizeStep onCancel={handleRepositoryDeletion} isCancelling={isCancelling} />
           )}
           {activeStep === 'finish' && <FinishStep />}
         </div>
@@ -3,7 +3,7 @@ import { memo, useEffect, useState } from 'react';
 import { useFormContext } from 'react-hook-form';

 import { Trans, t } from '@grafana/i18n';
-import { Alert, Button, Checkbox, Field, Spinner, Stack, Text, TextLink } from '@grafana/ui';
+import { Alert, Button, Field, Spinner, Stack, Text, TextLink } from '@grafana/ui';
 import { Job, useGetRepositoryStatusQuery } from 'app/api/clients/provisioning/v0alpha1';

 import { JobStatus } from '../Job/JobStatus';
@@ -15,25 +15,18 @@ import { useResourceStats } from './hooks/useResourceStats';
 import { WizardFormData } from './types';

 export interface SynchronizeStepProps {
-  isLegacyStorage?: boolean;
   onCancel?: (repoName: string) => void;
   isCancelling?: boolean;
 }

-export const SynchronizeStep = memo(function SynchronizeStep({
-  isLegacyStorage,
-  onCancel,
-  isCancelling,
-}: SynchronizeStepProps) {
-  const { getValues, register, watch } = useFormContext<WizardFormData>();
+export const SynchronizeStep = memo(function SynchronizeStep({ onCancel, isCancelling }: SynchronizeStepProps) {
+  const { watch } = useFormContext<WizardFormData>();
   const { setStepStatusInfo } = useStepStatus();
-  const [repoName = '', repoType] = watch(['repositoryName', 'repository.type']);
-  const { requiresMigration } = useResourceStats(repoName, isLegacyStorage);
-  const { createSyncJob, supportsHistory } = useCreateSyncJob({
+  const [repoName = '', syncTarget] = watch(['repositoryName', 'repository.sync.target']);
+  const { requiresMigration } = useResourceStats(repoName, syncTarget);
+  const { createSyncJob } = useCreateSyncJob({
     repoName,
     requiresMigration,
-    repoType,
-    isLegacyStorage,
     setStepStatusInfo,
   });
   const [job, setJob] = useState<Job>();
@@ -70,8 +63,7 @@ export const SynchronizeStep = memo(function SynchronizeStep({ onCancel, isCance
   const isButtonDisabled = hasError || (checked !== undefined && isRepositoryHealthy === false) || healthStatusNotReady;

   const startSynchronization = async () => {
-    const [history] = getValues(['migrate.history']);
-    const response = await createSyncJob({ history });
+    const response = await createSyncJob();
     if (response) {
       setJob(response);
     }
@@ -118,20 +110,19 @@ export const SynchronizeStep = memo(function SynchronizeStep({ onCancel, isCance
           <Alert
             title={t(
               'provisioning.wizard.alert-title',
-              'Important: No data or configuration will be lost, but dashboards will be temporarily unavailable for a few minutes.'
+              'Important: No data or configuration will be lost. Dashboards remain accessible during migration, but changes made during this process may not be exported.'
             )}
             severity={'info'}
           >
             <ul style={{ marginLeft: '16px' }}>
               <li>
                 <Trans i18nKey="provisioning.wizard.alert-point-1">
-                  Resources won't be able to be created, edited, or deleted during this process. In the last step,
-                  they will disappear.
+                  Resources can still be created, edited, or deleted during this process, but changes may not be exported.
                 </Trans>
               </li>
               <li>
                 <Trans i18nKey="provisioning.wizard.alert-point-2">
-                  Once provisioning is complete, resources will reappear and be managed through external storage.
+                  Once provisioning is complete, resources will be marked as managed through external storage.
                 </Trans>
               </li>
               <li>
@@ -141,7 +132,8 @@ export const SynchronizeStep = memo(function SynchronizeStep({ onCancel, isCance
               </li>
               <li>
                 <Trans i18nKey="provisioning.wizard.alert-point-4">
-                  Enterprise instance administrators can display an announcement banner to users. See{' '}
+                  Enterprise instance administrators can display an announcement banner to notify users that migration is
+                  in progress. See{' '}
                   <TextLink external href="https://grafana.com/docs/grafana/latest/administration/announcement-banner/">
                     this guide
                   </TextLink>{' '}
@@ -151,26 +143,6 @@ export const SynchronizeStep = memo(function SynchronizeStep({ onCancel, isCance
             </ul>
           </Alert>
         )}
-        {supportsHistory && (
-          <>
-            <Text element="h3">
-              <Trans i18nKey="provisioning.synchronize-step.synchronization-options">Synchronization options</Trans>
-            </Text>
-            <Field noMargin>
-              <Checkbox
-                {...register('migrate.history')}
-                id="migrate-history"
-                label={t('provisioning.wizard.sync-option-history', 'History')}
-                description={
-                  <Trans i18nKey="provisioning.synchronize-step.synchronization-description">
-                    Include commits for each historical value
-                  </Trans>
-                }
-              />
-            </Field>
-          </>
-        )}

         {healthStatusNotReady ? (
           <>
             <Stack>
@@ -1,28 +1,18 @@
 import { t } from '@grafana/i18n';
 import { useCreateRepositoryJobsMutation } from 'app/api/clients/provisioning/v0alpha1';

-import { isGitProvider } from '../../utils/repositoryTypes';
-import { RepoType, StepStatusInfo } from '../types';
+import { StepStatusInfo } from '../types';

 export interface UseCreateSyncJobParams {
   repoName: string;
   requiresMigration: boolean;
-  repoType: RepoType;
-  isLegacyStorage?: boolean;
   setStepStatusInfo?: (info: StepStatusInfo) => void;
 }

-export function useCreateSyncJob({
-  repoName,
-  requiresMigration,
-  repoType,
-  isLegacyStorage,
-  setStepStatusInfo,
-}: UseCreateSyncJobParams) {
+export function useCreateSyncJob({ repoName, requiresMigration, setStepStatusInfo }: UseCreateSyncJobParams) {
   const [createJob, { isLoading }] = useCreateRepositoryJobsMutation();
-  const supportsHistory = isGitProvider(repoType) && isLegacyStorage;

-  const createSyncJob = async (options?: { history?: boolean }) => {
+  const createSyncJob = async () => {
     if (!repoName) {
       setStepStatusInfo?.({
         status: 'error',
@ -36,9 +26,7 @@ export function useCreateSyncJob({
|
|||
|
||||
const jobSpec = requiresMigration
|
||||
? {
|
||||
migrate: {
|
||||
history: (options?.history || false) && supportsHistory,
|
||||
},
|
||||
migrate: {},
|
||||
}
|
||||
: {
|
||||
pull: {
|
||||
|
|
@ -59,7 +47,7 @@ export function useCreateSyncJob({
|
|||
return null;
|
||||
}
|
||||
|
||||
setStepStatusInfo?.({ status: 'success' });
|
||||
// Job status will be tracked by JobStatus component, keep status as 'running'
|
||||
return response;
|
||||
} catch (error) {
|
||||
setStepStatusInfo?.({
|
||||
|
|
@ -73,6 +61,5 @@ export function useCreateSyncJob({
|
|||
return {
|
||||
createSyncJob,
|
||||
isLoading,
|
||||
supportsHistory,
|
||||
};
|
||||
}
|
||||
|
|
|
|||
|
|
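The hunks above collapse the job spec to either an empty `migrate` object or a pull job, since the legacy-storage `history` option is gone. A minimal sketch of the resulting selection logic, extracted as a pure function; the `JobSpec` shape and the `incremental` pull option are assumptions for illustration, not the exact Grafana types:

```typescript
// Sketch of the simplified job spec selection after removing the
// legacy-storage `history` option (types assumed, not Grafana's own).
type JobSpec = { migrate: Record<string, never> } | { pull: { incremental: boolean } };

function buildJobSpec(requiresMigration: boolean): JobSpec {
  // Migration jobs no longer take options, so `migrate` is always empty.
  return requiresMigration ? { migrate: {} } : { pull: { incremental: false } };
}
```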
@@ -11,14 +11,13 @@ import { ModeOption } from '../types';
 function filterModeOptions(modeOptions: ModeOption[], repoName: string, settings?: RepositoryViewList): ModeOption[] {
   const folderConnected = settings?.items?.some((item) => item.target === 'folder' && item.name !== repoName);
   const allowedTargets = settings?.allowedTargets || ['instance', 'folder'];
-  const legacyStorageEnabled = settings?.legacyStorage;

   return modeOptions.map((option) => {
     if (option.disabled) {
       return option;
     }

-    const disabledReason = resolveDisabledReason(option, { allowedTargets, folderConnected, legacyStorageEnabled });
+    const disabledReason = resolveDisabledReason(option, { allowedTargets, folderConnected });

     if (!disabledReason) {
       return option;

@@ -35,7 +34,6 @@ function filterModeOptions(modeOptions: ModeOption[], repoName: string, settings
 type DisableContext = {
   allowedTargets: string[];
   folderConnected?: boolean;
-  legacyStorageEnabled?: boolean;
 };

 // Returns a translated reason why the given mode option should be disabled.

@@ -47,13 +45,6 @@ function resolveDisabledReason(option: ModeOption, context: DisableContext) {
     );
   }

-  if (context.legacyStorageEnabled && option.target !== 'instance') {
-    return t(
-      'provisioning.mode-options.disabled.legacy-storage',
-      'Legacy storage mode only supports syncing the entire Grafana instance.'
-    );
-  }
-
   if (option.target === 'instance' && context.folderConnected) {
     return t(
       'provisioning.mode-options.disabled.folder-connected',
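With the legacy-storage branch removed, only two conditions can still disable a mode option: a target not in the allow-list, or an instance target while a folder is already connected. A self-contained sketch of the remaining logic, using plain strings in place of the `t()` translations and simplified local types:

```typescript
// Simplified stand-ins for the real types; not Grafana's exact definitions.
type ModeOption = { target: 'instance' | 'folder'; disabled?: boolean };
type DisableContext = { allowedTargets: string[]; folderConnected?: boolean };

// Returns a reason the option should be disabled, or undefined if allowed.
function resolveDisabledReason(option: ModeOption, context: DisableContext): string | undefined {
  if (!context.allowedTargets.includes(option.target)) {
    return 'Provisioning settings for this repository restrict syncing to specific targets.';
  }
  if (option.target === 'instance' && context.folderConnected) {
    return 'Full instance synchronization is disabled because another folder is already synced with a repository.';
  }
  return undefined;
}
```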
@@ -100,7 +100,7 @@ function getResourceStats(files?: GetRepositoryFilesApiResponse, stats?: GetReso
 /**
  * Hook that provides resource statistics and sync logic
  */
-export function useResourceStats(repoName?: string, isLegacyStorage?: boolean, syncTarget?: RepositoryView['target']) {
+export function useResourceStats(repoName?: string, syncTarget?: RepositoryView['target']) {
   const resourceStatsQuery = useGetResourceStatsQuery(repoName ? undefined : skipToken);
   const filesQuery = useGetRepositoryFilesQuery(repoName ? { name: repoName } : skipToken);

@@ -121,8 +121,8 @@ export function useResourceStats(repoName?: string, isLegacyStorage?: boolean, s
     };
   }, [resourceStatsQuery.data]);

-  const requiresMigration = isLegacyStorage || resourceCount > 0;
-  const shouldSkipSync = !isLegacyStorage && (resourceCount === 0 || syncTarget === 'folder') && fileCount === 0;
+  const requiresMigration = resourceCount > 0 && syncTarget === 'instance';
+  const shouldSkipSync = (resourceCount === 0 || syncTarget === 'folder') && fileCount === 0;

   // Format display strings
   const resourceCountDisplay =
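The new `requiresMigration`/`shouldSkipSync` expressions above are easiest to check as a truth table. A sketch that pulls them out into a pure function (the function name and return shape are illustrative, not part of the actual hook):

```typescript
// Illustrative extraction of the decision logic from useResourceStats.
type SyncTarget = 'instance' | 'folder';

function syncDecision(resourceCount: number, fileCount: number, syncTarget?: SyncTarget) {
  // Migration is only needed when existing resources would sync to the whole
  // instance; legacy storage no longer forces a migration.
  const requiresMigration = resourceCount > 0 && syncTarget === 'instance';
  // Sync can be skipped when there is nothing to pull and nothing to migrate.
  const shouldSkipSync = (resourceCount === 0 || syncTarget === 'folder') && fileCount === 0;
  return { requiresMigration, shouldSkipSync };
}
```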
@@ -11874,7 +11874,6 @@
   }
 },
 "getting-started": {
-  "alert-temporary-outage": "When you connect your whole instance, dashboards will be unavailable while running the migration. We recommend warning your users before starting the process.",
   "modal-description-public-access": "Set up public access to your Grafana instance to enable GitHub integration",
   "modal-description-required-features": "Enable required Grafana features for provisioning",
   "modal-title-set-up-public-access": "Set up public access",

@@ -11886,8 +11885,7 @@
   "step-title-copy-url": "Copy your public URL",
   "step-title-enable-feature-toggles": "Enable Required Feature Toggles",
   "step-title-start-ngrok": "Start ngrok for temporary public access",
-  "step-title-update-grafana-config": "Update your Grafana configuration",
-  "title-setting-connection-could-cause-temporary-outage": "Setting up this connection could cause a temporary outage"
+  "step-title-update-grafana-config": "Update your Grafana configuration"
 },
 "getting-started-page": {
   "subtitle-provisioning-feature": "View and manage your provisioning connections"

@@ -11956,18 +11954,15 @@
 },
 "home-page": {
   "button-delete-repositories": "Delete repositories",
-  "configured-repositories-while-running-legacy-storage": "Configured repositories will not work while running legacy storage.",
   "confirm-delete-repositories": "Are you sure you want to delete all configured repositories? This action cannot be undone.",
   "error-delete-all-repositories": "Failed to delete all repositories",
-  "remove-all-configured-repositories": "Remove all configured repositories",
   "subtitle": "View and manage your configured repositories",
   "success-all-repositories-deleted": "All configured repositories deleted",
   "tab-getting-started": "Getting started",
   "tab-getting-started-title": "Getting started",
   "tab-repositories": "Repositories",
   "tab-repositories-title": "List of repositories",
-  "title-delete-all-configured-repositories": "Delete all configured repositories",
-  "title-legacy-storage-detected": "Legacy storage detected"
+  "title-delete-all-configured-repositories": "Delete all configured repositories"
 },
 "http-utils": {
   "access-denied": "Access denied. Please check your token permissions.",

@@ -12007,7 +12002,6 @@
 "mode-options": {
   "disabled": {
     "folder-connected": "Full instance synchronization is disabled because another folder is already synced with a repository.",
-    "legacy-storage": "Legacy storage mode only supports syncing the entire Grafana instance.",
     "not-allowed": "Provisioning settings for this repository restrict syncing to specific targets. Update the repository configuration to enable this option.",
     "not-supported": "This option is not supported yet."
   },

@@ -12087,7 +12081,6 @@
 "repository-status-page": {
   "back-to-repositories": "Back to repositories",
   "cleaning-up-resources": "Cleaning up repository resources",
-  "legacy-storage-message": "Instance is not yet running unified storage -- requires migration wizard",
   "not-found": "not found",
   "not-found-message": "Repository not found",
   "repository-config-exists-configuration": "Make sure the repository config exists in the configuration file.",

@@ -12096,7 +12089,6 @@
   "tab-resources": "Resources",
   "tab-resources-title": "Repository files and resources",
   "title": "Repository Status",
-  "title-legacy-storage": "Legacy Storage",
   "title-queued-for-deletion": "Queued for deletion"
 },
 "repository-type-cards": {

@@ -12192,9 +12184,7 @@
 "synchronize-step": {
   "repository-error": "Repository error",
   "repository-error-message": "Unable to check repository status. Please verify the repository configuration and try again.",
-  "repository-unhealthy": "The repository cannot be synchronized. Cancel provisioning and try again once the issue has been resolved. See details below.",
-  "synchronization-description": "Include commits for each historical value",
-  "synchronization-options": "Synchronization options"
+  "repository-unhealthy": "The repository cannot be synchronized. Cancel provisioning and try again once the issue has been resolved. See details below."
 },
 "token-permissions-info": {
   "and-click": "and click",

@@ -12211,11 +12201,11 @@
 },
 "warning-title-default": "Warning",
 "wizard": {
-  "alert-point-1": "Resources won't be able to be created, edited, or deleted during this process. In the last step, they will disappear.",
-  "alert-point-2": "Once provisioning is complete, resources will reappear and be managed through external storage.",
+  "alert-point-1": "Resources can still be created, edited, or deleted during this process, but changes may not be exported.",
+  "alert-point-2": "Once provisioning is complete, resources will be marked as managed through external storage.",
   "alert-point-3": "The duration of this process depends on the number of resources involved.",
-  "alert-point-4": "Enterprise instance administrators can display an announcement banner to users. See <2>this guide</2> for step-by-step instructions.",
-  "alert-title": "Important: No data or configuration will be lost, but dashboards will be temporarily unavailable for a few minutes.",
+  "alert-point-4": "Enterprise instance administrators can display an announcement banner to notify users that migration is in progress. See <2>this guide</2> for step-by-step instructions.",
+  "alert-title": "Important: No data or configuration will be lost. Dashboards remain accessible during migration, but changes made during this process may not be exported.",
   "button-cancel": "Cancel",
   "button-cancelling": "Cancelling...",
   "button-next": "Finish",

@@ -12233,7 +12223,6 @@
   "step-finish": "Choose additional settings",
   "step-synchronize": "Synchronize with external storage",
   "sync-description": "Sync resources with external storage. After this one-time step, all future updates will be automatically saved to the repository and provisioned back into the instance.",
-  "sync-option-history": "History",
   "title-bootstrap": "Choose what to synchronize",
   "title-connect": "Connect to external storage",
   "title-finish": "Choose additional settings",