Update dependency prettier to v3.6.2 (#108689)

* Update dependency prettier to v3.6.2

* run prettier

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Ashley Harrison <ashley.harrison@grafana.com>
This commit is contained in:
renovate[bot] 2025-07-25 17:47:44 +01:00 committed by GitHub
parent 20cea80795
commit c94f930950
104 changed files with 50 additions and 250 deletions

View file

@ -234,7 +234,6 @@ Grunt & Watch tasks:
- binary to `/usr/sbin/grafana-server`
- init.d script improvements, renamed to `/etc/init.d/grafana-server`
- added default file with environment variables,
- `/etc/default/grafana-server` (deb/ubuntu)
- `/etc/sysconfig/grafana-server` (centos/redhat)

View file

@ -7,7 +7,6 @@ This document describes the rules and governance of the project. It is meant to
- **Maintainers**: Maintainers lead an individual project or parts thereof ([`MAINTAINERS.md`][maintainers]).
- **Projects**: A single repository in the Grafana GitHub organization and listed below is referred to as a project:
- clock-panel
- devtools
- gel-app

View file

@ -142,7 +142,6 @@ Check [`security_config_step.go`](./pkg/app/checks/configchecks/security_config_
2. **Type Safety**: Use type assertions to ensure you're working with the correct type of item.
3. **Severity Levels**: Use appropriate severity levels:
- `CheckReportFailureSeverityHigh`: For critical issues that need immediate attention
- `CheckReportFailureSeverityLow`: For non-critical issues that can be addressed later

View file

@ -234,7 +234,6 @@ In case there is an uncertainty around the prioritization of an issue, please as
**Critical bugs**
1. If a bug has been categorized and any of the following criteria apply, the bug should be labeled as critical and must be actively worked on as someone's top priority right now:
- Results in any data loss
- Critical security or performance issues
- Problem that makes a feature unusable

View file

@ -14,22 +14,18 @@ The end goal is for the Resource APIs to become the only interface for managing
## 1. Key Differences from Legacy API Endpoints
- **URL structure & versioning:**
- **Resource APIs:** Follow Kubernetes conventions (`/apis/<group>/<version>/namespaces/<namespace>/<resource>/<name>`). Includes explicit API versions (e.g., `v0alpha1`, `v1`, `v2beta1`) in the path, allowing for controlled evolution and multiple versions of a resource API to co-exist.
- **Legacy APIs:** Variable path structures (e.g., `/api/dashboards/uid/:uid`, `/api/ruler/grafana/api/v1/rules/:uid/:uid`). Less explicit versioning.
- **Resource schemas:**
- **Resource APIs:** Each resource has a well-defined schema (`spec`) and is wrapped with an envelope with common metadata, all APIs come with an always-in-sync OpenAPI spec.
- **Legacy APIs:** Structures vary
- **Namespacing / org context:**
- **Resource APIs:** Uses explicit namespaces in the path (`/namespaces/<namespace>/...`) mapping to Grafana Organization IDs in OSS/Enterprise and Grafana Cloud Stack IDs in Grafana Cloud for scoping.
- **Legacy APIs:** Org context determined implicitly (session, API key), not usually part of the URL structure.
- **Consistency of convenience features:**
- **Resource APIs:** Convenience features such as resource history, restore and observability-as-code tooling come out of the box; a single implementation of convenience features works across all resources because of the standardization
- **Legacy APIs:** Different APIs support different convenience features; implementations are feature-specific
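To make the URL difference concrete, here is an illustrative comparison for fetching a dashboard. The token, dashboard UID, and the API group/version shown are placeholders for illustration, not values taken from this document:

```bash
# Legacy endpoint: org context comes from the session or token, not the URL.
curl -H "Authorization: Bearer $TOKEN" \
  "https://grafana.example.com/api/dashboards/uid/my-dashboard"

# Resource API endpoint: group, version, and namespace are explicit in the path.
curl -H "Authorization: Bearer $TOKEN" \
  "https://grafana.example.com/apis/dashboard.grafana.app/v1beta1/namespaces/default/dashboards/my-dashboard"
```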

View file

@ -66,7 +66,6 @@ Instructions below show how to set up a link that can run metrics query for the
```
Two data sources are created: Source (emulating logs data source) and Target (emulating metrics data source):
- A correlation called “App metrics” is created targeting the Target data source with its UID.
- The label and description are provided as text
- Each correlation contains the following configuration:

View file

@ -79,7 +79,6 @@ To configure Grafana for high availability:
In this task you configure Grafana Enterprise to validate the license with AWS instead of Grafana Labs.
1. In AWS IAM, create an access policy with the following permissions:
- `"license-manager:CheckoutLicense"`
- `"license-manager:ListReceivedLicenses"`
- `"license-manager:GetLicenseUsage"`
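As a rough sketch, those three permissions could be expressed in an IAM policy document like the following; the `Resource` scope shown is an assumption and should be narrowed to your license ARN where possible:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "license-manager:CheckoutLicense",
        "license-manager:ListReceivedLicenses",
        "license-manager:GetLicenseUsage"
      ],
      "Resource": "*"
    }
  ]
}
```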

View file

@ -89,7 +89,6 @@ For more information on Grafana High Availability setup, refer to [Set up Grafan
In this task, you configure Grafana Enterprise to validate the license with AWS instead of Grafana Labs.
1. In AWS IAM, assign the following permissions to the Node IAM role (if you are using a Node Group), or the Pod Execution role (if you are using a Fargate profile):
- `"license-manager:CheckoutLicense"`
- `"license-manager:ListReceivedLicenses"`
- `"license-manager:GetLicenseUsage"`
@ -100,7 +99,6 @@ In this task, you configure Grafana Enterprise to validate the license with AWS
For more information about AWS license permissions, refer to [Actions, resources, and condition keys for AWS License Manager](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awslicensemanager.html).
1. Choose **one** of the following options to update the [license_validation_type](../../../../setup-grafana/configure-grafana/enterprise-configuration/#license_validation_type) configuration to `aws`:
- **Option 1:** Use `kubectl edit configmap grafana` to edit `grafana.ini` and add the following section to the configuration:
```

View file

@ -44,7 +44,6 @@ To install Grafana, refer to the documentation specific to your implementation.
To retrieve your license, Grafana Enterprise requires access to your AWS account and license information. To grant access, create an IAM user in AWS with access to the license, and pass its credentials as environment variables on the host or container where Grafana is running. These environment variables allow Grafana to retrieve license details from AWS.
1. In the AWS License Manager service, create an IAM policy with the following permissions:
- `"license-manager:CheckoutLicense"`
- `"license-manager:ListReceivedLicenses"`
- `"license-manager:GetLicenseUsage"`
@ -93,7 +92,6 @@ To retrieve your license, Grafana Enterprise requires access to your AWS account
1. Attach the policy you created to the IAM user.
1. Add the following values as environment variables to the host or container running Grafana:
- AWS region
- IAM user's access key ID
- IAM user's secret access key
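A minimal sketch of how those values might be exported on the host or container running Grafana; the variable names follow the standard AWS SDK conventions and the values are placeholders:

```bash
export AWS_REGION="us-east-1"                        # AWS region of the license
export AWS_ACCESS_KEY_ID="<iam-user-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<iam-user-secret-access-key>"
```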

View file

@ -136,7 +136,6 @@ After a snapshot is created, a list of resources appears with resource Type and
1. Use the assistant's real-time progress tracking to monitor the migration. The status changes to 'Uploaded to cloud' for resources successfully copied to the cloud.
From Grafana v12.0, you can group and sort resources during and after the migration:
- Click **Name** to sort resources alphabetically.
- Click **Type** to group and sort by resource type.
- Click **Status** to group and sort by upload status (pending upload, uploaded successfully, or experienced errors).

View file

@ -84,7 +84,6 @@ Migration of plugins is the first step when transitioning from Grafana OSS/Enter
```
The command provided above will carry out an HTTP request to this endpoint and accomplish several tasks:
- It issues a GET request to the `/api/plugins` endpoint of your Grafana OSS/Enterprise instance to retrieve a list of installed plugins.
- It filters out the list to only include community plugins and those signed by external parties.
- It extracts the plugin ID and version before storing them in a `plugins.json` file.
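The command itself isn't visible in this diff; a hypothetical reconstruction of that step, assuming `curl` and `jq` are available and a token with permission to read plugins, might look like this. The response field names are assumptions, and the documented command also filters by signature type:

```bash
# Hypothetical sketch, not the documented command: list installed plugins
# and store id/version pairs in plugins.json.
curl -s -H "Authorization: Bearer $GRAFANA_TOKEN" \
  "https://your-grafana-instance.example.com/api/plugins" |
  jq '[.[] | {id: .id, version: .info.version}]' > plugins.json
```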
@ -111,7 +110,6 @@ Migration of plugins is the first step when transitioning from Grafana OSS/Enter
Replace `<GRAFANA_CLOUD_ACCESS_TOKEN>` with your Grafana Cloud Access Policy Token. To create a new one, refer to Grafana Cloud [access policies documentation](https://grafana.com/docs/grafana-cloud/account-management/authentication-and-permissions/access-policies/)
This script iterates through each plugin listed in the `plugins.json` file:
- It constructs a POST request for each plugin to add it to the specified Grafana Cloud instance.
- It reports back the response for each POST request to give you confirmation or information about any issues that occurred.
@ -260,7 +258,6 @@ Grizzly does not currently support Reports and Playlists as a resource, so you c
```
The command provided above will carry out an HTTP request to this endpoint and accomplish several tasks:
- It fetches an array of all the playlists available in the Grafana OSS/Enterprise instance.
- It then iterates through each playlist to obtain the complete set of details.
- Finally, it stores each playlist's specification as separate JSON files within a directory named `playlists`

View file

@ -249,7 +249,6 @@ To enable backend communication between plugins:
```
This is a comma-separated list that uses glob matching.
- To allow access to all plugins that have a backend:
```

View file

@ -111,7 +111,6 @@ To convert data source-managed alert rules to Grafana managed alerts:
2. Navigate to the Data source-managed alert rules section and click **Import to Grafana-managed rules**.
3. Choose the **Import source** from which you want to import rules:
- Select **Existing data source-managed rules** to import rules from connected Mimir or Loki data sources with the ruler API enabled.
- Select **Prometheus YAML file** to import rules by uploading a Prometheus YAML rule file.

View file

@ -276,7 +276,6 @@ To do this, you need to make sure that your alert rule is in the right evaluatio
This is different from [mute timings](ref:mute-timings), which stop notifications from being delivered but still allow alert rule evaluation and the creation of alert instances.
1. In **Configure no data and error handling**, you can define the alerting behavior and alerting state for two scenarios:
- When the evaluation returns **No data** or all values are null.
- When the evaluation returns **Error** or timeout.
@ -297,7 +296,6 @@ Complete the following steps to set up notifications.
1. Configure who receives a notification when an alert rule fires by either choosing **Select contact point** or **Use notification policy**.
**Select contact point**
1. Choose this option to select an existing [contact point](ref:contact-points).
All notifications for this alert rule are sent to this contact point automatically and notification policies aren't used.
@ -305,7 +303,6 @@ Complete the following steps to set up notifications.
1. You can also optionally select a mute or active timing as well as groupings and timings to define when not to send notifications.
**Use notification policy**
1. Choose this option to use the [notification policy tree](ref:notification-policies) to handle alert notifications.
All notifications for this alert rule are managed by the notification policy tree, which routes alerts based on their labels.

View file

@ -38,7 +38,6 @@ Note that in data source-managed groups, the alert rules and recording rules wit
- Verify that you have write permission to the Prometheus or Loki data source. Otherwise, you will not be able to create or update Grafana Mimir managed alerting rules.
- For Grafana Mimir and Loki data sources, enable the ruler API by configuring their respective services.
- **Loki** - The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other rule storage types.
- **Mimir** - use the `/prometheus` prefix. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the [Query API](/docs/mimir/latest/operators-guide/reference-http-api/#querier--query-frontend) and [Ruler API](/docs/mimir/latest/operators-guide/reference-http-api/#ruler) are under the same URL. You cannot provide a separate URL for the Ruler API.

View file

@ -233,7 +233,6 @@ This setup reproduces label flapping and shows how dynamic label values affect a
1. Simulate a query (`$A`) that returns a noisy signal.
Select **TestData** as the data source and configure the scenario.
- Scenario: Random Walk
- Series count: 1
- Start value: 51
@ -241,7 +240,6 @@ This setup reproduces label flapping and shows how dynamic label values affect a
- Spread: 100 (ensures large changes between consecutive data points)
1. Add an expression.
- Type: Reduce
- Input: A
- Function: Last (to get the most recent value)

View file

@ -146,7 +146,6 @@ You can use the [TestData data source](ref:testdata-data-source) to replicate th
1. Simulate a query (`$A`) that returns latencies for each service.
Select **TestData** as the data source and configure the scenario.
- Scenario: Random Walk
- Alias: latency
- Labels: service=api-$seriesIndex
@ -179,7 +178,6 @@ You can use the [TestData data source](ref:testdata-data-source) to replicate th
For details on CSV format requirements, see [table data examples](ref:table-data-example).
1. Add a new **Reduce** expression (`$C`).
- Type: Reduce
- Input: A
- Function: Mean
@ -188,7 +186,6 @@ You can use the [TestData data source](ref:testdata-data-source) to replicate th
This calculates the average latency for each service: `api-0`, `api-1`, etc.
1. Add a new **Math** expression.
- Type: Math
- Expression: `$C > $B`
- Set this expression as the **alert condition**.

View file

@ -116,7 +116,6 @@ You can quickly experiment with multi-dimensional alerts using the [**TestData**
1. Go to **Alerting** and create an alert rule
1. Select **TestData** as the data source.
1. Configure the TestData scenario
- Scenario: **Random Walk**
- Labels: `cpu=cpu-$seriesIndex`
- Series count: 3

View file

@ -89,11 +89,9 @@ This section outlines a minimal setup to configure Amazon SNS with Alerting.
### 1. Create an SNS Topic and Email Subscriber
1. **Navigate to SNS in AWS Console**:
- Go to the [Amazon SNS Console](https://console.aws.amazon.com/sns/v3/home).
2. **Create a new topic**:
- On the **Topics** page, choose **"Create topic"**.
- Select **"Standard"** as the type.
- Enter a **Name** for your topic, e.g., `My-Topic`.
@ -110,11 +108,9 @@ This section outlines a minimal setup to configure Amazon SNS with Alerting.
### 2. Create an IAM Policy, User, and Access Key
1. **Navigate to IAM in AWS Console**:
- Go to the [IAM Console](https://console.aws.amazon.com/iam/home).
2. **Create a new policy**:
- On the **Policies** page, choose **"Create policy"**.
- Switch to the **"JSON"** tab and paste the following policy, replacing `Resource` with your SNS topic ARN:
@ -134,7 +130,6 @@ This section outlines a minimal setup to configure Amazon SNS with Alerting.
- Click **"Next"**, name it (e.g., `SNSPublishPolicy`), and click **"Create policy"**.
3. **Create a new IAM user and assign the policy**
- In the IAM Console, on the **Users** page, choose **"Create user"**.
- Enter a **User name**, e.g., `alerting-sns-user`.
- Click **"Next"**.
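The policy body isn't shown in this diff; a minimal sketch of an SNS publish policy, with a placeholder topic ARN, might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:My-Topic"
    }
  ]
}
```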

View file

@ -53,7 +53,6 @@ For Grafana OSS, you enable email notifications by first configuring [SMTP setti
1. Configure SMTP settings.
Within the `[smtp]` settings section, specify the following parameters:
- `enabled = true`: Enables SMTP.
- `host`: The hostname or IP address of your SMTP server, and the port number of your SMTP server (commonly 25, 465, or 587). Default is `localhost:25`.
- `user`: Your SMTP username (if authentication is required).
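Put together, a minimal `[smtp]` block in `grafana.ini` might look like the following sketch; the host, credentials, and sender address are placeholders for your mail provider:

```ini
[smtp]
enabled = true
host = smtp.example.com:587
user = grafana@example.com
password = <smtp-password>
from_address = grafana-alerts@example.com
```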

View file

@ -94,7 +94,6 @@ To create the integration, follow the same steps as described in [Configure an O
1. Navigate to **Alerts & IRM** -> **IRM** -> **Integrations**.
1. Click **+ New integration**.
1. Select either **Alertmanager** or **Webhook** integration from the list.
- **Alertmanager** integration: Includes preconfigured IRM templates for processing Grafana and Prometheus alerts.
- **Webhook** integration: Uses default IRM templates for general alert processing.

View file

@ -444,7 +444,6 @@ Use one of the following methods to include a dashboard link with the correct ti
```
These URLs include a time range based on the alert's timing:
- `from`: One hour before the alert started.
- `to`: The current time if the alert is firing, or the alert's end time if resolved.

View file

@ -57,7 +57,6 @@ To add an existing notification template to your contact point, complete the fol
1. Click **Edit**.
A dialog box opens where you can select notification templates.
1. Click **Select notification template** or **Enter custom message** to customize a template or message
- You can select an existing notification template and [preview](#preview-a-notification-template) it using the default payload.
- You can also copy the notification template and use it in the **Enter custom message** tab.

View file

@ -55,12 +55,10 @@ Use templating to customize, format, and reuse alert notification messages. Crea
In Grafana, you have various options to template your alert notification messages:
1. [Alert rule annotations](#template-annotations)
- Annotations add extra information, like `summary` and `description`, to alert instances for notification messages.
- Template annotations to display query values that are meaningful to the alert, for example, the server name or the threshold query value.
1. [Alert rule labels](#template-labels)
- Labels are used to differentiate an alert instance from all other alert instances.
- Template labels to add an additional label based on a query value, or when the labels from the query are incomplete or not descriptive enough.
- Avoid displaying query values in labels as this can create numerous alert instances—use annotations instead.

View file

@ -58,7 +58,6 @@ Choose from the options below to import (or provision) your Grafana Alerting res
1. [Use configuration files to provision your alerting resources](ref:alerting_file_provisioning), such as alert rules and contact points, through files on disk.
{{< admonition type="note" >}}
- You cannot edit provisioned resources from files in the Grafana UI.
- Provisioning with configuration files is not available in Grafana Cloud.
{{< /admonition >}}
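As an illustration of the file-based approach, a provisioning file for a single email contact point might look roughly like this; a minimal sketch following the alerting provisioning schema, with placeholder names and addresses:

```yaml
# e.g. provisioning/alerting/contact-points.yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: Ops email
    receivers:
      - uid: ops-email
        type: email
        settings:
          addresses: ops@example.com
```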

View file

@ -159,7 +159,6 @@ In this section, we'll create Terraform configurations for each alerting resourc
```
Replace the following field values:
- `<terraform_data_source_name>` with the terraform name of the data source.
- `<terraform_folder_name>` with the terraform name of the folder.
@ -233,13 +232,11 @@ In this section, we'll create Terraform configurations for each alerting resourc
```
Replace the following field values:
- `<terraform_rule_group_name>` with the name of the alert rule group.
Note that the distinct Grafana resources are connected through `uid` values in their Terraform configurations. The `uid` value will be randomly generated when provisioning.
To link the alert rule group with its respective data source and folder in this example, replace the following field values:
- `<terraform_data_source_name>` with the terraform name of the previously defined data source.
- `<terraform_folder_name>` with the terraform name of the previously defined folder.
@ -266,7 +263,6 @@ In this section, we'll create Terraform configurations for each alerting resourc
```
Replace the following field values:
- `<terraform_contact_point_name>` with the terraform name of the contact point. It will be used to reference the contact point in other Terraform resources.
- `<email_address>` with the email to receive alert notifications.
@ -332,7 +328,6 @@ In this section, we'll create Terraform configurations for each alerting resourc
```
Replace the following field values:
- `<terraform_mute_timing_name>` with the name of the Terraform resource. It will be used to reference the mute timing in the Terraform notification policy tree.
1. Continue to add more Grafana resources or [use the Terraform CLI for provisioning](#provision-grafana-resources-with-terraform).
@ -363,7 +358,6 @@ In this section, we'll create Terraform configurations for each alerting resourc
```
To configure the mute timing and contact point previously created in the notification policy tree, replace the following field values:
- `<terraform_contact_point_name>` with the terraform name of the previously defined contact point.
- `<terraform_mute_timing_name>` with the terraform name of the previously defined mute timing.
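A compressed sketch of how the Terraform resources in this section reference each other by `uid`; the resource arguments are abbreviated and the names are placeholders, so treat this as orientation rather than a complete configuration:

```hcl
resource "grafana_folder" "rule_folder" {
  title = "My alert rules"
}

resource "grafana_contact_point" "ops_email" {
  name = "Ops email"
  email {
    addresses = ["ops@example.com"]
  }
}

resource "grafana_rule_group" "my_rule_group" {
  name             = "my-rule-group"
  folder_uid       = grafana_folder.rule_folder.uid
  interval_seconds = 60
  # rule { ... } blocks omitted for brevity
}
```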

View file

@ -140,7 +140,6 @@ To add a new annotation query to a dashboard, follow these steps:
1. If you don't want the annotation query toggle to be displayed in the dashboard, select the **Hidden** checkbox.
1. Select a color for the event markers.
1. In the **Show in** drop-down, choose one of the following options:
- **All panels** - The annotations are displayed on all panels that support annotations.
- **Selected panels** - The annotations are displayed on all the panels you select.
- **All panels except** - The annotations are displayed on all panels except the ones you select.

View file

@ -101,7 +101,6 @@ Dashboards and panels allow you to show your data in visual form. Each panel nee
{{< /shared >}}
1. In the dialog box that opens, do one of the following:
- Select one of your existing data sources.
- Select one of the Grafana [built-in special data sources](ref:built-in-special-data-sources).
- Click **Configure a new data source** to set up a new one (Admins only).
@ -127,7 +126,6 @@ Dashboards and panels allow you to show your data in visual form. Each panel nee
1. Refer to the following documentation for ways you can adjust panel settings.
While not required, most visualizations need some adjustment before they properly display the information that you need.
- [Configure value mappings](ref:configure-value-mappings)
- [Visualization-specific options](ref:visualization-specific-options)
- [Override field values](ref:override-field-values)

View file

@ -105,13 +105,11 @@ To create a dashboard, follow these steps:
{{< figure src="/media/docs/grafana/dashboards/screenshot-new-dashboard-v12.png" max-width="750px" alt="New dashboard" >}}
1. Under **Panel layout**, choose one of the following options:
- **Custom** - Position and size panels manually. The default selection.
- **Auto grid** - Panels are automatically resized to create a uniform grid based on the column and row settings.
1. Click **+ Add visualization**.
1. In the dialog box that opens, do one of the following:
- Select one of your existing data sources.
- Select one of the Grafana [built-in special data sources](ref:built-in-special-data-sources).
- Click **Configure a new data source** to set up a new one (Admins only).
@ -137,7 +135,6 @@ To create a dashboard, follow these steps:
1. Refer to the following documentation for ways you can adjust panel settings.
While not required, most visualizations need some adjustment before they properly display the information that you need.
- [Configure value mappings](ref:configure-value-mappings)
- [Visualization-specific options](ref:visualization-specific-options)
- [Override field values](ref:override-field-values)
@ -227,9 +224,7 @@ To configure repeats, follow these steps:
1. Expand the **Repeat options** section.
1. Select the **Repeat by variable**.
1. For panels only, set the following options:
- Under **Repeat direction**, choose one of the following:
- **Horizontal** - Arrange panels side-by-side. Grafana adjusts the width of a repeated panel. You can't mix other panels on a row with a repeated panel.
- **Vertical** - Arrange panels in a column. The width of repeated panels is the same as the original, repeated panel.
@ -276,14 +271,12 @@ To configure show/hide rules, follow these steps:
1. Select **Show** or **Hide** to set whether the panel, row, or tab is shown or hidden based on the rules outcome.
1. Click **+ Add rule**.
1. Select a rule type:
- **Query result** - Show or hide a panel based on query results. Choose from **Has data** and **No data**. For panels only.
- **Template variable** - Show or hide the panel, row, or tab dynamically based on the variable value. Select a variable and operator and enter a value.
- **Time range less than** - Show or hide the panel, row, or tab if the dashboard time range is shorter than the selected time frame. Select or enter a time range.
1. Configure the rule.
1. Under **Match rules**, select one of the following:
- **Match all** - The panel, row, or tab is shown or hidden only if _all_ the rules are matched.
- **Match any** - The panel, row, or tab is shown or hidden if _any_ of the rules are matched.
@ -315,7 +308,6 @@ To edit dashboards, follow these steps:
1. Click in the area you want to work with to bring it into focus and display the associated options in the edit pane.
1. Do one of the following:
- For rows or tabs, make the required changes using the edit pane.
- For panels, update the panel title, description, repeat options or show/hide rules in the edit pane. For more changes, click **Configure** and continue in **Edit panel** view.
- For dashboards, update the dashboard title, description, grouping or panel layout. For more changes, click the settings (gear) icon in the top-right corner.
@ -355,7 +347,6 @@ To move or resize, follow these steps:
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
1. Do one of the following:
- Click the panel title and drag the panel to the new location.
- Click and drag the lower-right corner of the panel to change the size of the panel.

View file

@ -40,7 +40,6 @@ To import a dashboard, follow these steps:
1. Click **Dashboards** in the primary menu.
1. Click **New** and select **Import** in the drop-down menu.
1. Perform one of the following steps:
- Upload a dashboard JSON file.
- Paste a [Grafana.com dashboard](#discover-dashboards-on-grafanacom) URL or ID into the field provided.
- Paste dashboard JSON text directly into the text area.

View file

@ -94,7 +94,6 @@ Add links to other dashboards at the top of your current dashboard.
If you don't add any tags, Grafana includes links to all other dashboards.
1. Set link options:
- **Show as dropdown** - If you are linking to lots of dashboards, then you probably want to select this option and add an optional title to the dropdown. Otherwise, Grafana displays the dashboard links side by side across the top of your dashboard.
- **Include current time range** - Select this option to include the dashboard time range in the link. When the user clicks the link, the linked dashboard opens with the indicated time range already set. **Example:** https://play.grafana.org/d/000000010/annotations?orgId=1&from=now-3h&to=now
- **Include current template variable values** - Select this option to include template variables currently used as query parameters in the link. When the user clicks the link, any matching templates in the linked dashboard are set to the values from the link. For more information, see [Dashboard URL variables](ref:dashboard-url-variables).
@ -118,7 +117,6 @@ Add a link to a URL at the top of your current dashboard. You can link to any av
1. In the **Tooltip** field, enter the tooltip you want the link to display when the user hovers their mouse over it.
1. In the **Icon** drop-down, choose the icon you want displayed with the link.
1. Set link options; by default, these options are enabled for URL links:
- **Include current time range** - Select this option to include the dashboard time range in the link. When the user clicks the link, the linked dashboard opens with the indicated time range already set. **Example:** https://play.grafana.org/d/000000010/annotations?orgId=1&from=now-3h&to=now
- **Include current template variable values** - Select this option to include template variables currently used as query parameters in the link. When the user clicks the link, any matching templates in the linked dashboard are set to the values from the link.
- **Open link in new tab** - Select this option if you want the dashboard link to open in a new tab or window.
@ -134,7 +132,6 @@ To edit, duplicate, or delete dashboard link, follow these steps:
1. Click **Settings**.
1. Go to the **Links** tab.
1. Do one of the following:
- **Edit** - Click the name of the link and update the link settings.
- **Duplicate** - Click the copy link icon next to the link that you want to duplicate.
- **Delete** - Click the red **X** next to the link that you want to delete, and then **Delete**.

View file

@ -58,7 +58,6 @@ Adjust dashboard time settings when you want to change the dashboard timezone, t
1. On the **Settings** page, scroll down to the **Time Options** section of the **General** tab.
1. Specify time settings as follows.
- **Time zone:** Specify the local time zone of the service or system that you are monitoring. This can be helpful when monitoring a system or service that operates across several time zones.
- **Default:** Grafana uses the default selected time zone for the user profile, team, or organization. If no time zone is specified for the user profile, a team the user is a member of, or the organization, then Grafana uses the local browser time.
- **Browser time:** The time zone configured for the viewing user's browser is used. This is usually the same time zone as set on the computer.

View file

@ -48,7 +48,6 @@ You can start a playlist in four different view modes. View modes determine how
1. Find the desired playlist and click **Start playlist**.
1. In the dialog box that opens, select one of the [four playlist modes](#playlist-modes) available.
1. Disable any dashboard controls that you don't want displayed while the list plays; these controls are enabled and visible by default. Select from:
- **Time and refresh**
- **Variables**
- **Dashboard links**
@ -100,7 +99,6 @@ You can edit a playlist including adding, removing, and rearranging the order of
1. Click **Dashboards** in the main menu.
1. Click **Playlists**.
1. Find the playlist you want to update and click **Edit playlist**. Do one or more of the following:
- Edit - Update the name and time interval.
- Add dashboards - Search for dashboards by title or tag to add them to the playlist.
- Rearrange dashboards - Click and drag the dashboards into your desired order.

View file

@ -147,7 +147,6 @@ To create a report, follow these steps:
- [Recipients](#4-recipients)
- [Attachments](#5-attachments)
1. Click one of the following buttons at the bottom of the **Schedule report** drawer:
- The menu icon to access the following options:
- **Download CSV**
- **Preview PDF**
@ -179,7 +178,6 @@ To create a report, follow these steps:
- [Recipients](#4-recipients)
- [Attachments](#5-attachments)
1. Click one of the following buttons at the bottom of the **Schedule report** drawer:
- The menu icon to access the following options:
- **Download CSV**
- **Preview PDF**
@ -358,7 +356,6 @@ You can also navigate to the list of all reports from the dashboard-specific lis
To edit a report, follow these steps:
1. Do one of the following:
- In the main menu, click **Dashboards > Reporting**.
- Navigate to the dashboard from which the report was generated and click **Share > Schedule report**.
@ -373,12 +370,10 @@ You can pause and resume sending reports from the report list view.
To do this, follow these steps:
1. Do one of the following:
- In the main menu, click **Dashboards > Reporting**.
- Navigate to the dashboard from which the report was generated and click **Share > Schedule report**.
1. On the row of the report you want to update, do one of the following:
- Click the pause icon - The report won't be sent according to its schedule until it's resumed.
- Click the resume icon - The report resumes on its previous schedule.
@ -389,7 +384,6 @@ You can also pause or resume a report from **Update report** drawer.
To delete a report, follow these steps:
1. Do one of the following:
- In the main menu, click **Dashboards > Reporting**.
- Navigate to the dashboard from which the report was generated and click **Share > Schedule report**.

View file

@ -76,7 +76,6 @@ Folders help you organize and group dashboards, which is useful when you have ma
1. Click **Dashboards** in the primary menu.
1. Do one of the following:
- On the **Dashboards** page, click **New** and select **New folder** in the drop-down.
- Click an existing folder and on the folders page, click **New** and select **New folder** in the drop-down.

View file

@ -259,7 +259,6 @@ To share a personalized, direct link to your panel within your organization, fol
1. Click **Copy link**.
1. Send the copied link to a Grafana user with authorization to view it.
1. (Optional) To [generate an image of the panel as a PNG file](ref:image-rendering), customize the image settings:
- **Width** - In pixels. The default is 1000.
- **Height** - In pixels. The default is 500.
- **Scale factor** - The default is 1.

View file

@ -304,7 +304,6 @@ To edit or delete filters, follow these steps:
1. On the dashboard, click anywhere on the filter you want to change.
1. Do one of the following:
- To edit the operator or value of a filter, click anywhere on the filter and update it.
![Editing an ad hoc filter](/media/docs/grafana/dashboards/screenshot-edit-filters-v11.3.png)

View file

@ -121,13 +121,11 @@ To create a variable, follow these steps:
If you don't enter a display name, then the drop-down list label is the variable name.
1. Choose a **Show on dashboard** option:
- **Label and value** - The variable drop-down list displays the variable **Name** or **Label** value. This is the default.
- **Value:** The variable drop-down list only displays the selected variable value and a down arrow.
- **Nothing:** No variable drop-down list is displayed on the dashboard.
1. Click one of the following links to complete the steps for adding your selected variable type:
- [Query](#add-a-query-variable)
- [Custom](#add-a-custom-variable)
- [Textbox](#add-a-text-box-variable)
@ -157,7 +155,6 @@ Query expressions are different for each data source. For more information, refe
For more information about data sources, refer to [Add a data source](ref:add-a-data-source).
1. In the **Query type** drop-down list, select one of the following options:
- **Label names**
- **Label values**
- **Metrics**
@ -166,7 +163,6 @@ Query expressions are different for each data source. For more information, refe
- **Classic query**
1. In the **Query** field, enter a query.
- The query field varies according to your data source. Some data sources have custom query editors.
- Each data source defines how the variable values are extracted. The typical implementation uses every string value returned from the data source response as a variable value. Make sure to double-check the documentation for the data source.
- Some data sources let you provide custom "display names" for the values. For instance, the PostgreSQL, MySQL, and Microsoft SQL Server plugins handle this by looking for fields named `__text` and `__value` in the result. Other data sources may look for `text` and `value` or use a different approach. Always remember to double-check the documentation for the data source.
@ -175,12 +171,10 @@ Query expressions are different for each data source. For more information, refe
1. (Optional) In the **Regex** field, type a regular expression to filter or capture specific parts of the names returned by your data source query. To see examples, refer to [Filter variables with a regular expression](#filter-variables-with-regex).
1. In the **Sort** drop-down list, select the sort order for values to be displayed in the dropdown list. The default option, **Disabled**, means that the order of options returned by your data source query is used.
1. Under **Refresh**, select when the variable should update options:
- **On dashboard load** - Queries the data source every time the dashboard loads. This slows down dashboard loading, because the variable query needs to be completed before the dashboard can be initialized.
- **On time range change** - Queries the data source every time the dashboard loads and when the dashboard time range changes. Use this option if your variable options query contains a time range filter or is dependent on the dashboard time range.
1. (Optional) Configure the settings in the [Selection Options](#configure-variable-selection-options) section:
- **Multi-value** - Enables multiple values to be selected at the same time.
- **Include All option** - Enables an option to include all variables.
@ -200,7 +194,6 @@ For example, if you have server names or region names that never change, then yo
You can include numbers, strings, or key/value pairs separated by a space and a colon. For example, `key1 : value1,key2 : value2`.
1. (Optional) Configure the settings in the [Selection Options](#configure-variable-selection-options) section:
- **Multi-value** - Enables multiple values to be selected at the same time.
- **Include All option** - Enables an option to include all variables.
@ -249,7 +242,6 @@ _Data source_ variables enable you to quickly change the data source for an enti
Leave this field empty to display all instances.
1. (Optional) Configure the settings in the [Selection Options](#configure-variable-selection-options) section:
- **Multi-value** - Enables multiple values to be selected at the same time.
- **Include All option** - Enables an option to include all variables.
@ -271,7 +263,6 @@ You can use an interval variable as a parameter to group by time (for InfluxDB),
1. (Optional) Select on the **Auto option** checkbox if you want to add the `auto` option to the list.
This option allows you to specify how many times the current time range should be divided to calculate the current `auto` time span. If you turn it on, then two more options appear:
- **Step count** - Select the number of times the current time range is divided to calculate the value, similar to the **Max data points** query option. For example, if the current visible time range is 30 minutes, then the `auto` interval groups the data into 30 one-minute increments. The default value is 30 steps.
- **Min interval** - The minimum threshold below which the step count intervals do not divide the time. To continue the 30-minute example, if the minimum interval is set to 2m, then Grafana groups the data into 15 two-minute increments.

View file

@ -130,7 +130,6 @@ The following settings are specific to the Elasticsearch data source.
- **Index name** - Use the index settings to specify a default for the `time field` and your Elasticsearch index's name. You can use a time pattern, for example `[logstash-]YYYY.MM.DD`, or a wildcard for the index name. When specifying a time pattern, the fixed part(s) of the pattern should be wrapped in square brackets.
- **Pattern** - Select the matching pattern if using one in your index name. Options include:
- no pattern
- hourly
- daily

View file

@ -61,7 +61,6 @@ Metrics queries aggregate data and produce a variety of calculations such as cou
- **Alias** - Aliasing only applies to **time series queries**, where the last group is `date histogram`. This is ignored for any other type of query.
- **Metric** - Metrics aggregations include:
- count - see [Value count aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/8.9/search-aggregations-metrics-valuecount-aggregation.html)
- average - see [Avg aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/8.9/search-aggregations-metrics-avg-aggregation.html)
- sum - see [Sum aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-sum-aggregation.html)
@ -78,7 +77,6 @@ You can select multiple metrics and group by multiple terms or filters when usin
Use the **+ sign** to the right to add multiple metrics to your query. Click on the **eye icon** next to **Metric** to hide metrics, and the **garbage can icon** to remove metrics.
- **Group by options** - Create multiple group by options when constructing your Elasticsearch query. Date histogram is the default option. Below is a list of options in the dropdown menu.
- terms - see [Terms aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html).
- filter - see [Filter aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-filter-aggregation.html).
- geo hash grid - see [Geohash grid aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-geohashgrid-aggregation.html).

View file

@ -83,7 +83,6 @@ If Grafana is running on a Google Compute Engine (GCE) virtual machine, when you
Before you can request data from Google Cloud Monitoring, you must first enable necessary APIs on the Google end.
1. Open the Monitoring and Cloud Resource Manager API pages:
- [Monitoring API](https://console.cloud.google.com/apis/library/monitoring.googleapis.com)
- [Cloud Resource Manager API](https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com)

View file

@ -89,7 +89,6 @@ The following components will help you build a MySQL query:
- **Format** - Select a format response from the drop-down for the MySQL query. The default is **Table**. If you use the **Time series** format option, one of the columns must be `time`.
- **Dataset** - Select a database to query from the drop-down.
- **Table** - Select a table from the drop-down. Tables correspond to the chosen database.
- **Data operations** - _Optional_ Select an aggregation from the drop-down. You can add multiple data operations by clicking the **+ sign**. Click the **X** to remove a data operation. Click the **garbage can icon** to remove the entire column.
@ -99,15 +98,12 @@ The following components will help you build a MySQL query:
- **Alias** - _Optional_ Add an alias from the drop-down. You can also add your own alias by typing it in the box and clicking **Enter**. Remove an alias by clicking the **X**.
- **Filter** - Toggle to add filters.
- **Filter by column value** - _Optional_ If you toggle **Filter** you can add a column to filter by from the drop-down. To filter on more columns, click the **+ sign** to the right of the condition drop-down. You can choose a variety of operators from the drop-down next to the condition. When multiple filters are added you can add an `AND` operator to display all true conditions or an `OR` operator to display any true conditions. Use the second drop-down to choose a filter. To remove a filter, click the `X` button next to that filter's drop-down. After selecting a date type column, you can choose **Macros** from the operators list and select `timeFilter` which will add the `$__timeFilter` macro to the query with the selected date column.
- **Group** - Toggle to add **Group by column**.
- **Group by column** - Select a column to filter by from the drop-down. Click the **+ sign** to filter by multiple columns. Click the **X** to remove a filter.
- **Order** - Toggle to add an ORDER BY statement.
- **Order by** - Select a column to order by from the drop-down. Select ascending (`ASC`) or descending (`DESC`) order.
- **Limit** - You can add an optional limit on the number of retrieved results. Default is 50.
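Roughly, the query assembled from these builder components resembles the following sketch; the table and column names are placeholders, and `$__timeFilter` is the macro mentioned above:

```sql
SELECT
  created_at AS time,
  AVG(response_ms) AS avg_response
FROM requests
WHERE $__timeFilter(created_at)
GROUP BY created_at
ORDER BY created_at ASC
LIMIT 50;
```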

View file

@ -148,7 +148,6 @@ Use the IP address of the Prometheus container, or the hostname if you are using
There are three authentication options for the Prometheus data source.
- **Basic authentication** - The most common authentication method.
- **User** - The username you use to connect to the data source.
- **Password** - The password you use to connect to the data source.
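For reference, the same basic authentication settings expressed as a data source provisioning file sketch; the URL, user, and password are placeholders:

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus.example.com:9090
    basicAuth: true
    basicAuthUser: grafana
    secureJsonData:
      basicAuthPassword: <password>
```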

View file

@ -78,7 +78,6 @@ The following video demonstrates how to use the visual Prometheus query builder:
Builder mode contains the following components:
- **Kick start your query** - Click to view a list of predefined operation patterns that help you quickly build queries with multiple operations. These include:
- Rate query starters
- Histogram query starters
- Binary query starters
@ -104,7 +103,6 @@ Click **+ Operations** to select from a list of operations including Aggregation
**Options:**
- **Legend** - Lets you customize the name for the time series. You can use a predefined or custom format.
- **Auto** - Displays unique labels. Also displays all overlapping labels if a series has multiple labels.
- **Verbose** - Displays all label names.
- **Custom** - Lets you customize the legend using label templates. For example, `{{hostname}}` is replaced with the value of the `hostname` label. To switch to a different legend mode, clear the input and click outside the field.
@ -112,7 +110,6 @@ Click **+ Operations** to select from a list of operations including Aggregation
- **Min step** - Sets the minimum interval between data points returned by the query. For example, setting this to `1h` suggests that data is collected or displayed at hourly intervals. This setting supports the `$__interval` and `$__rate_interval` macros. Note that the time range of the query is aligned to this step size, which may adjust the actual start and end times of the returned data.
- **Format** - Determines how the data from your Prometheus query is interpreted and visualized in a panel. Choose from the following format options:
- **Time series** - The default format. Refer to [Time series kind formats](https://grafana.com/developers/dataplane/timeseries/) for information on time series data frames and how time and value fields are structured.
- **Table** - Displays data in table format. This format works only in a [Table panel](ref:table).
- **Heatmap** - Displays histogram-type metrics in a [Heatmap panel](ref:heatmap) by converting cumulative histograms to regular ones and sorting the series by the bucket bound.
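For example, a query like the following pairs naturally with a **Custom** legend template such as `{{hostname}}`; it assumes a metric `http_requests_total` that carries a `hostname` label, both of which are placeholders:

```promql
sum by (hostname) (rate(http_requests_total{job="my-app"}[$__rate_interval]))
```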

View file

@ -235,7 +235,6 @@ To use custom queries with the configuration, follow these steps:
1. Specify a custom query to be used to query metrics data.
Each linked query consists of:
- **Link Label:** _(Optional)_ Descriptive label for the linked query.
- **Query:** The query that runs when navigating from a trace to the metrics data source.
Interpolate tags using the `$__tags` keyword.
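A sketch of such a linked query; the metric name is a placeholder, and `$__tags` expands to the label matchers derived from the span tags configured above:

```promql
histogram_quantile(0.9, sum by (le) (rate(traces_spanmetrics_latency_bucket{$__tags}[5m])))
```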

View file

@ -45,7 +45,6 @@ To use trace correlations, you need:
1. On step 1, provide a **label** for the correlation, and an optional **description**.
1. On step 2, configure the correlation **target**.
- Select the **Type** drop-down list and choose **Query** to link to another data source or choose **External** for a custom URL.
- For a query **Target**, select the target drop-down list and select the data source that should be queried when the link is clicked. Define the target query.
@ -68,7 +67,6 @@ To use trace correlations, you need:
{{< figure src="/media/docs/tempo/screenshot-grafana-trace-correlations-loki-step-2.png" max-width="900px" class="docs-image--no-shadow" alt="Setting up a correlation for a Loki target using trace variables" >}}
1. On step 3, configure the correlation data source:
- Select your Tempo data source in the **Source** drop-down list.
- Enter the trace data variable you use for the correlation in the **Results field**.
@ -104,7 +102,6 @@ In this example, you configure trace to logs by service name and a trace identif
{{< figure src="/media/docs/tempo/screenshot-grafana-trace-view-correlations-example-1-step-1.png" max-width="900px" class="docs-image--no-shadow" alt="Using correlations for a trace" >}}
1. On step 2, configure the correlation target:
- Select the target type **Query** and select your Loki data source as **Target**.
- Define the Loki query, using `serviceName` and `traceID` as variables derived from the span data:
@ -116,7 +113,6 @@ In this example, you configure trace to logs by service name and a trace identif
{{< figure src="/media/docs/tempo/screenshot-grafana-trace-view-correlations-example-1-step-2.png" max-width="900px" class="docs-image--no-shadow" alt="Using correlations for a trace" >}}
1. On step 3, configure the correlation source:
- Select your Tempo data source as **Source**.
- Use `traceID` as **Results field**.
@ -138,7 +134,6 @@ In this example, you configure trace correlations with a custom URL.
1. On step 1, add a new correlation with the label **Open custom URL** and an optional description.
1. On step 2, configure the correlation target:
- Select the target type **External**.
- Define your target URL, using variables derived from the span data. In this example, we are using `serviceName` and `traceID`.
@ -148,7 +143,6 @@ In this example, you configure trace corrections with a custom URL.
```
1. On step 3, configure the correlation source:
- Select your Tempo data source as **Source**.
- Use `traceID` as **Results field**.

View file

@ -53,7 +53,6 @@ Explore consists of a toolbar, outline, query editor, the ability to add multipl
- **Outline** - Keeps track of the queries and visualization panels created in Explore. Refer to [Content outline](#content-outline) for more detail.
- **Toolbar** - Provides quick access to frequently used tools and settings.
- **Data source picker** - Select a data source from the dropdown menu, or use absolute time.
- **Split** - Click to compare visualizations side by side. Refer to [Split and compare](#split-and-compare) for additional detail.
- **Add** - Click to add your exploration to a dashboard. You can also use this to declare an incident, create a forecast, detect outliers, and run an investigation.

View file

@ -101,7 +101,6 @@ This panel shows the details of the trace in different segments.
- The next segment shows the entire span for the specific trace as a narrow strip.
All levels of the trace, from the client all the way down to the database query, are displayed, which provides a bird's-eye view of the time distribution across all the layers over which the HTTP request was processed.
1. You can click within this strip view to display a magnified view of a smaller time segment within the span. This magnified view shows up in the bottom segment of the panel.
1. In the magnified view, you can expand or collapse the various levels of the trace to drill down to the specific span of interest.

View file

@ -68,17 +68,14 @@ Refer to the [Foundation SDK](../foundation-sdk) documentation for more informat
If you're already using established Infrastructure as Code or other configuration management tools, Grafana offers integrations to manage resources within your existing workflows.
- [Terraform](https://grafana.com/docs/grafana-cloud/developer-resources/infrastructure-as-code/terraform/)
- Use the Grafana Terraform provider to manage dashboards, alerts, and more.
- Understand how to define and deploy resources using HCL/JSON configurations.
- [Ansible](https://grafana.com/docs/grafana-cloud/developer-resources/infrastructure-as-code/ansible/)
- Learn to use the Grafana Ansible collection to manage Grafana Cloud resources, including folders and cloud stacks.
- Write playbooks to automate resource provisioning through the Grafana API.
- [Grafana Operator](https://grafana.com/docs/grafana-cloud/developer-resources/infrastructure-as-code/grafana-operator/)
- Utilize Kubernetes-native management with the Grafana Operator.
- Manage dashboards, folders, and data sources via Kubernetes Custom Resources.
- Integrate with GitOps workflows for seamless version control and deployment.

View file

@ -87,7 +87,6 @@ This token needs to be added to your Git Sync configuration to enable read and w
1. Create a new token using [Create new fine-grained personal access token](https://github.com/settings/personal-access-tokens/new). Refer to [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) for instructions.
1. Under **Permissions**, expand **Repository permissions**.
1. Set these permissions for Git Sync:
- **Contents**: Read and write permission
- **Metadata**: Read-only permission
- **Pull requests**: Read and write permission

View file

@ -97,7 +97,6 @@ Saving changes requires opening a pull request in your GitHub repository.
1. Click **Save dashboard**.
1. On the **Provisioned dashboard** panel, choose the options you want to use:
- **Update default refresh value**: Check this box to make the current refresh the new default.
- **Update default variable values**: Check this box to make the current values the new default.
- **Path**: Provide the path for your repository, ending in a JSON or YAML file.

View file

@ -90,7 +90,6 @@ To configure repeating panels, follow these steps:
1. Open the **Panel options** section of the panel editor pane.
1. Under **Repeat options**, select a variable in the **Repeat by variable** drop-down list.
1. Under **Repeat direction**, choose one of the following:
- **Horizontal** - Arrange panels side-by-side. Grafana adjusts the width of a repeated panel. You can't mix other panels on a row with a repeated panel.
- **Vertical** - Arrange panels in a column. The width of repeated panels is the same as the original, repeated panel.

View file

@ -198,7 +198,6 @@ To display timestamps that are in seconds since epoch, multiply your timestamp v
1. Click **Add transformation**.
1. Select the **Add field from calculation** transformation.
1. Set the following options:
- **Mode** - **Binary operation**
- **Operation**
- Select your timestamp field

View file

@ -184,7 +184,6 @@ The following image shows a table visualization with value mappings. If you want
1. Scroll to the **Value mappings** section and expand it.
1. Click **Add value mappings**.
1. Click **Add a new mapping** and then select one of the following:
- **Value** - Enter a single value to match.
- **Range** - Enter the beginning and ending values of a range to match.
- **Regex** - Enter a regular expression pattern to match.

View file

@ -43,7 +43,6 @@ Grafana generates a CSV file that contains your data, including any transformati
1. Click **Data**.
If your panel contains multiple queries or queries multiple nodes, then you have additional options.
- **Select result**: Choose which result set data you want to view.
- **Transform data**
- **Join by time**: View raw data from all your queries at once, one result set per column. Click a column heading to reorder the data.

View file

@ -32,7 +32,6 @@ For general information on Grafana expressions, refer to [Write expression queri
## Before you begin
- Enable SQL expressions under the feature toggle `sqlExpressions`.
- If you self-host Grafana, you can find feature toggles in the configuration file `grafana.ini`.
```
@ -158,12 +157,10 @@ Grafana supports three types of data source response formats:
1. **Single Table-like Frame**:
This refers to data returned in a standard tabular structure, where all values are organized into rows and columns, similar to what you'd get from a SQL query.
- **Example**: Any query against a SQL data source (e.g., PostgreSQL, MySQL) with the format set to Table.
2. **Dataplane: Time Series Format**:
This format represents time series data with timestamps and associated values. It is typically returned from monitoring data sources.
- **Example**: Prometheus or Loki Range Queries (queries that return a set of values over time).
3. **Dataplane: Numeric Long Format**:

View file

@ -130,7 +130,6 @@ Set the mode of the gradient fill. Fill gradient is based on the line color. To
- **Opacity** - Transparency of the gradient is calculated based on the values on the y-axis; the opacity of the fill increases with the y-axis values.
- **Hue** - Gradient color is generated based on the hue of the line color.
- **Scheme** - The bar receives a gradient color defined by the **Standard options > Color scheme** selection.
- **From thresholds** - If the **Color scheme** selection is **From thresholds (by value)**, then each bar is the color of the defined threshold.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-colors-by-thresholds-v11.3.png" alt="Color scheme From thresholds" caption="Color scheme: From thresholds" >}}

View file

@ -470,7 +470,6 @@ You can style the selected connection using the following options:
- **Size** - Control the size of the connection by entering a number in the **Value** field.
- **Radius** - Add curve to the connection by entering a value to represent the degree.
- **Arrow Direction** - Control the appearance of the arrow head. Choose from:
- **Forward** - The arrow head points in the direction in which the connection was drawn.
- **Reverse** - The arrow head points opposite to the direction in which the connection was drawn.
- **Both** - Adds arrow heads to both ends of the connection.

View file

@ -160,7 +160,6 @@ To filter column values, follow these steps:
1. Click the checkbox next to the values that you want to display or click **Select all**.
1. Enter text in the search field at the top to show those values in the display so that you can select them rather than scroll to find them.
1. Choose from several operators to display column values:
- **Contains** - Matches a regex pattern (the default operator).
- **Expression** - Evaluates a boolean expression. The character `$` represents the column value in the expression (for example, "$ >= 10 && $ <= 12").
- The typical comparison operators: `=`, `!=`, `<`, `<=`, `>`, `>=`.

View file

@ -91,7 +91,6 @@ This procedure uses dashboard variables and templates to allow you to enter trac
1. From your Grafana stack, create a new dashboard or go to an existing dashboard where you'd like to add traces visualizations.
1. Do one of the following:
- New dashboard - Click **+ Add visualization**.
- Existing dashboard - Click **Edit** in the top-right corner and then select **Visualization** in the **Add** drop-down.

View file

@ -120,9 +120,7 @@ In this example we use Apache as a reverse proxy in front of Grafana. Apache han
- The first four lines of the virtualhost configuration are standard, so we won't go into detail on what they do.
- We use a **\<proxy>** configuration block for applying our authentication rules to every proxied request. These rules include requiring basic authentication where user:password credentials are stored in the **/etc/apache2/grafana_htpasswd** file. This file can be created with the `htpasswd` command.
- The next part of the configuration is the tricky part. We use Apache's rewrite engine to create our **X-WEBAUTH-USER header**, populated with the authenticated user.
- **RewriteRule .\* - [E=PROXY_USER:%{LA-U:REMOTE_USER}, NS]**: This line is a little bit of magic. For every request, it uses the rewrite engine's look-ahead (LA-U) feature to determine what the REMOTE_USER variable would be set to after processing the request, and then assigns the result to the PROXY_USER variable. This is necessary because the REMOTE_USER variable is not available to the RequestHeader function.
- **RequestHeader set X-WEBAUTH-USER “%{PROXY_USER}e”**: With the authenticated username now stored in the PROXY_USER variable, we create a new HTTP request header that will be sent to our backend Grafana containing the username.
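Put together, the pieces described above look roughly like the following. This is a sketch rather than the guide's exact virtualhost: the htpasswd path and the directives mirror the prose, but the snippet path and the `htpasswd` invocation are assumptions.

```bash
# Sketch: the <Proxy> rules described above, written to a standalone snippet.
sudo tee /etc/apache2/conf-available/grafana-auth-proxy.conf >/dev/null <<'EOF'
<Proxy *>
    AuthType Basic
    AuthName "GrafanaAuthProxy"
    AuthBasicProvider file
    AuthUserFile /etc/apache2/grafana_htpasswd
    Require valid-user

    # Look ahead to what REMOTE_USER will be and copy it into PROXY_USER
    RewriteEngine On
    RewriteRule .* - [E=PROXY_USER:%{LA-U:REMOTE_USER},NS]

    # Forward the authenticated username to Grafana
    RequestHeader set X-WEBAUTH-USER "%{PROXY_USER}e"
</Proxy>
EOF
# Create the credentials file referenced above (prompts for a password)
sudo htpasswd -c /etc/apache2/grafana_htpasswd someuser
```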

View file

@ -42,16 +42,12 @@ To enable the Azure AD/Entra ID OAuth, register your application with Entra ID.
1. Note the **Application ID**. This is the OAuth client ID.
1. Click **Endpoints** from the top menu.
- Note the **OAuth 2.0 authorization endpoint (v2)** URL. This is the authorization URL.
- Note the **OAuth 2.0 token endpoint (v2)**. This is the token URL.
1. Click **Certificates & secrets** in the side menu, then add a new entry under the supported client authentication option you want to use. The following are the supported client authentication options with their respective configuration steps.
- **Client secrets**
1. Add a new entry under **Client secrets** with the following configuration.
- Description: Grafana OAuth 2.0
- Expires: Select an expiration period
@ -60,16 +56,12 @@ To enable the Azure AD/Entra ID OAuth, register your application with Entra ID.
{{< admonition type="note" >}}
Make sure that you copy the string in the **Value** field, rather than the one in the **Secret ID** field.
{{< /admonition >}}
1. You must have set `client_authentication` under `[auth.azuread]` to `client_secret_post` in the Grafana server configuration for this to work (a configuration sketch follows the app role steps below).
- **Federated credentials**
- **_Managed Identity_**
1. Refer to [Configure an application to trust a managed identity (preview)](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-config-app-trust-managed-identity?tabs=microsoft-entra-admin-center) for a complete guide on setting up a managed identity as a federated credential.
Add a new entry under Federated credentials with the following configuration.
- Federated credential scenario: Select **Other issuer**.
- Issuer: The OAuth 2.0 / OIDC issuer URL of the Microsoft Entra ID authority. For example: `https://login.microsoftonline.com/{tenantID}/v2.0`.
- Subject identifier: The Object (Principal) ID GUID of the Managed Identity.
@ -88,10 +80,8 @@ To enable the Azure AD/Entra ID OAuth, register your application with Entra ID.
{{< /admonition >}}
- **_Workload Identity (K8s/AKS)_**
1. Refer to [Federated identity credential for an Azure AD application](https://azure.github.io/azure-workload-identity/docs/topics/federated-identity-credential.html#azure-portal-ui) for a complete guide on setting up a federated credential for workload identity.
Add a new entry under Federated credentials with the following configuration.
- Federated credential scenario: Select **Kubernetes accessing Azure resources**.
- [Cluster issuer URL](https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer#get-the-oidc-issuer-url): The OIDC issuer URL that your cluster is integrated with. For example: `https://{region}.oic.prod-aks.azure.com/{tenant_id}/{uuid}`.
- Namespace: Namespace of your Grafana deployment. For example: `grafana`.
@ -141,7 +131,6 @@ This section describes setting up basic application roles for Grafana within the
1. Click **App roles** and then **Create app role**.
1. Define a role corresponding to each Grafana role: Viewer, Editor, and Admin.
1. Choose a **Display name** for the role. For example, "Grafana Editor".
1. Set the **Allowed member types** to **Users/Groups**.
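For reference, the client-secret variant mentioned earlier maps onto the `[auth.azuread]` section roughly as follows. This is a sketch with placeholder values: only `client_authentication = client_secret_post` comes from the steps above; the remaining keys are the usual Azure AD OAuth settings and should be verified against your own app registration.

```bash
# Sketch: client-secret configuration for Azure AD / Entra ID OAuth.
# All values are placeholders; merge into your existing grafana.ini.
sudo tee -a /etc/grafana/grafana.ini >/dev/null <<'EOF'

[auth.azuread]
enabled = true
client_id = <application-id>
client_secret = <client-secret-value>
client_authentication = client_secret_post
auth_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
EOF
```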

View file

@ -450,7 +450,6 @@ Support for the Auth0 "audience" feature is not currently available in Grafana.
To set up Generic OAuth authentication with Auth0, follow these steps:
1. Create an Auth0 application using the following parameters:
- Name: Grafana
- Type: Regular Web Application
@ -485,7 +484,6 @@ To set up Generic OAuth authentication with Bitbucket, follow these steps:
1. Navigate to **Settings > Workspace setting > OAuth consumers** in BitBucket.
1. Create an application by selecting **Add consumer** and using the following parameters:
- Allowed Callback URLs: `https://<grafana domain>/login/generic_oauth`
1. Click **Save**.
@ -518,7 +516,6 @@ By default, a refresh token is included in the response for the **Authorization
To set up Generic OAuth authentication with OneLogin, follow these steps:
1. Create a new Custom Connector in OneLogin with the following settings:
- Name: Grafana
- Sign On Method: OpenID Connect
- Redirect URI: `https://<grafana domain>/login/generic_oauth`
@ -526,7 +523,6 @@ To set up Generic OAuth authentication with OneLogin, follow these steps:
- Login URL: `https://<grafana domain>/login/generic_oauth`
1. Add an app to the Grafana Connector:
- Display Name: Grafana
1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the client ID and client secret from the **SSO** tab of the app details page:

View file

@ -88,7 +88,6 @@ Ensure that you have access to the [Grafana configuration file](../../../configu
To configure GitLab authentication with Grafana, follow these steps:
1. Create an OAuth application in GitLab.
1. Set the redirect URI to `http://<my_grafana_server_name_or_ip>:<grafana_server_port>/login/gitlab`.
Ensure that the Redirect URI is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `/login/gitlab`.

View file

@ -87,7 +87,6 @@ Map LDAP groups to Grafana roles.
1. **Manage group mappings**:
When managing group mappings, the following fields are available. To add a new group mapping, click the **Add group mapping** button.
1. **Add a group DN mapping**: The name of the key used to extract the ID token.
1. **Add an organization role mapping**: Select the Basic Role mapped to this group.
1. **Add the organization ID membership mapping**: Map the group to an organization ID.
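When the same mappings are managed in `ldap.toml` rather than through the UI, each mapping is a `[[servers.group_mappings]]` entry. A sketch with placeholder values:

```bash
# Sketch: one LDAP group mapped to the Editor role in organization 1.
# The DN and file path are placeholders; merge with your existing ldap.toml.
sudo tee -a /etc/grafana/ldap.toml >/dev/null <<'EOF'

[[servers.group_mappings]]
group_dn = "cn=editors,ou=groups,dc=example,dc=com"
org_role = "Editor"
org_id = 1
EOF
```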

View file

@ -30,7 +30,6 @@ To follow this guide, ensure you have permissions in your Okta workspace to crea
1. For **Sign-in method**, select **OIDC - OpenID Connect**.
1. For **Application type**, select **Web Application** and click **Next**.
1. Configure **New Web App Integration Operations**:
- **App integration name**: Choose a name for the app.
- **Logo (optional)**: Add a logo.
- **Grant type**: Select **Authorization Code** and **Refresh Token**.
@ -54,7 +53,6 @@ To follow this guide, ensure you have permissions in your Okta workspace to crea
1. In the **Okta Admin Console**, select **Directory > Profile Editor**.
1. Select the Okta Application Profile you created previously (the default name for this is `<App name> User`).
1. Select **Add Attribute** and fill in the following fields:
- **Data Type**: string
- **Display Name**: Meaningful name. For example, `Grafana Role`.
- **Variable Name**: Meaningful name. For example, `grafana_role`.

View file

@ -50,7 +50,6 @@ Configuration in the API takes precedence over the configuration in the Grafana
Grafana supports the following SAML 2.0 bindings:
- From the Service Provider (SP) to the Identity Provider (IdP):
- `HTTP-POST` binding
- `HTTP-Redirect` binding
@ -181,12 +180,10 @@ By default, new Grafana users using SAML authentication will have an account cre
If you are also using SCIM provisioning for this Grafana application in Azure AD, it's crucial to align the user identifiers between SAML and SCIM for seamless operation. The unique identifier that links the SAML user to the SCIM provisioned user is determined by the `assertion_attribute_external_uid` setting in the Grafana SAML configuration. This `assertion_attribute_external_uid` should correspond to the `externalId` used in SCIM provisioning (typically set to the Azure AD `user.objectid`).
1. **Ensure Consistent Identifier in SAML Assertion:**
- The unique identifier from Azure AD (typically `user.objectid`) that you mapped to the `externalId` attribute in Grafana in your SCIM provisioning setup **must also be sent as a claim in the SAML assertion.** For more details on SCIM, refer to the [SCIM provisioning documentation](/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-scim-provisioning/).
- In the Azure AD Enterprise Application, under **Single sign-on** > **Attributes & Claims**, ensure you add a claim that provides this identifier. For example, you might add a claim named `UserID` (or similar, like `externalId`) that sources its value from `user.objectid`.
2. **Configure Grafana SAML Settings for SCIM:**
- In the `[auth.saml]` section of your Grafana configuration, set `assertion_attribute_external_uid` to the name of the SAML claim you configured in the previous step (e.g., `userUID` or the full URI like `http://schemas.microsoft.com/identity/claims/objectidentifier` if that's how Azure AD sends it).
- The `assertion_attribute_login` setting should still be configured to map to the attribute your users will log in with (e.g., `userPrincipalName`, `mail`).
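Taken together, the two settings end up in `[auth.saml]` along these lines. The claim names below are examples drawn from the text, not required values; use whatever claim names you actually configured in Azure AD.

```bash
# Sketch: link the SCIM external UID and the login attribute in grafana.ini.
sudo tee -a /etc/grafana/grafana.ini >/dev/null <<'EOF'

[auth.saml]
assertion_attribute_external_uid = http://schemas.microsoft.com/identity/claims/objectidentifier
assertion_attribute_login = userPrincipalName
EOF
```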

View file

@ -28,7 +28,6 @@ Grafana supports user authentication through Okta, which is useful when you want
1. Click **Create**.
1. On the **General Settings** tab, enter a name for your Grafana integration. You can also upload a logo.
1. On the **Configure SAML** tab, enter the SAML information related to your Grafana instance:
- In the **Single sign on URL** field, use the `/saml/acs` endpoint URL of your Grafana instance, for example, `https://grafana.example.com/saml/acs`.
- In the **Audience URI (SP Entity ID)** field, use the `/saml/metadata` endpoint URL, by default it is the `/saml/metadata` endpoint of your Grafana instance (for example `https://example.grafana.com/saml/metadata`). This could be configured differently, but the value here must match the `entity_id` setting of the SAML settings of Grafana.
- Leave the default values for **Name ID format** and **Application username**.

View file

@ -64,7 +64,6 @@ Sign in to Grafana and navigate to **Administration > Authentication > Configure
### 2. Sign Requests Section
1. In the **Sign requests** field, specify whether you want the outgoing requests to be signed, and, if so, then:
1. Provide a certificate and a private key that will be used by the service provider (Grafana) and the SAML IdP.
Use the [PKCS #8](https://en.wikipedia.org/wiki/PKCS_8) format to issue the private key.
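If you need to generate the pair yourself, something along these lines works; it is a sketch, and the filenames and subject are assumptions.

```bash
# Sketch: create a self-signed certificate and an unencrypted private key
# for signing SAML requests. Recent OpenSSL releases emit the key in
# PKCS #8 form ("BEGIN PRIVATE KEY"); the last command converts explicitly
# if the key is still in the legacy PKCS #1 format.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout saml-signing.key -out saml-signing.crt \
  -subj "/CN=grafana.example.com"
openssl pkcs8 -topk8 -nocrypt -in saml-signing.key -out saml-signing-pkcs8.key
```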

View file

@ -33,7 +33,6 @@ You can use an encryption key from AWS Key Management Service to encrypt secrets
<br><br>a. Add a new section to the configuration file, with a name in the format of `[security.encryption.awskms.<KEY-NAME>]`, where `<KEY-NAME>` is any name that uniquely identifies this key among other provider keys.
<br><br>b. Fill in the section with the following values:
<br>
- `key_id`: a reference to a key stored in the KMS. This can be a key ID, a key Amazon Resource Name (ARN), an alias name, or an alias ARN. If you are using an alias, use the prefix `alias/`. To specify a KMS key in a different AWS account, use its ARN or alias. For more information about how to retrieve a key ID from AWS, refer to [Finding the key ID and key ARN](https://docs.aws.amazon.com/kms/latest/developerguide/find-cmk-id-arn.html).<br>
| `key_id` option | Example value |
| --- | --- |

View file

@ -35,7 +35,6 @@ You can use an encryption key from Azure Key Vault to encrypt secrets in the Gra
<br><br>a. Add a new section to the configuration file, with a name in the format of `[security.encryption.azurekv.<KEY-NAME>]`, where `<KEY-NAME>` is any name that uniquely identifies this key among other provider keys.
<br><br>b. Fill in the section with the following values:
<br>
- `tenant_id`: the **Directory ID** (tenant) from the application that you registered.
- `client_id`: the **Application ID** (client) from the application that you registered.
- `client_secret`: the VALUE of the secret that you generated in your app. (Don't use the Secret ID).

View file

@ -33,7 +33,6 @@ You can use an encryption key from Google Cloud Key Management Service to encryp
<br><br>a. Add a new section to the configuration file, with a name in the format of `[security.encryption.googlekms.<KEY-NAME>]`, where `<KEY-NAME>` is any name that uniquely identifies this key among other provider keys.
<br><br>b. Fill in the section with the following values:
<br>
- `key_id`: encryption key ID, refer to [Getting the ID for a Key](https://cloud.google.com/kms/docs/getting-resource-ids#getting_the_id_for_a_key_and_version).
- `credentials_file`: full path to service account key JSON file on your computer.

View file

@ -31,7 +31,6 @@ You can use an encryption key from Hashicorp Vault to encrypt secrets in the Gra
<br><br>a. Add a new section to the configuration file, with a name in the format of `[security.encryption.hashicorpvault.<KEY-NAME>]`, where `<KEY-NAME>` is any name that uniquely identifies this key among other provider keys.
<br><br>b. Fill in the section with the following values:
<br>
- `token`: a periodic service token used to authenticate within Hashicorp Vault.
- `url`: URL of the Hashicorp Vault server.
- `transit_engine_path`: mount point of the transit engine.
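As a sketch, the resulting section looks roughly like this with placeholder values; check the full documentation for any additional keys your Vault setup needs.

```bash
# Sketch: a Hashicorp Vault encryption provider section in grafana.ini.
# Key name, token, and URL are placeholders.
sudo tee -a /etc/grafana/grafana.ini >/dev/null <<'EOF'

[security.encryption.hashicorpvault.example-key]
token = <periodic-service-token>
url = https://vault.example.com:8200
transit_engine_path = transit
EOF
```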

View file

@ -68,12 +68,10 @@ When you enable SCIM in Grafana, the following requirements and restrictions app
1. **Use the same identity provider for user provisioning and for authentication flow**: You must use the same identity provider for both authentication and user provisioning.
2. **Authentication restrictions**:
- Users attempting to log in through other methods (LDAP, OAuth) will be blocked
- By default, users who are not provisioned through SCIM cannot access Grafana
3. **Security restriction**: When using SAML, the login authentication flow requires the SAML assertion exchange between the Identity Provider and Grafana to include the `userUID` SAML assertion with the user's unique identifier at the Identity Provider.
- Configure `userUID` SAML assertion in [Azure AD](/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-authentication/saml/configure-saml-with-azuread/#configure-saml-assertions-when-using-scim-provisioning)
- Configure `userUID` SAML assertion in [Okta](/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-authentication/saml/configure-saml-with-okta/#configure-saml-assertions-when-using-scim-provisioning)

View file

@ -57,13 +57,11 @@ For detailed configuration steps specific to the identity provider, see:
SCIM uses a specific process to establish and maintain user identity between the identity provider and Grafana:
1. **Initial user lookup:**
- The administrator configures SCIM at the Identity Provider, defining the **Unique identifier field**
- The identity provider looks up each user in Grafana using this unique identifier field as a filter
- The identity provider expects a single result from Grafana for each user
2. **Identity linking based on lookup results:**
- **If there's a single matching result:** The identity provider retrieves the user's unique ID at Grafana, saves it, confirms it can fetch the user's information, and updates the user's information in Grafana
- **If there are no matching results:** The identity provider attempts to create the user in Grafana. If successful, it retrieves and saves the user's unique ID for future operations. If there's a conflict with an existing user, the identity provider flags the error and Grafana logs the error message
- The identity provider learns the relationship between the found Grafana user and the Grafana internal ID
@ -97,12 +95,10 @@ For users who already exist in the Grafana instance:
To prevent conflicts and maintain consistent user management, disable or restrict other provisioning methods when implementing SCIM. This ensures that all new users are created through SCIM and prevents duplicate or conflicting user records.
- SAML Just-in-Time (JIT) provisioning:
- Disable `allow_sign_up` in SAML settings to prevent automatic user creation
- Existing JIT-provisioned users will continue to work but should be migrated to SCIM
- Terraform or API provisioning:
- Stop creating new users through these methods
- Existing users will continue to work but should be migrated to SCIM
- Consider removing or archiving Terraform user creation resources
@ -141,13 +137,11 @@ The migration process uses the same [user identification mechanism](#how-scim-id
### Migration steps
1. **Prepare the identity provider:**
- Ensure all existing Grafana users have corresponding accounts in your IDP
- Verify that the unique identifier field (e.g., email, username, or object ID) matches between systems
- Configure SCIM application in your IDP but don't assign users yet
2. **Configure SCIM in Grafana:**
- Set up SCIM endpoint and authentication as described in [Configure SCIM in Grafana](../../configure-scim-provisioning#configure-scim-in-grafana)
- Enable `user_sync_enabled = true`
- Configure the unique identifier field to match your IDP setup
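As a sketch, the setting from step 2 (together with the `allow_non_provisioned_users` option shown in the snippet below) lives in the SCIM section of `grafana.ini`; the section name here is an assumption, so confirm it against your Grafana version's documentation.

```bash
# Sketch: enable SCIM user sync; the [auth.scim] section name is assumed.
sudo tee -a /etc/grafana/grafana.ini >/dev/null <<'EOF'

[auth.scim]
user_sync_enabled = true
EOF
```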
@ -165,7 +159,6 @@ allow_non_provisioned_users = true
{{< /admonition >}}
3. **Test the matching mechanism:**
- Use the SCIM API to verify that existing users can be found using the unique identifier:
```bash
@ -176,7 +169,6 @@ allow_non_provisioned_users = true
- This should return exactly one user record for each existing user
4. **Assign users in the IDP:**
- Begin assigning existing users to the Grafana application in your IDP
- The SCIM identification process will automatically link existing Grafana users with their IDP identities
- Monitor the process for any conflicts or errors

View file

@ -51,7 +51,6 @@ If you have already grouped some users into a team, then you can synchronize tha
1. Insert the value of the group you want to sync with. This becomes the Grafana `GroupID`.
Examples:
- For LDAP, this is the LDAP distinguished name (DN) of the LDAP group you want to synchronize with the team.
- For Auth Proxy, this is the value we receive as part of the custom `Groups` header.

View file

@ -109,7 +109,6 @@ When you create a new namespace in Kubernetes, you can better organize, allocate
```
Where:
- `helm install`: Installs the chart by deploying it on the Kubernetes cluster
- `my-grafana`: The logical chart name that you provided
- `grafana/grafana`: The repository and package name to install
@ -147,7 +146,6 @@ This section describes the steps you must complete to access Grafana via web bro
```
This command will print out the chart notes. The `NOTES` output provides complete instructions about:
- How to decode the login password for the Grafana admin account
- How to access the Grafana service from a web browser
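End to end, the install and the two follow-up steps from the chart notes look roughly like this. The release name matches the text; the namespace, local port, and secret field are assumptions based on the chart's defaults.

```bash
# Sketch: install the chart, decode the generated admin password, and
# expose the service locally for a browser.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install my-grafana grafana/grafana --namespace monitoring --create-namespace

# Decode the admin password created by the chart
kubectl get secret --namespace monitoring my-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo

# Open http://localhost:3000 once the port-forward is running
kubectl port-forward --namespace monitoring svc/my-grafana 3000:80
```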

View file

@ -391,9 +391,7 @@ Instead of using the `annotate` flag, you can still use the `--record` flag. How
1. In the editor, change the container image under the `kind: Deployment` section.
For example:
- From
- `yaml image: grafana/grafana-oss:10.0.1`
- To
@ -573,7 +571,6 @@ This section outlines general instructions for provisioning Grafana resources wi
```
You can follow the same process to provision additional Grafana resources by supplying the following folders:
- `provisioning/dashboards`
- `provisioning/datasources`
- `provisioning/plugins`
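One way to supply those folders when running Grafana in Docker is to bind-mount them into the container's provisioning directory. A sketch, assuming the default `/etc/grafana/provisioning` layout inside the image:

```bash
# Sketch: mount local provisioning folders into the container's default
# provisioning path. Image tag and local paths are assumptions.
docker run -d -p 3000:3000 --name grafana \
  -v "$(pwd)/provisioning/datasources:/etc/grafana/provisioning/datasources" \
  -v "$(pwd)/provisioning/dashboards:/etc/grafana/provisioning/dashboards" \
  -v "$(pwd)/provisioning/plugins:/etc/grafana/provisioning/plugins" \
  grafana/grafana-oss:latest
```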

View file

@ -35,7 +35,6 @@ To install Grafana on macOS using Homebrew, complete the following steps:
```
The brew package downloads and untars the files into:
- `/usr/local/Cellar/grafana/[version]` (Intel)
- `/opt/homebrew/Cellar/grafana/[version]` (Apple Silicon)
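For context, the install and start commands that lead to this layout are the standard Homebrew ones:

```bash
# Install Grafana with Homebrew and run it as a background service.
brew update
brew install grafana
brew services start grafana
```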

View file

@ -131,7 +131,6 @@ The instructions provided in this section are for a Debian-based Linux system. F
```
These commands:
- Uninstall `certbot` from your system if it has been installed using a package manager
- Install `certbot` using `snapd`
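The commands themselves sit outside this excerpt; on a Debian-based system they typically look like the following (a sketch, not the guide's exact text):

```bash
# Sketch: replace a package-manager certbot with the snap package.
sudo apt-get remove -y certbot
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot   # optional convenience symlink
```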

View file

@ -18,7 +18,6 @@ labels:
To set up authentication:
1. Select an authentication method from the drop-down list:
- **Basic authentication**: Authenticates your data source using a username and password
- **Forward OAuth identity**: Forwards the OAuth access token and the OIDC ID token, if available, of the user querying to the data source
- **No authentication**: No authentication is required to access the data source

View file

@ -66,7 +66,6 @@ To upgrade Grafana installed using RPM or YUM complete the following steps:
This enables you to upgrade Grafana without the risk of losing your configuration changes.
1. Perform one of the following steps based on your installation.
- If you [downloaded an RPM package](https://grafana.com/grafana/download) to install Grafana, then complete the steps documented in [Install Grafana on Red Hat, RHEL, or Fedora](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/installation/redhat-rhel-fedora/) or [Install Grafana on SUSE or openSUSE](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/installation/suse-opensuse/) to upgrade Grafana.
- If you used the Grafana YUM repository, run the following command:

View file

@ -51,17 +51,14 @@ Learning about alert instances and notification policies is useful if you have m
There are different ways you can follow along with this tutorial.
- **Grafana Cloud**
- As a Grafana Cloud user, you don't have to install anything. [Create your free account](http://www.grafana.com/auth/sign-up/create-user).
Continue to [Alert instances](#alert-instances).
- **Interactive learning environment**
- Alternatively, you can try out this example in our interactive learning environment: [Get started with Grafana Alerting - Alert routing](https://killercoda.com/grafana-labs/course/grafana/alerting-get-started-pt2/). It's a fully configured environment with all the dependencies already installed.
- **Grafana OSS**
- If you opt to run a Grafana stack locally, ensure you have the following applications installed:
- [Docker Compose](https://docs.docker.com/get-docker/) (included in Docker for Desktop for macOS and Windows)
@ -252,7 +249,6 @@ Grafana includes a [test data source](https://grafana.com/docs/grafana/latest/da
The above CSV data simulates a data source returning multiple time series, each leading to the creation of an alert instance for that specific time series. Note that the data returned matches the example in the [Alert instance](#alert-instances) section.
1. In the **Alert condition** section:
- Keep `Last` as the value for the reducer function (`WHEN`), and `IS ABOVE 1000` as the threshold value. This is the value above which the alert rule should trigger.
1. Click **Preview alert rule condition** to run the queries.

View file

@ -68,17 +68,14 @@ In this tutorial, you will:
There are different ways you can follow along with this tutorial.
- **Grafana Cloud**
- As a Grafana Cloud user, you don't have to install anything. [Create your free account](http://www.grafana.com/auth/sign-up/create-user).
Continue to [How alert rule grouping works](#how-alert-rule-grouping-works).
- **Interactive learning environment**
- Alternatively, you can try out this example in our interactive learning environment: [Get started with Grafana Alerting - Grouping](https://killercoda.com/grafana-labs/course/grafana/alerting-get-started-pt3/). It's a fully configured environment with all the dependencies already installed.
- **Grafana OSS**
- If you opt to run a Grafana stack locally, ensure you have the following applications installed:
- [Docker Compose](https://docs.docker.com/get-docker/) (included in Docker for Desktop for macOS and Windows)
@ -227,16 +224,13 @@ Following the above example, [notification policies](ref:notification-policies)
<!-- INTERACTIVE ignore START -->
1. Sign in to Grafana:
- **Grafana Cloud** users: Log in via Grafana Cloud.
- **OSS users**: Go to [http://localhost:3000](http://localhost:3000).
1. Navigate to **Notification Policies**:
- Go to **Alerts & IRM > Alerting > Notification Policies**.
1. Add a child policy:
- In the Default policy, click **+ New child policy**.
- **Label**: `region`
- **Operator**: `=`
@ -245,31 +239,26 @@ Following the above example, [notification policies](ref:notification-policies)
This label matches alert rules where the region label is us-west.
1. Choose a **Contact point**:
- Select **Webhook**.
If you don't have any contact points, add a [Contact point](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/#add-a-contact-point).
1. Enable Continue matching:
- Turn on **Continue matching subsequent sibling nodes** so the evaluation continues even after one or more labels (i.e. region label) match.
1. Override grouping settings:
- Toggle **Override grouping**.
- **Group by**: Add `region` as label. Remove any existing labels.
**Group by** consolidates alerts that share the same grouping label into a single notification. For example, all alerts with `region=us-west` will be combined into one notification, making it easier to manage and reducing alert fatigue.
1. Set custom timing:
- Toggle **Override general timings**.
- **Group interval**: `2m`. This ensures follow-up notifications for the same alert group will be sent at intervals of 2 minutes. While the default is 5 minutes, we chose 2 minutes here to provide faster feedback for demonstration purposes.
**Timing options** control how often notifications are sent and can help balance timely alerting with minimizing noise.
1. Save and repeat:
- Repeat the steps above for `region = us-east` but without overriding grouping and timing options. Use a different webhook endpoint as the contact point.
{{< figure src="/media/docs/alerting/notificaiton-policies-region.png" max-width="750px" alt="Two nested notification policies to route and group alert notifications" >}}
@ -290,7 +279,6 @@ Following the above example, [notification policies](ref:notification-policies)
1. Visit [http://localhost:3000](http://localhost:3000), where Grafana should be running
1. Navigate to **Alerts & IRM > Alerting > Notification policies**.
1. Add a child policy:
- In the Default policy, click **+ New child policy**.
- **Label**: `region`
- **Operator**: `=`
@ -299,31 +287,26 @@ Following the above example, [notification policies](ref:notification-policies)
This label matches alert rules where the region label is us-west
1. Choose a **Contact point**:
- Select **Webhook**.
If you don't have any contact points, add a Contact point.
1. Enable Continue matching:
- Turn on **Continue matching subsequent sibling nodes** so the evaluation continues even after one or more labels (i.e. region label) match.
1. Override grouping settings:
- Toggle **Override grouping**.
- **Group by**: `region`.
**Group by** consolidates alerts that share the same grouping label into a single notification. For example, all alerts with `region=us-west` will be combined into one notification, making it easier to manage and reducing alert fatigue.
1. Set custom timing:
- Toggle **Override general timings**.
- **Group interval**: `2m`. This ensures follow-up notifications for the same alert group will be sent at intervals of 2 minutes. While the default is 5 minutes, we chose 2 minutes here to provide faster feedback for demonstration purposes.
**Timing options** control how often notifications are sent and can help balance timely alerting with minimizing noise.
1. Save and repeat:
- Repeat for `region = us-east` with a different webhook or a different contact point.
**Note**: Label matchers are combined using the `AND` logical operator. This means that all matchers must be satisfied for a rule to be linked to a policy. If you attempt to use the same label key (e.g., region) with different values (e.g., us-west and us-east), the condition will not match, because it is logically impossible for a single key to have multiple values simultaneously.
@ -355,7 +338,6 @@ Grafana includes a [test data source](https://grafana.com/docs/grafana/latest/da
1. From the drop-down menu, select **TestData** data source.
1. From **Scenario** select **CSV Content**.
1. Copy in the following CSV data:
- Select **TestData** as the data source.
- Set **Scenario** to **CSV Content**.
- Use the following CSV data:
@ -375,7 +357,6 @@ Grafana includes a [test data source](https://grafana.com/docs/grafana/latest/da
The returned data simulates a data source returning multiple time series, each leading to the creation of an alert instance for that specific time series.
1. In the **Alert condition** section:
- Keep `Last` as the value for the reducer function (`WHEN`), and `IS ABOVE 75` as the threshold value. This is the value above which the alert rule should trigger.
1. Click **Preview alert rule condition** to run the queries.

View file

@ -100,17 +100,14 @@ There are different ways you can follow along with this tutorial.
> Note: Some of the templating features in Grafana Alerting discussed in this tutorial are currently available in Grafana Cloud but have not yet been released to the Open Source (OSS) version.
- **Grafana Cloud**
- As a Grafana Cloud user, you don't have to install anything. [Create your free account](http://www.grafana.com/auth/sign-up/create-user).
Continue to [how templating works](#how-templating-works).
- **Interactive learning environment**
- Alternatively, you can try out this example in our interactive learning environment: [Get started with Grafana Alerting - Templating](https://killercoda.com/grafana-labs/course/grafana/alerting-get-started-pt4/). It's a fully configured environment with all the dependencies already installed.
- **Grafana OSS**
- If you opt to run a Grafana stack locally, ensure you have the following applications installed:
- [Docker Compose](https://docs.docker.com/get-docker/) (included in Docker for Desktop for macOS and Windows)
@ -215,7 +212,6 @@ Now that we've introduced how templating works, lets move on to the next step
### Create an alert rule
1. Sign in to Grafana:
- **Grafana Cloud** users: Log in via Grafana Cloud.
- **OSS users**: Go to [http://localhost:3000](http://localhost:3000).
@ -224,7 +220,6 @@ Now that we've introduced how templating works, lets move on to the next step
- Click **+ New alert rule**.
- Enter an **alert rule name**. Name it `High CPU usage`
1. **Define query and alert condition** section:
- Select TestData data source from the drop-down menu.
[TestData](https://grafana.com/docs/grafana/latest/datasources/testdata/) is included in the demo environment. If you're working in Grafana Cloud or your own local Grafana instance, you can add the data source through the Connections menu.
@ -243,7 +238,6 @@ Now that we've introduced how templating works, lets move on to the next step
This dataset simulates a data source returning multiple time series, with each time series generating a separate alert instance.
1. **Alert condition** section:
- Keep `Last` as the value for the reducer function (`WHEN`), and `IS ABOVE 75` as the threshold value, representing CPU usage above 75%. This is the value above which the alert rule should trigger.
- Click **Preview alert rule condition** to run the queries.
@ -252,7 +246,6 @@ Now that we've introduced how templating works, lets move on to the next step
{{< figure src="/media/docs/alerting/part-4-firing-instances-preview.png" max-width="1200px" caption="Preview of a query returning alert instances" >}}
1. Add folders and labels section:
- In **Folder**, click **+ New folder** and enter a name. For example: `System metrics`. This folder contains our alert rules.
Note: while it's possible to template labels here, in this tutorial, we focus on templating the summary and annotations fields instead.
@ -265,13 +258,11 @@ Now that we've introduced how templating works, lets move on to the next step
1. **Configure notifications** section:
Select who should receive a notification when an alert rule fires.
- Select a **Contact point**. If you don't have any contact points, click _View or create contact points_.
1. **Configure notification message** section:
In this step, you'll configure the **summary** and **description** annotations to make your alert notifications informative and easy to understand. These annotations use templates to dynamically include key information about the alert.
- **Summary** annotation: Enter the following code as the value for the annotation:
```go
@ -412,7 +403,6 @@ In this tutorial, we learned how to use templating in Grafana Alerting to create
To deepen your understanding of Grafana's templating, explore the following resources:
- **Overview of the functions and operators used in templates**:
- [Notification template language](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/language/)
- [Alert rule template language](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/templates/language/)

View file

@ -37,7 +37,6 @@ In this tutorial you will learn how to:
## Before you begin
- **Interactive learning environment**
- Alternatively, you can [try out this example in our interactive learning environment](https://killercoda.com/grafana-labs/course/grafana/alerting-get-started-pt5/). It's a fully configured environment with all the dependencies already installed.
- **Grafana OSS**
@ -174,14 +173,12 @@ Notification policies route alert instances to contact points via label matchers
Although our application doesn't explicitly include an `environment` label, we can rely on other labels like `instance` or `deployment`, which may contain keywords (like prod or staging) that indicate the environment.
1. Sign in to Grafana:
- **Grafana Cloud** users: Log in via Grafana Cloud.
- **OSS users**: Go to [http://localhost:3000](http://localhost:3000).
1. Navigate to **Alerts & IRM > Alerting > Notification Policies**.
1. Add a child policy:
- In the **Default policy**, click **+ New child policy**.
- **Label**: `environment`.
- **Operator**: `=`.
@ -189,17 +186,14 @@ Although our application doesn't explicitly include an `environment` label, we c
- This label matches alert rules where the environment label is `prod`.
1. Choose a **contact point**:
- If you dont have any contact points, add a [Contact point](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/#add-a-contact-point).
For a quick test, you can use a public webhook from [webhook.site](https://webhook.site/) to capture and inspect alert notifications. If you choose this method, select **Webhook** from the drop-down menu in contact points.
1. Enable continue matching:
- Turn on **Continue matching subsequent sibling nodes** so the evaluation continues even after one or more labels (i.e., _environment_ labels) match.
1. Save and repeat
- Create another child policy by following the same steps.
- Use `environment = staging` as the label/value pair.
- Feel free to use a different contact point.
@ -234,7 +228,6 @@ Make it short and descriptive, as this will appear in your alert notification. F
```
1. **Alert condition** section:
- Enter `75` as the value for **WHEN QUERY IS ABOVE** to set the threshold for the alert.
- Click **Preview alert rule condition** to run the queries.

View file

@ -39,7 +39,6 @@ In this tutorial you will learn how to:
## Before you begin
- **Interactive learning environment**
- Alternatively, you can [try out this example in our interactive learning environment](https://killercoda.com/grafana-labs/course/grafana/alerting-get-started-pt6/). It's a fully configured environment with all the dependencies already installed.
- **Grafana OSS**
@ -150,12 +149,10 @@ To keep track of these metrics you can set up a visualization for CPU usage and
The time-series visualization supports alert rules to provide more context in the form of annotations and alert rule state. Follow these steps to create a visualization to monitor the application's metrics.
1. Log in to Grafana:
- Navigate to [http://localhost:3000](http://localhost:3000), where Grafana should be running.
- Username and password: `admin`
1. Create a time series panel:
- Navigate to **Dashboards**.
- Click **+ Create dashboard**.
- Click **+ Add visualization**.
@ -163,7 +160,6 @@ The time-series visualization supports alert rules to provide more context in th
- Enter a title for your panel, e.g., **CPU and Memory Usage**.
1. Add queries for metrics:
- In the query area, copy and paste the following PromQL query:
Switch to **Code** mode if not already selected.
@ -177,7 +173,6 @@ The time-series visualization supports alert rules to provide more context in th
This query should display the simulated CPU usage data for the **prod** environment.
1. Add memory usage query:
- Click **+ Add query**.
- In the query area, paste the following PromQL query:
@ -219,7 +214,6 @@ Make it short and descriptive, as this will appear in your alert notification. F
```
1. **Alert condition**
- Enter 75 as the value for **WHEN QUERY IS ABOVE** to set the threshold for the alert.
- Click **Preview alert rule condition** to run the queries.

View file

@ -57,17 +57,14 @@ After you have completed Part 1, dont forget to explore the advanced but esse
There are different ways you can follow along with this tutorial.
- **Grafana Cloud**
- As a Grafana Cloud user, you don't have to install anything. [Create your free account](http://www.grafana.com/auth/sign-up/create-user).
Continue to [Create a contact point](#create-a-contact-point).
- **Interactive learning environment**
- Alternatively, you can [try out this example in our interactive learning environment](https://killercoda.com/grafana-labs/course/grafana/alerting-get-started/). It's a fully configured environment with all the dependencies already installed.
- **Grafana OSS**
- If you opt to run a Grafana stack locally, ensure you have the following applications installed:
- [Docker Compose](https://docs.docker.com/get-docker/) (included in Docker for Desktop for macOS and Windows)
@ -194,7 +191,6 @@ Grafana includes a [test data source](https://grafana.com/docs/grafana/latest/da
1. Select the **TestData** data source from the drop-down menu.
1. In the **Alert condition** section:
- Keep **Random Walk** as the _Scenario_.
- Keep `Last` as the value for the reducer function (`WHEN`), and `IS ABOVE 0` as the threshold value. This is the value above which the alert rule should trigger.

View file

@ -65,12 +65,10 @@ There are different ways you can follow along with this tutorial.
- **Grafana OSS**
To run a Grafana stack locally, ensure you have the following applications installed:
- [Docker Compose](https://docs.docker.com/get-docker/) (included in Docker for Desktop for macOS and Windows)
- [Git](https://git-scm.com/)
- **Interactive learning environment**
- Alternatively, you can [try out this example in our interactive learning environment](https://killercoda.com/grafana-labs/course/grafana/alerting-loki-logs). It's a fully configured environment with all the dependencies already installed.
## Set up the Grafana stack
@ -227,7 +225,6 @@ In this section, we use the default options for Grafana-managed alert rule creat
{{< /docs/ignore >}}
1. In the **Alert condition** section:
- Keep `Last` as the value for the reducer function (`WHEN`), and `0` as the threshold value. This is the value above which the alert rule should trigger.
1. Click **Preview alert rule condition** to run the query.

View file

@ -375,7 +375,6 @@ The most basic alert rule consists of two parts:
> {{< /docs/ignore >}}
Some popular channels include:
- [Email](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/configure-email/)
- [Webhooks](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/webhook-notifier/)
- [Telegram](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/configure-telegram/)

View file

@ -124,14 +124,12 @@ Here is an overview of version support through 2026:
The level of support changes as new versions of Grafana are released. Here are the key details:
- **Full Support**:
- All new features
- All bug fixes
- Security patches
- Regular updates
- **Security & Critical Bugs Only**:
- Security vulnerability patches
- Critical bug fixes that cause feature degradation
- No new features

View file

@ -62,7 +62,6 @@ For configuring controls outside of playlist playback, you can use the following
- The [variable usage check](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/variables/inspect-variable/) is not yet available.
- Editing a panel:
- The **Library panels** tab is not available anymore. You can replace a library panel from the panel menu.
- The **Overrides** tab is not in panel options (coming in Grafana v11.3.0). Overrides are shown at the bottom of the option list.
- The drop-down menu to collapse the visualization picker is missing (coming in Grafana v11.3.0).

View file

@ -233,7 +233,7 @@
"postcss-loader": "8.1.1",
"postcss-reporter": "7.1.0",
"postcss-scss": "4.0.9",
"prettier": "3.4.2",
"prettier": "3.6.2",
"publint": "^0.3.12",
"react-refresh": "0.14.0",
"react-select-event": "5.5.1",
@ -463,7 +463,7 @@
},
"packageManager": "yarn@4.9.2",
"dependenciesMeta": {
"prettier@3.4.2": {
"prettier@3.6.2": {
"unplugged": true
}
},

View file

@ -53,7 +53,6 @@ Every commit to main that has changes within the `packages` directory is a subje
3. Run `yarn packages:build` script that compiles distribution code in `packages/grafana-*/dist`.
4. Run `yarn packages:pack` script to compress each package into `npm-artifacts/*.tgz` files. This is required for yarn to replace properties in the package.json files declared in the `publishConfig` property.
5. Depending on whether or not it's a prerelease:
- When releasing a prerelease run `./scripts/publish-npm-packages.sh --dist-tag 'next' --registry 'https://registry.npmjs.org/'` to publish new versions.
- When releasing a stable version run `./scripts/publish-npm-packages.sh --dist-tag 'latest' --registry 'https://registry.npmjs.org/'` to publish new versions.
- When releasing a test version run `./scripts/publish-npm-packages.sh --dist-tag 'test' --registry 'https://registry.npmjs.org/'` to publish test versions.

View file

@ -92,7 +92,11 @@ describe('Plugin Extensions / Utils', () => {
// Check if the right components are selected
const limitedComponents = getLimitedComponentsToRender({ props, components, limit: 3 });
const rendered = render(
<>{limitedComponents?.map((Component, index) => <Component key={index} {...props} />)}</>
<>
{limitedComponents?.map((Component, index) => (
<Component key={index} {...props} />
))}
</>
);
expect(rendered.getByText('Test 1')).toBeInTheDocument();
@ -127,7 +131,11 @@ describe('Plugin Extensions / Utils', () => {
// Check if the right components are selected
const limitedComponents = getLimitedComponentsToRender({ props, components, limit: 1 });
const rendered = render(
<>{limitedComponents?.map((Component, index) => <Component key={index} {...props} />)}</>
<>
{limitedComponents?.map((Component, index) => (
<Component key={index} {...props} />
))}
</>
);
expect(rendered.getByText('Test 1')).toBeInTheDocument();
@ -158,7 +166,11 @@ describe('Plugin Extensions / Utils', () => {
pluginId: ['plugin-id-2', 'plugin-id-3'],
});
const rendered = render(
<>{limitedComponents?.map((Component, index) => <Component key={index} {...props} />)}</>
<>
{limitedComponents?.map((Component, index) => (
<Component key={index} {...props} />
))}
</>
);
expect(rendered.getByText('Test 2')).toBeInTheDocument();

View file

@ -30,7 +30,6 @@ If you were using additional props, there are roughly 3 categories of changes:
- `hideHorizontalTrack` and `hideVerticalTrack` have been removed.
- You can achieve the same behavior by setting either `overflowX` or `overflowY` to `hidden`. These names more closely match the CSS properties they represent.
- `scrollTop` and `setScrollTop`.
- You can achieve the same behavior by using the `ref` prop to get a reference to the `ScrollContainer` component, and then calling `scrollTo` on that reference.
- Before:

View file

@ -116,7 +116,9 @@ export const RolesLabel = ({ showBuiltInRole, numberOfRoles, appliedRoles }: Rol
<Tooltip
content={
<div className={styles.tooltip}>
{appliedRoles?.map((role) => <p key={role.uid}>{role.group + ':' + (role.displayName || role.name)}</p>)}
{appliedRoles?.map((role) => (
<p key={role.uid}>{role.group + ':' + (role.displayName || role.name)}</p>
))}
</div>
}
>

View file

@ -14,7 +14,13 @@ export const OrgUnits = ({ units, icon }: OrgUnitProps) => {
return units.length > 1 ? (
<Tooltip
placement={'top'}
content={<Stack direction={'column'}>{units?.map((unit) => <span key={unit.name}>{unit.name}</span>)}</Stack>}
content={
<Stack direction={'column'}>
{units?.map((unit) => (
<span key={unit.name}>{unit.name}</span>
))}
</Stack>
}
>
<Content icon={icon}>{units.length}</Content>
</Tooltip>

View file

@ -25,12 +25,16 @@ export const ErrorContainerUnconnected = ({ error, warning, resetError, resetWar
<div>
{error && (
<Alert title={error.message} onRemove={() => resetError()}>
{error.errors?.map((e, i) => <div key={i}>{e}</div>)}
{error.errors?.map((e, i) => (
<div key={i}>{e}</div>
))}
</Alert>
)}
{warning && (
<Alert title={warning.message} onRemove={() => resetWarning()} severity="warning">
{warning.errors?.map((e, i) => <div key={i}>{e}</div>)}
{warning.errors?.map((e, i) => (
<div key={i}>{e}</div>
))}
</Alert>
)}
</div>

View file

@ -61,7 +61,9 @@ export function RowItemRepeater({
return (
<>
<row.Component model={row} key={row.state.key!} />
{repeatedRows?.map((rowClone) => <rowClone.Component model={rowClone} key={rowClone.state.key!} />)}
{repeatedRows?.map((rowClone) => (
<rowClone.Component model={rowClone} key={rowClone.state.key!} />
))}
</>
);
}

Some files were not shown because too many files have changed in this diff.