mattermost/server/einterfaces/cluster.go

// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.

package einterfaces

import (
	"github.com/mattermost/mattermost/server/public/model"
	"github.com/mattermost/mattermost/server/public/shared/request"
)
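
// ClusterMessageHandler is the signature for handlers registered via
// RegisterClusterMessageHandler; it is invoked for each received cluster
// message of the corresponding event.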
type ClusterMessageHandler func(msg *model.ClusterMessage)
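
// ClusterInterface exposes the operations used to communicate and coordinate
// with the other nodes of a high-availability cluster.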
type ClusterInterface interface {
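	// StartInterNodeCommunication starts communication with the other nodes
	// in the cluster.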
	StartInterNodeCommunication()
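	// StopInterNodeCommunication stops communication with the other nodes
	// in the cluster.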
	StopInterNodeCommunication()
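	// RegisterClusterMessageHandler registers a handler that is invoked
	// whenever a cluster message for the given event is received.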
	RegisterClusterMessageHandler(event model.ClusterEvent, crm ClusterMessageHandler)
	GetClusterId() string
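	// IsLeader reports whether this node is currently the cluster leader.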
	IsLeader() bool
	// HealthScore returns a number which is indicative of how well an instance is meeting
	// the soft real-time requirements of the protocol. Lower numbers are better,
	// and zero means "totally healthy".
	HealthScore() int
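	// GetMyClusterInfo returns the cluster info for this node.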
	GetMyClusterInfo() *model.ClusterInfo
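	// GetClusterInfos returns the cluster info for every node in the cluster.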
	GetClusterInfos() ([]*model.ClusterInfo, error)
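	// SendClusterMessage broadcasts a message to the other nodes in the
	// cluster.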
	SendClusterMessage(msg *model.ClusterMessage)
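	// SendClusterMessageToNode sends a message to the single cluster node
	// identified by nodeID.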
	SendClusterMessageToNode(nodeID string, msg *model.ClusterMessage) error
	NotifyMsg(buf []byte)
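	// GetClusterStats returns runtime statistics for each node in the
	// cluster.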
	GetClusterStats(rctx request.CTX) ([]*model.ClusterStats, *model.AppError)
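	// GetLogs returns a page of log entries collected from the cluster.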
	GetLogs(rctx request.CTX, page, perPage int) ([]string, *model.AppError)
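	// QueryLogs returns a page of log entries for each node in the cluster,
	// keyed by node.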
	QueryLogs(rctx request.CTX, page, perPage int) (map[string][]string, *model.AppError)
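	// GenerateSupportPacket collects Support Packet file data from the nodes
	// in the cluster, returned per node.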
	GenerateSupportPacket(rctx request.CTX, options *model.SupportPacketOptions) (map[string][]model.FileData, error)
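	// GetPluginStatuses returns the plugin statuses aggregated from all
	// nodes in the cluster.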
	GetPluginStatuses() (model.PluginStatuses, *model.AppError)
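	// ConfigChanged informs the cluster of a configuration change; when
	// sendToOtherServer is true, the new configuration is also sent to the
	// other nodes.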
	ConfigChanged(previousConfig *model.Config, newConfig *model.Config, sendToOtherServer bool) *model.AppError
	// WebConnCountForUser returns the number of active webconn connections
	// for a given userID.
	WebConnCountForUser(userID string) (int, *model.AppError)
	// GetWSQueues returns the necessary websocket queues from a cluster for a given
	// connectionID and sequence number.
	GetWSQueues(userID, connectionID string, seqNum int64) (map[string]*model.WSQueues, error)
}