mirror of https://github.com/mattermost/mattermost.git (synced 2026-02-03 20:40:00 -05:00)
* Index all public channels when a user joins a team
* Precompute team members for indexChannelsForTeam
* Refactor RequestContextWithMaster to store package

  This way, we can import it from both the sqlstore and the searchlayer packages. The alternative is duplicating the code in those two packages, but that will *not* work: the context package expects custom types for the keys stored in it, so that different packages never clash with each other when registering a new key. See the docs for the WithValue function: https://pkg.go.dev/context#WithValue

  If we duplicate the storeContextKey type in both the sqlstore and searchlayer packages, although the two types *look* the same, they are not, and HasMaster will fail to get the value of the storeContextKey(useMaster) key if it comes from the other package.
* Use master in call to GetTeamMembersForChannel

  In GetTeamMembersForChannel, use the DB from the newly passed context, which will be the receiving context everywhere except in the call done from indexChannelsForTeam, to avoid the read-after-write issue when saving a team member.
* Fix GetPublicChannelsForTeam paging

  We were using the page and perPage arguments as-is in the call to GetPublicChannelsForTeam, but that function expects an offset and a limit as understood by SQL. Although perPage and limit are interchangeable, offset is not equal to page, but to page * perPage.
* Add a synchronous bulk indexer for Opensearch
* Implement Opensearch's SyncBulkIndexChannels
* Add a synchronous bulk indexer for Elasticsearch
* Implement Elasticsearch's SyncBulkIndexChannels
* Test SyncBulkIndexChannels
* make mocks
* Bulk index channels on indexChannelsForTeam
* Handle error from SyncBulkIndexChannels
* Fix style
* Revert indexChannelWithTeamMembers refactor
* Remove defensive code on sync bulk processor
* Revert "Add a synchronous bulk indexer for Opensearch"

  This reverts commit bfe4671d96.
* Revert "Add a synchronous bulk indexer for Elasticsearch"

  This reverts commit 6643ae3f30.
* Refactor bulk indexers with a common interface
* Test all the different implementations

  Assisted by Claude
* Remove debug statements
* Refactor common code into _stop
* Rename getUserIDsFor{,Private}Channel
* Wrap error
* Make perPage a const
* Fix typos
* Call GetTeamsForUser only if needed
* Differentiate errors for sync/async processors

---------

Co-authored-by: Ibrahim Serdar Acikgoz <serdaracikgoz86@gmail.com>
Co-authored-by: Mattermost Build <build@mattermost.com>
43 lines
1 KiB
Go
// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.

package utils

// Pager fetches all items from a paginated API.
// Pager is a generic function that fetches and aggregates paginated data.
// It takes a fetch function and a perPage parameter as arguments.
//
// The fetch function is responsible for retrieving a slice of items of type T
// for a given page number. It returns the fetched items and an error, if any.
// Typically, a developer will want to use a closure to create a fetch function.
//
// The perPage parameter specifies the number of items to fetch per page.
//
// Example usage:
//
//	items, err := Pager(fetchFunc, 10)
//	if err != nil {
//		// handle error
//	}
//	// process items
func Pager[T any](fetch func(page int) ([]T, error), perPage int) ([]T, error) {
	var list []T
	var page int

	for {
		fetched, err := fetch(page)
		if err != nil {
			return list, err
		}

		list = append(list, fetched...)

		if len(fetched) < perPage {
			break
		}

		page++
	}

	return list, nil
}