Increase HA architecture tested load in downstream documentation

Closes #47195

Signed-off-by: Ryan Emerson <remerson@ibm.com>
Ryan Emerson 2026-03-17 04:16:45 +00:00 committed by GitHub
parent 607096fd4e
commit eea43029e2
GPG key ID: B5690EEEBB952194
2 changed files with 8 additions and 6 deletions


@@ -55,7 +55,7 @@ The following setup was used to retrieve the settings above to run tests of about
* Database Amazon Aurora PostgreSQL in a multi-AZ setup.
* Default user password hashing with Argon2 and 5 hash iterations and minimum memory size 7 MiB https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id[as recommended by OWASP] (which is the default).
* Client credential grants do not use refresh tokens (which is the default).
- * Database seeded with 20,000 users and 20,000 clients.
+ * Database seeded with 1,000,000 users and 20,000 clients.
* Infinispan local caches at default of 10,000 entries, so not all clients and users fit into the cache, and some requests will need to fetch the data from the database.
* All authentication sessions in distributed caches as per default, with two owners per entry, allowing one Pod to fail without losing data.
* All user and client sessions are stored in the database and are not cached in-memory as this was tested in a multi-cluster setup.
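The cache sizing above implies that only a fraction of the seeded entities can ever be resident in the local caches at once. A minimal back-of-envelope sketch, using only the numbers stated in the list above (cache sizes, seed counts, and the coverage function are illustrative, not part of Keycloak's API):

```python
# Rough upper bound on Infinispan local-cache coverage for the documented
# test setup: 1,000,000 seeded users, 20,000 seeded clients, and local
# caches capped at the default of 10,000 entries each.

CACHE_ENTRIES = 10_000
SEEDED_USERS = 1_000_000
SEEDED_CLIENTS = 20_000

def max_cache_coverage(cache_size: int, population: int) -> float:
    """Upper bound on the fraction of a population that can be cached."""
    return min(cache_size / population, 1.0)

user_coverage = max_cache_coverage(CACHE_ENTRIES, SEEDED_USERS)
client_coverage = max_cache_coverage(CACHE_ENTRIES, SEEDED_CLIENTS)

# At most 1% of users and 50% of clients fit in the cache, so a share of
# requests must fall through to the database, as the list above notes.
print(f"users cached:   at most {user_coverage:.0%}")
print(f"clients cached: at most {client_coverage:.0%}")
```

This is why the seeded population deliberately exceeds the cache capacity: the test then exercises the database-fetch path rather than serving everything from memory.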


@@ -7,10 +7,12 @@
We regularly test {project_name} with the following load:
<@profile.ifProduct>
* 100,000 users
</@profile.ifProduct>
<@profile.ifCommunity>
* 1,000,000 users
</@profile.ifCommunity>
* 300 requests per second
[IMPORTANT]
====
It is imperative that your production deployments are integrated with an observability stack, so that issues are identified early and are easier to troubleshoot when they do arise. Additional demand on {project_name} makes isolating issues harder, so this becomes increasingly important as the total number of users and requests per second grows.
====