I have deployed Redash with Helm using the contrib-helm-chart. Are there any suggested CPU and memory values for each pod? The goal is to support 100 concurrent users.
For example, the provided Helm chart defaults the “genericworker” pod to CPU: 100m and memory: 500Mi. With values close to those defaults, the worker took a very long time to work through queued jobs: “genericworker” needed ~48 hours to process 400,000 jobs before the events table was populated.
I am looking for recommended values across all pods (server, adhocWorker, scheduledWorker, hookInstallJob, hookUpgradeJob, genericWorker, scheduler) that have worked well for similar Helm deployments. Thank you!
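For reference, this is roughly what I am experimenting with at the moment. It is only a sketch of a values override, assuming the chart exposes a standard `resources` block per component; the numbers are my own guesses, not recommendations:

```yaml
# Sketch of a values override I'm trying -- the numbers below are guesses,
# not recommendations, and assume each component accepts a standard
# Kubernetes `resources` block.
server:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi

adhocWorker:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi

scheduledWorker:
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: 500m
      memory: 1Gi

genericWorker:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```

I apply it with something like `helm upgrade <release> <chart> -f values-resources.yaml` (release and chart names substituted for my deployment).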
As a follow-up: when using the contrib-helm-chart deployment, is there a suggested way to determine the optimal WORKERS_COUNT for adhocWorker, scheduledWorker, and genericWorker?
I would like to make sure there are enough workers to keep the queues from backing up; a sketch of what I had in mind follows below.
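To frame the question, this is the kind of override I was imagining. It assumes the chart exposes a `workersCount` value per worker component (which I believe maps to the WORKERS_COUNT environment variable), and the numbers are placeholders rather than anything I would recommend:

```yaml
# Placeholder values only -- assuming the chart exposes `workersCount` per
# worker component and that it maps to the WORKERS_COUNT env var.
# My rough heuristic: if queue depth keeps growing while workers are busy,
# raise workersCount (or the component's replica count) until queues drain.
adhocWorker:
  workersCount: 4
scheduledWorker:
  workersCount: 2
genericWorker:
  workersCount: 2
```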
Does the Helm chart have built-in support for autoscaling pods? I am interested in using horizontal pod autoscaling on the worker pods themselves, while manually setting a fixed WORKERS_COUNT per pod.
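If the chart has no built-in option, I was thinking of adding a plain Kubernetes HPA alongside it, something like the sketch below. The Deployment name is a placeholder; I would substitute whatever name the chart actually generates for my release:

```yaml
# Sketch of a standard autoscaling/v2 HPA targeting the ad-hoc worker
# Deployment. The target Deployment name is a placeholder, not the name
# the chart necessarily produces.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: redash-adhocworker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-release-redash-adhocworker   # placeholder name
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The idea would be to let the HPA scale the number of worker pods on CPU while WORKERS_COUNT stays fixed per pod, but I would appreciate confirmation that this plays nicely with the chart.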