Issue Summary

I have deployed Redash using Helm (contrib-helm-chart). Are there suggested CPU and memory values for each pod? The goal is to support 100 concurrent users.

For example, the provided Helm chart suggests CPU: 100m, memory: 500Mi for the “genericWorker” pod. With values around that level, the worker took a very long time to process queued jobs: “genericWorker” needed ~48 hours to work through 400,000 jobs before the events table was populated.

I am looking for other recommended values across all pods (server, adhocWorker, scheduledWorker, hookInstallJob, hookUpgradeJob, genericWorker, scheduler) that have worked well for similar helm deployments. Thank you!
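
For context, I am setting these through the chart's values file. A rough sketch of the overrides I am experimenting with is below; the keys mirror the pod names above (assuming each component exposes a standard Kubernetes `resources` block), and the numbers are placeholders to tune, not recommendations:

```yaml
# Hypothetical values.yaml overrides -- numbers are placeholders, not recommendations.
# Assumes each component exposes a standard Kubernetes resources block.
server:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi

genericWorker:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi

# The same pattern would apply to adhocWorker, scheduledWorker, scheduler,
# hookInstallJob, and hookUpgradeJob.
```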

Technical details:

  • Redash Version: 10.1.0
  • Browser/OS: chrome v97.0.4692.71 / macos v11.6.2
  • How did you install Redash: contrib-helm-chart

As a follow-up: when using the contrib-helm-chart deployment, is there a suggested way to determine the optimal WORKERS_COUNT for adhocWorker, scheduledWorker, and genericWorker?

I would like to make sure there are enough workers so the queues do not back up.
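
For reference, this is roughly how I am setting it today; a minimal sketch, assuming WORKERS_COUNT is exposed through each worker's `env` map in values.yaml (the counts are placeholders):

```yaml
# Hypothetical values.yaml sketch -- assumes WORKERS_COUNT is passed via each
# worker's env map; the counts below are placeholders, not recommendations.
adhocWorker:
  env:
    WORKERS_COUNT: 4      # RQ worker processes per ad hoc worker pod
scheduledWorker:
  env:
    WORKERS_COUNT: 2
genericWorker:
  env:
    WORKERS_COUNT: 2
```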

There are no suggested values because every workload is different. We’d recommend trial and error, since these values are easy to adjust.


Does the Helm chart have built-in support for autoscaling pods? I am interested in using horizontal pod autoscaling on the worker pods themselves, while manually setting a fixed WORKERS_COUNT per pod.
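
For context, what I had in mind is pointing a standalone Kubernetes HorizontalPodAutoscaler at a worker Deployment, independent of the chart. A sketch is below; the Deployment name `redash-adhocworker` is a guess based on the chart's naming, so check `kubectl get deployments` for the real name:

```yaml
# Sketch of a standalone HPA targeting the ad hoc worker Deployment.
# "redash-adhocworker" is an assumed name -- verify with `kubectl get deployments`.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: redash-adhocworker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: redash-adhocworker
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

CPU utilization is only a rough proxy for queue depth, so a custom-metrics target based on queue length might be a better fit, but that would need extra tooling.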

You should ask the helm chart maintainers about this. The chart isn’t maintained by the core team, so we don’t have any information about it.