Issue Summary

I am deploying Redash 9 beta on Kubernetes, and the worker pod keeps failing. The log says: ValueError: There's already an active RQ scheduler

Technical details:

  • Redash Version: 9.0.0-beta.b42121
  • Browser/OS: Chrome/Mac
  • How did you install Redash: Kubernetes with Docker image

Can you share details on how you set up your K8s deployments? I’m mostly interested in the entrypoint command you’re using for each deployment.

I’m using ecs-deploy to deploy my Redash 9 beta to ECS, and I have the same issue because of the blue-green deployment: a new scheduler is spun up while the old one is still running, and I receive exactly the same error. I have to manually kill the old scheduler task before the new one can register.

I don’t have this problem with the other workers, since each new worker registers under its own name.

This seems to be fixed in rq-scheduler v0.10.0.
See this issue

I encountered this error as well. After some investigation, I realized I had been removing the Docker container for the Redash scheduler with the ‘-f’ (force) option during deploys. Once I changed this to stop the container first and then remove it, and ran ‘flushall’ in Redis, the error went away.
Actually, the error is raised by this code in rq_scheduler/scheduler.py:
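For anyone hitting the same state: ‘flushall’ wipes every key in Redis, including queues and job results. A narrower cleanup is to delete only the scheduler’s registration key, ‘rq:scheduler’. A sketch, assuming a redis-py-style client object; the function name is mine, not part of rq-scheduler:

```python
# Narrower cleanup than 'flushall': delete only the stale scheduler
# registration key, leaving queues and job results intact. The conn
# object is assumed to expose a redis-py-style exists/delete API;
# clear_stale_scheduler is an illustrative helper, not rq-scheduler API.
SCHEDULER_KEY = "rq:scheduler"

def clear_stale_scheduler(conn):
    """Remove a leftover scheduler registration so a new scheduler
    can register successfully. Returns True if a key was deleted."""
    if conn.exists(SCHEDULER_KEY):
        conn.delete(SCHEDULER_KEY)
        return True
    return False
```

With a real client this would be something like `clear_stale_scheduler(redis.Redis(host=..., port=...))`, and it should only be run after confirming the old scheduler container is truly gone.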

 scheduler_key = 'rq:scheduler'
 def register_birth(self):
     self.log.info('Registering birth')
     if self.connection.exists(self.scheduler_key) and \
             not self.connection.hexists(self.scheduler_key, 'death'):
         raise ValueError("There's already an active RQ scheduler")
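To illustrate why a force-removed container triggers the error: the old scheduler never records its ‘death’ field, so the guard above fires on the next start. A small self-contained sketch of that check, using an in-memory stand-in for the Redis connection (FakeConnection and this register_birth are hypothetical helpers, not rq-scheduler’s real implementation):

```python
# In-memory stand-in for the Redis connection, exposing just the
# exists/hexists/hset calls the guard above relies on.
class FakeConnection:
    def __init__(self):
        self.hashes = {}

    def exists(self, key):
        return key in self.hashes

    def hexists(self, key, field):
        return field in self.hashes.get(key, {})

    def hset(self, key, field, value):
        self.hashes.setdefault(key, {})[field] = value

def register_birth(conn, key="rq:scheduler"):
    # Same check as scheduler.py: refuse to start if the key exists
    # and no 'death' field was recorded by the previous scheduler.
    if conn.exists(key) and not conn.hexists(key, "death"):
        raise ValueError("There's already an active RQ scheduler")
    conn.hset(key, "birth", "now")

conn = FakeConnection()
register_birth(conn)      # first scheduler registers fine

try:
    register_birth(conn)  # second scheduler hits the guard
except ValueError as exc:
    print(exc)            # There's already an active RQ scheduler

# Once the old scheduler records its death, a new one can register again.
conn.hset("rq:scheduler", "death", "now")
register_birth(conn)
```

Force-killing the container is equivalent to the second call: the key is still there, ‘death’ was never written, and the new scheduler raises.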

We deployed Redash through ECS Fargate, with a standalone task definition for each container. For the scheduler task, the maximum and minimum task counts are both 1, which effectively disables auto scaling. You probably want to adjust your scheduler’s scaling policy the same way.