Issue Summary

I deployed Redash with Helm in my Kubernetes cluster. I tested version 8.0.2.b37747 and had no problems. Then I wanted to deploy version 9.0.0-beta.b42121, but now I can't connect to any data source. I see the following error in the Redash logs:
[2020-09-09 08:55:03,296][PID:10][ERROR][redash.app] Exception on /api/data_sources/1/test [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper
    resp = resource(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/flask_login/utils.py", line 261, in decorated_view
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 89, in view
    return self.dispatch_request(*args, **kwargs)
  File "/app/redash/handlers/base.py", line 33, in dispatch_request
    return super(BaseResource, self).dispatch_request(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request
    resp = meth(*args, **kwargs)
  File "/app/redash/permissions.py", line 71, in decorated
    return fn(*args, **kwargs)
  File "/app/redash/handlers/data_sources.py", line 262, in post
    job.refresh()
  File "/usr/local/lib/python3.7/site-packages/rq/job.py", line 461, in refresh
    raise NoSuchJobError('No such job: {0}'.format(self.key))
rq.exceptions.NoSuchJobError: No such job: b'rq:job:45fdd074-125d-46a1-878c-e89394288b6a'
[2020-09-09 08:55:03,302][PID:10][INFO][metrics] method=POST path=/api/data_sources/1/test endpoint=datasourcetestresource status=500 content_type=application/json content_length=36 duration=90467.64 query_count=4 query_duration=11.65

Technical details:

  • Redash Version: 9.0.0-beta.b42121
  • Browser/OS: Chrome and Firefox on Windows
  • How did you install Redash: Docker images deployed with Helm on Kubernetes

I have the same problem. Please let me know if anyone has found a solution.

Are you certain your Redis is up and running? NoSuchJobError can happen in that case.
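
If you want to double-check, something like the following usually tells you quickly from inside the cluster (the namespace, pod, and container names are only placeholders for your own setup):

# Ping Redis from its container; a healthy instance answers PONG
# (namespace, pod and container names are placeholders).
kubectl exec -n <namespace> <redash-pod> -c redis -- redis-cli ping

# Optionally, list RQ's keys to see whether queues/jobs are being written at all:
kubectl exec -n <namespace> <redash-pod> -c redis -- redis-cli keys 'rq:*'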

Thank you very much! It was a problem with my Redis; I restarted the container and it works now!

I have the same error but my Redis seems to be running ok:

$ kubectl logs redash-b-5d89994b9b-6rkmz -n production -c redis
1:C 28 Jan 2021 00:32:54.126 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 28 Jan 2021 00:32:54.136 # Redis version=5.0.10, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 28 Jan 2021 00:32:54.136 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 28 Jan 2021 00:32:54.138 * Running mode=standalone, port=6379.
1:M 28 Jan 2021 00:32:54.144 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 28 Jan 2021 00:32:54.144 # Server initialized
1:M 28 Jan 2021 00:32:54.145 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 28 Jan 2021 00:32:54.145 * Ready to accept connections
1:M 28 Jan 2021 00:37:55.026 * 100 changes in 300 seconds. Saving…
1:M 28 Jan 2021 00:37:55.026 * Background saving started by pid 13
13:C 28 Jan 2021 00:37:55.028 * DB saved on disk
13:C 28 Jan 2021 00:37:55.029 * RDB: 0 MB of memory used by copy-on-write
1:M 28 Jan 2021 00:37:55.126 * Background saving terminated with success
1:M 28 Jan 2021 00:42:56.034 * 100 changes in 300 seconds. Saving…
1:M 28 Jan 2021 00:42:56.034 * Background saving started by pid 14

I installed Redash on a Kubernetes cluster using my own deployment files. I'm not using the Postgres image, since I already have an AWS RDS database, and I have connected Redash to that server.

The query returns its results just fine; it's just that we still get the error above whenever we test the data source connection.


Do you see traffic between your Redash instance and the Redis instance? Are you sure that all your Redash hosts (workers, server, scheduler etc.) are aimed at the same Redis instance?
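
A quick way to verify that (just a sketch; the app=redash label selector is an assumption about your deployment) is to print the Redis URL every pod actually sees:

# Show which Redis URL each Redash pod is configured with
# (the label selector and namespace are assumptions about your setup).
for p in $(kubectl get pods -n production -l app=redash -o name); do
  echo "== $p"
  kubectl exec -n production "$p" -- printenv REDASH_REDIS_URL
done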

I have defined the variable REDASH_REDIS_URL: "redis://127.0.0.1:6379/0", and it is defined for all the instances.

I am using the 9.0.0 beta, which no longer uses Celery for queues, so I had to make the following changes to my containers (a rough Kubernetes version of the new worker service is sketched after the quoted notes):

Upgrading

Typically, if you are running your own instance of Redash and wish to upgrade, you would simply modify the Docker tag in your docker-compose.yml file. Since RQ has replaced Celery in this version, there are a couple of extra modifications that need to be made in your docker-compose.yml:

  1. Under services/scheduler/environment, omit QUEUES and WORKERS_COUNT (and omit environment altogether if it is empty).
  2. Under services, add a new service for general RQ jobs:
worker:
  <<: *redash-service
  command: worker
  environment:
    QUEUES: "periodic emails default"
    WORKERS_COUNT: 1
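
Since I deploy with my own Kubernetes manifests rather than docker-compose, I translated the new worker service into a separate Deployment. Roughly like this (the names, labels, and image tag are examples from my own files, so treat it as a sketch rather than an official manifest):

# Rough Kubernetes equivalent of the docker-compose "worker" service above;
# names, labels and the image tag are examples from my setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redash-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redash-worker
  template:
    metadata:
      labels:
        app: redash-worker
    spec:
      containers:
        - name: worker
          image: redash/redash:9.0.0-beta.b42121
          args: ["worker"]   # Kubernetes "args" maps to the compose "command: worker"
          env:
            - name: QUEUES
              value: "periodic emails default"
            - name: WORKERS_COUNT
              value: "1"

I also give this container the same REDASH_* environment variables (database URL, Redis URL, secrets) as the server and scheduler containers.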

I found the cause of this issue. We are running two Redash pods, and each of them contains its own Redis container. Both pods use the same Redis address (127.0.0.1 inside each pod), so some jobs end up stored in one Redis instance and queried from the other.

I have solved the issue by running only a single Redash pod for now.
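
If we ever need more than one Redash pod, my plan is to move Redis out of the Redash pods into its own Deployment and Service, so every pod talks to the same instance. Something along these lines (all names are placeholders, not from any official chart):

# One shared Redis for all Redash pods instead of a Redis sidecar in each pod.
# All names are placeholders; adjust to your own manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redash-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redash-redis
  template:
    metadata:
      labels:
        app: redash-redis
    spec:
      containers:
        - name: redis
          image: redis:5.0.10
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redash-redis
spec:
  selector:
    app: redash-redis
  ports:
    - port: 6379
      targetPort: 6379

Every Redash container (server, workers, scheduler) would then set REDASH_REDIS_URL to redis://redash-redis:6379/0 instead of 127.0.0.1, so RQ jobs are enqueued in and read from the same place.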
