Hello. I have come here after reading this.

I have been using a single EC2 instance for Redash (redash1), and I am trying to create an additional one (redash2).
After reading the discussion above, I made the /opt/redash/env file on redash2 exactly the same as the one on redash1.
However, the dashboards, queries, and data sources saved on redash1 don’t appear on redash2. (They don’t seem to sync.)
Please help me out. Thanks!

Redash stores state in its metadata database (postgres). Does your redash2 connect to the postgres service in the redash1 EC2 instance’s network?

If it doesn’t, that would explain your issue.
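If you want to check, something like this on each instance will show which database it is configured to use (assuming the standard /opt/redash install path, and that the URL is set in the env file rather than directly in docker-compose.yml):

```
# Show which metadata database this instance points at
grep REDASH_DATABASE_URL /opt/redash/env
```

If the two instances print different URLs, they are using separate databases.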

Hello, thanks for your response.
I don’t have much experience with programming, especially with Docker, so I need some help…
What do I need to configure for redash2 to connect to redash1’s postgres service?
Doesn’t making the env files the same and restarting Docker do the job?

Slow down a moment. Why are you doing this? That’s not a normal situation. I wonder if a better solution is available, given that you are unfamiliar with this setup.

I don’t think you really need two complete instances of Redash pointed at the same metadata database. In fact, doing this can produce strange bugs around database locking.


To actually answer your question:

I understand. Going into this, it will help to understand a little about docker-compose, particularly how its networking behaves. I’ll give you a tiny summary here, but you can always check out the docs for something deeper.

In a typical network, each computer receives an IP address from a DHCP server. Sometimes these addresses change. Other times they are fixed or static. For common resources like a database or an email server, the DHCP server can be configured to always give the same IP address to that machine. So a database could always be found at 172.20.16.20 for example.

When you create services in docker-compose, each service receives its own IP address, and it would be a lot of hassle to configure a DHCP server to grant each one the same address every time it starts. So what if the services need to speak to one another?

For this, Docker provides name resolution on its networks. Whenever one service speaks to another, the configuration doesn’t need to include an IP address. It can use the name of the service instead.
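To make this concrete, here is a minimal sketch of two services on one compose network (the names and images are illustrative, not Redash’s actual file):

```yaml
# Minimal illustration of service-name networking in docker-compose.
# "app" reaches "db" by the hostname "db" -- no IP address required.
version: "2"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example    # illustrative only
  app:
    image: alpine:3.8
    # "db" resolves to the postgres container's current IP via Docker's DNS
    command: ping db
```

Even if the db container restarts with a new IP address, ping db still finds it.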

Looking at Redash’s default configuration, you can see that it uses service names instead of IP addresses. For example, REDASH_REDIS_URL defaults to redis://redis:6379/0. That’s because the name of the service is redis. So even if the service restarts and gets a new IP address every time, the config file doesn’t require an update.
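For instance, the redis and database URLs in the env file look roughly like this by default (exact values can differ between Redash versions):

```
# Hostnames here ("redis", "postgres") are docker-compose service names, not IPs
REDASH_REDIS_URL=redis://redis:6379/0
REDASH_DATABASE_URL=postgresql://postgres@postgres/postgres
```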

So, to your question: does making the env files the same and restarting Docker do the job? It does not, unless you update both of your docker-compose setups to use externally accessible URLs. Right now you have two completely independent instances of Redash, and each of them is talking to its own copy of postgres and redis. If you want them to use the same postgres, you need to configure your AWS settings so that postgres always has the same IP address, and then configure both instances to use that one.
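As a sketch, if your redash1 postgres lived at a fixed private address, both env files would point there instead of at the local service name (the address, port, and credentials below are hypothetical placeholders):

```
# /opt/redash/env on BOTH instances -- values here are hypothetical
REDASH_DATABASE_URL=postgresql://redash_user:some_password@172.31.5.20:5432/redash
```

You would also need the postgres container’s port 5432 published on redash1’s host, and the EC2 security group opened so redash2 can reach it.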

As I wrote above, this is not a good idea. Proceed at your own risk.

Thank you very much Jesse! I wasn’t expecting such a detailed response. Your explanation built on the elementary knowledge I had about Docker networking, and now I get the overall picture.

The initial problem I encountered was this: my m5.2xlarge Redash EC2 instance went down from time to time. The Docker logs showed a worker timeout, and the server failed to reboot. CPU usage jumped to 12% at the time (it normally stayed around 6%).

What might be a possible reason for this, and how can I fix it?

Glad to help!

I have some questions:

What does “went down” mean? Could you access it from a browser? Could you not run queries? Was the interface buggy?

I’m interested to see these log messages. If only a worker timed out, that wouldn’t normally take down the whole server.
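If you’re not sure where to pull them from, something like this should show the recent output (the service name assumes the standard Redash docker-compose setup; repeat for the other services named in your docker-compose.yml):

```
# Tail recent logs from the main Redash container
docker-compose logs --tail=200 server
```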

You can run docker stats to see which running services are consuming CPU. I have a feeling the problem here was the nginx service, rather than the worker.
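For example, a one-shot snapshot:

```
# Print current CPU/memory usage per container once, instead of streaming
docker stats --no-stream
```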

[edit] I wonder if you are experiencing the same issue being discussed / debugged here: Fault finding guide

The server became unhealthy and crashed, resulting in a 500 internal server error when I tried to access it from the browser.

This is the log message at the time the server crashed.