Self-hosted on AWS using the eu-west-1 AMI.
Version 8

We are finding that Redash repeatedly crashes throughout the day, and the EC2 instance needs to be rebooted before it will work again. The GUI freezes and then returns a 504 error.
From the metrics we have, it looks like memory is maxing out.
It looks like it might be related to this issue:

I am thinking our problem might be that people are running large queries, Redash doesn’t limit them in any way, memory maxes out, and then it crashes. We have some large tables (several at 200+ GB), but most of the queries run throughout the day are small. If we resized the instance, it would have to be pretty massive just to handle the odd time someone runs a big query without a LIMIT clause.

If it is related to the above GitHub issue, is there any kind of workaround? It’s hard to believe people have just lived with it repeatedly crashing for 5+ years, so presumably something else is going on in our situation.

What you’re describing sounds similar to an issue I’m working on with V10 performance (here). The first thing I’d recommend is increasing your instance provision, at least temporarily, so you can debug without absurd delays. If you’re on a t2.small, go up to a t2.medium, for example.

Next, I have a few clarifying questions for you:

  1. How large are your query results? Redash is built to visualise query results, not to make large data extracts. The GitHub issue you referenced is about improving performance when individual query results are sizable. Until we can address this, you should expect degraded front-end performance if your query results exceed around 20 MB. However, this alone will not cause a 504 timeout.

  2. You mentioned the front-end “freezes”. Under what circumstances? When you execute a large query?

  3. How much RAM are your containers using? You can find this by SSHing into the EC2 instance and running sudo docker stats. You can also run top to see whether the system is hitting kswapd0 frequently.

  4. How many types of data sources are you using? If you only need a few data sources, you can reduce Redash’s memory footprint by disabling the ones you don’t need.
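
For example, if PostgreSQL is the only source you need, my understanding is you can restrict which query runners Redash loads with an environment variable in your env file. Treat this as a sketch and double-check the variable and module names against your Redash version:

    # /opt/redash/env -- load only the query runners you actually use
    # (sketch: verify the module names for your Redash version)
    REDASH_ENABLED_QUERY_RUNNERS=redash.query_runner.pg

The containers need to be recreated (e.g. docker-compose up -d) for a change like this to take effect.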


Hi Jesse, thanks for your response.

I ran some experiments and think I can clarify a bit better what’s going on.

We have an instance with 128 GB of memory (r5.4xlarge).
It is pointed at two PostgreSQL data sources, which contain several tables that are 200+ GB in size.
Typical queries are fairly small, as most people use the tool appropriately (not for making large data extracts). Outside of these crashing queries, the maximum memory usage I’ve seen is around 7 GB.
sudo docker stats shows typical memory usage is quite low (the highest is redash_server_1 at 1.7 GB; the rest are between 10 and 400 MB).

The problem occurs when people aren’t using the tool properly. They uncheck the LIMIT 1000 tickbox and don’t set their own LIMIT clause, so they end up running something like “SELECT * FROM 200gb_table” with no limit at all. We can see the query start to run, memory usage climbs until it hits the maximum, then Redash crashes and the EC2 instance needs rebooting before anyone can use it again.
The behaviour when crashing is:

  • the query hangs
  • the query then reports that it cannot connect
  • shortly afterwards the page returns a 504 error on refresh, until the instance is rebooted

The screenshot below shows the memory metrics during one of these crashes (we resized to 128 GB after this).

We could keep resizing the EC2 instance until it can handle every full table read, but as you say that’s not what Redash is for. It would be very expensive and only ever needed for the odd occasion someone runs a massive table extract without thinking. The problem, as I see it, is that it’s too easy for a single user to bring the whole instance down and make it unusable for everyone else.

What we’d like is a way to handle these bad queries better. We tried a timeout, but with a simple query on a large table the crash can happen quite quickly. Ideally, Redash could run out of memory on a query, stay alive, and just return a failure for that query without taking the instance down for everyone else. Maybe we need to host it on two load-balanced EC2 instances, so that if a large query kills one instance the other can continue serving queries without users being impacted?
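
Another idea we’re toying with: cap the memory of the query worker container in the docker-compose file, so a runaway query gets OOM-killed inside that one container (which restart: always should then bring back up) instead of exhausting the whole host. A rough, untested sketch of what that might look like on our ad-hoc worker service, with an arbitrary 8g figure:

    # docker-compose.yml fragment (v2 format) -- untested idea, not our current config
    adhoc_worker:
      <<: *redash-service
      command: worker
      mem_limit: 8g          # illustrative cap, not a tuned value
      environment:
        QUEUES: "queries"
        WORKERS_COUNT: 2

No idea yet whether that actually fails any more gracefully than what we see now.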

Forgot to mention: we are on v10.

This is an amazing write-up. Good job.

Your memory usage looks dead-on to me (1.7 GB on the main server container, 10–400 MB for the other services).

It’s normal for a worker process to crash if it pulls too much data. But it’s not normal for this to take down the entire worker container or the entire EC2 instance. Ideally, the worker will just fail gracefully, restart itself, and everything will stabilise.
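
If you want to confirm whether the worker is restarting cleanly, tailing its logs is a quick way to check. Something like the following, where the container name is just the usual default and you should substitute whatever sudo docker stats lists for your ad-hoc worker:

    sudo docker logs -f --tail 100 redash_adhoc_worker_1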

Can you share your docker-compose file? I wonder how many worker services you have configured.

Thanks!

docker-compose file:

version: "2"
x-redash-service: &redash-service
  image: redash/redash:10.0.0-beta.b49597
  depends_on:
    - postgres
    - redis
  env_file: /opt/redash/env
  restart: always
services:
  server:
    <<: *redash-service
    command: server
    ports:
      - "6000:5000"
    environment:
      REDASH_WEB_WORKERS: 4
  scheduler:
    <<: *redash-service
    command: scheduler
  worker:
    <<: *redash-service
    command: worker
    environment:
      QUEUES: "periodic emails default"
      WORKERS_COUNT: 1  
  scheduled_worker:
    <<: *redash-service
    command: worker
    environment:
      QUEUES: "scheduled_queries,schemas"
      WORKERS_COUNT: 1
  adhoc_worker:
    <<: *redash-service
    command: worker
    environment:
      QUEUES: "queries"
      WORKERS_COUNT: 2
  redis:
    image: redis:5.0-alpine
    restart: always
  postgres:
    image: postgres:9.6-alpine
    env_file: /opt/redash/env
    volumes:
      - /opt/redash/postgres-data:/var/lib/postgresql/data
    restart: always
  nginx:
    image: redash/nginx:latest
    ports:
      - "80:80"
    depends_on:
      - server
    links:
      - server:redash
    restart: always

So should I just increase the worker count to 2 for all of the WORKERS_COUNT settings? (I just tried this and tested it, without success.)

It shouldn’t make a difference, but does anything change if you unify the formatting of your QUEUES values? Some of them are space-separated while others are comma-separated.
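
For example, the default worker’s line would become:

    QUEUES: "periodic,emails,default"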

No improvement unfortunately

Looking at docker stats again, I only see one worker even though I increased the counts to 2. Is that right? Did I maybe do something wrong?

    NAME                        CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
    redash_scheduler_1          0.05%               186.9MiB / 124.4GiB   0.15%               51.3kB / 64.9kB     0B / 147kB          2
    redash_nginx_1              0.00%               3.555MiB / 124.4GiB   0.00%               345kB / 233kB       0B / 0B             2
    redash_adhoc_worker_1       0.02%               559.9MiB / 124.4GiB   0.44%               5.35kB / 5.85kB     0B / 442kB          7
    redash_server_1             0.02%               730.1MiB / 124.4GiB   0.57%               564kB / 496kB       8.19kB / 590kB      9
    redash_worker_1             0.02%               561.4MiB / 124.4GiB   0.44%               8.32MB / 744kB      0B / 442kB          7
    redash_scheduled_worker_1   0.02%               560.9MiB / 124.4GiB   0.44%               62.3kB / 66.3kB     0B / 442kB          7
    redash_postgres_1           0.00%               14.85MiB / 124.4GiB   0.01%               501kB / 8.5MB       946kB / 676kB       10
    redash_redis_1              0.05%               5.719MiB / 124.4GiB   0.00%               631kB / 370kB       0B / 0B             4