We are facing a disk-full issue with our Redash Docker setup.
How can we clean the cache or other unnecessary data? We can't keep extending the disk every time the Redash data grows.
I also found the exact same issue reported here already.
Redash should automatically clear out your old query results with a periodic job. Is this job running? And if not, why not? In general, the query result cache shouldn’t continue growing unless your usage is also increasing. More likely, this job is not running. Here’s a description of the job from the source code:
"""
Job to cleanup unused query results -- such that no query links to them anymore, and older than
settings.QUERY_RESULTS_CLEANUP_MAX_AGE (a week by default, so it's less likely to be open in someone's browser and be used).
Each time the job deletes only settings.QUERY_RESULTS_CLEANUP_COUNT (100 by default) query results so it won't choke
the database in case of many such results.
"""
If the job is not running, it’s either a configuration problem (i.e. something you need to fix) or it’s a bug (something the community needs to fix).
Since I can’t reproduce this behavior on V10.1 I assume it’s either a bug that we squashed (intentionally or not) in the three years since V8 was released, or you have misconfigured your instance.
I’d advise anyone to update from V8 to V10.1 since we’re not supporting V8 anymore.
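If the job turns out to be disabled or mis-tuned, the relevant settings are read from environment variables on the Redash containers. A sketch of a Docker Compose fragment, assuming the values shown (which are the defaults described in the docstring above):

```yaml
# docker-compose.yml (fragment) -- add to the environment of the
# Redash server/scheduler/worker services. Variable names follow
# Redash's settings module.
environment:
  QUERY_RESULTS_CLEANUP_ENABLED: "true"
  QUERY_RESULTS_CLEANUP_MAX_AGE: "7"    # days; one week by default
  QUERY_RESULTS_CLEANUP_COUNT: "100"    # rows deleted per run
```

Restart the affected services after changing these so the scheduler picks up the new values.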
Thanks @jesse for the reply.
We don't have many queries on our side.
We currently use only around 15-20 queries, so I don't think they account for 8 GB of query results.
Anyway, I'll check the queries again and take a look at the job process you mentioned.
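For checking which queries are responsible, something like the following may help. A sketch assuming the Redash schema (`query_results.data` stored as text and `queries.latest_query_data_id`; column types vary between Redash versions, so verify against yours):

```sql
-- Largest cached results still referenced by a query, biggest first:
SELECT q.id, q.name,
       pg_size_pretty(octet_length(qr.data)::bigint) AS result_size
FROM queries q
JOIN query_results qr ON qr.id = q.latest_query_data_id
ORDER BY octet_length(qr.data) DESC
LIMIT 20;
```

If none of the 15-20 queries show large results here, the 8 GB is almost certainly orphaned rows that the cleanup job should have removed.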