Redash runs out of memory if we execute SELECT *-type queries on big tables. It is usually an accident, but it is hard to avoid as the team gets bigger. Is there any mechanism that prevents such an execution right now?
A good solution to this is used by Periscope Data: they return 5000 records by default (LIMIT 5000) unless the limit is overridden.
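For illustration, a rough sketch of what such a default limit could look like if applied by the client before sending the query (the function name and the subquery-wrapping approach are my own assumptions, not anything Redash or Periscope actually does):

```python
def apply_default_limit(sql: str, max_rows: int = 5000) -> str:
    """Wrap a query so it returns at most max_rows rows.

    Naive string-based sketch: it does not detect an existing LIMIT in the
    query, so a user-supplied LIMIT larger than max_rows would still be
    capped. A real implementation would need to parse the query first.
    """
    inner = sql.strip().rstrip(";")
    return f"SELECT * FROM ({inner}) AS limited LIMIT {max_rows}"


# The wrapped query is what gets sent to the database instead of the original:
print(apply_default_limit("SELECT * FROM events"))
# SELECT * FROM (SELECT * FROM events) AS limited LIMIT 5000
```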
Yes, Redash runs your queries as is, and therefore doesn’t limit the number of records you query for.
We do want to address this at some point, but due to the variety of data sources we support (35+), it requires some planning. You can follow this issue for updates:
But to clarify/set expectations: there are no concrete plans to work on this right now.
Thanks @arikfr for the update.
We run out of memory on the instance and the Redash UI stops working if such a query is executed. Is there any other way to avoid it?
This requires injecting a hard LIMIT into your SQL/DSL query. If we have a SQL/DSL parser that can parse the SQL into an AST, I can help create a PR to inject this hard LIMIT.
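To make the idea concrete, here is a rough sketch of AST-based LIMIT injection, assuming a parser library such as sqlglot (which Redash does not use today; this is only an illustration of the approach):

```python
import sqlglot
from sqlglot import exp


def inject_hard_limit(sql: str, max_rows: int = 5000, dialect: str = "postgres") -> str:
    """Parse the query into an AST and add a LIMIT only if one is missing."""
    tree = sqlglot.parse_one(sql, read=dialect)
    # Only plain SELECTs are handled in this sketch; other statement types
    # (DDL, vendor-specific DSLs, etc.) are passed through untouched.
    if isinstance(tree, exp.Select) and tree.args.get("limit") is None:
        tree = tree.limit(max_rows)
    return tree.sql(dialect=dialect)


print(inject_hard_limit("SELECT id, name FROM users"))
# SELECT id, name FROM users LIMIT 5000
print(inject_hard_limit("SELECT id FROM users LIMIT 10"))
# SELECT id FROM users LIMIT 10  (an existing limit is respected)
```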
Another approach: if your database supports such a setting, you can apply it up front, either with a SQL statement or by asking your DB admin to set a default max-rows parameter for you.
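For example, MySQL has a per-session sql_select_limit system variable that caps how many rows a plain SELECT can return; other databases need different mechanisms, so check your database's documentation. A minimal sketch of applying it before running a user query (the helper name and the generic DB-API cursor are assumptions):

```python
# MySQL's sql_select_limit caps the number of rows a plain SELECT returns
# in the current session. Other engines have no direct equivalent, so this
# is only one example of a server/session-side cap.
ROW_CAP_SQL = "SET SESSION sql_select_limit = 5000"


def run_with_row_cap(cursor, user_sql):
    """Apply the session-level cap, then run the user's query."""
    cursor.execute(ROW_CAP_SQL)
    cursor.execute(user_sql)   # returns at most 5000 rows for plain SELECTs
    return cursor.fetchall()
```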
This post is a couple of years old. There have been a few efforts in this area since then, though nothing in the main repo. Redash doesn’t parse SQL into an AST, as this isn’t feasible for the 40+ data sources…