How do I speed up execution of queries that expect large data?

I am using the Bitnami Redash AMI in an AWS environment (t3.small). I am expecting a large amount of data from one of my sources. How do I minimize the time taken to load the results? Is it possible to control threads and memory read/write operations? Sometimes even simple queries fail to load results because the data is large.

I am going to use these results in QRDS (Query Results Data Source) later, so memory writes are required. But I observed that loading the results takes more time than the execution itself.

How do I improve the performance? Any suggestions? :slight_smile:

How much data are you talking about? Redash can visualize around ~20 MB of data at once, which shouldn’t be a performance issue. You can squeeze out better backend performance with a larger instance size (more RAM helps). But this only gets you perhaps 200 MB, and only if you are using Redash exclusively as an API proxy to your database. It will crash if you load it in your browser. If you’re merging large datasets you should use a proper ETL tool.
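If you only need the result set programmatically (the API-proxy pattern mentioned above), you can skip the browser entirely and pull the cached results over Redash’s REST API. Here is a minimal sketch; the base URL, query ID, and API key are placeholders you’d replace with your own:

```python
import json
import urllib.request

BASE_URL = "https://redash.example.com"  # hypothetical Redash instance
QUERY_ID = 42                            # hypothetical query ID
API_KEY = "your_api_key_here"            # user or per-query API key

def results_url(base, query_id, api_key):
    # Redash serves the latest cached result set for a query at this endpoint.
    return f"{base}/api/queries/{query_id}/results.json?api_key={api_key}"

def fetch_rows(url):
    # Stream the JSON payload directly instead of rendering it in the
    # browser, which avoids the front-end choking on large result sets.
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return payload["query_result"]["data"]["rows"]

url = results_url(BASE_URL, QUERY_ID, API_KEY)
# rows = fetch_rows(url)  # uncomment against a real instance
```

This only returns the cached result from the last run, so it is cheap on the server; it does not re-execute the query.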

Also, don’t use the Bitnami images. They’re unofficial and have lots of performance and maintenance issues.