The database is Oracle 12c and the table contains about 1 million rows.

I am running two sets of queries. The first set:

```sql
select * from table_name
select * from table_name where rownum <= 10
```

Another:

```sql
select * from table_name fetch first 10 rows only
```

The first set does not run and stays stuck for a long time, while the second set runs just fine. I am not sure how to debug it.
I assume a plain select does not fetch the whole table at once, or does it? Cancelling the query also takes a long time (which I think is somewhat expected).
There must be some setup issue, but I can't figure it out. I need some help here. Thanks

EDIT: I had to flush Redis, and only then did things start working. I believe the long-running queries are getting stuck and subsequent queries are then suffering.

It’s not a bug. Redash loads query results directly into your browser. If a query returns 1 million rows, I’d expect the entire browser tab to crash :stuck_out_tongue:

Why do you need to load so much information at once? Usually, result sets around 20 MB are the sweet spot (though it varies from browser to browser).

Ahh, thanks for the reply. Can I put a default limit (say, 1000 rows) on a plain `select *` query somewhere on the backend? Is it possible by tweaking the code somewhere?
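To show what I have in mind, here is a minimal sketch of wrapping an arbitrary query in an Oracle `ROWNUM` filter before it runs. The function name `apply_default_limit` and the idea of hooking it into the backend are hypothetical, not Redash's actual API:

```python
def apply_default_limit(sql, limit=1000):
    """Wrap a SELECT so Oracle returns at most `limit` rows.

    Uses the classic ROWNUM wrapper; Oracle 12c could equally append
    `fetch first N rows only` instead.
    """
    stripped = sql.strip().rstrip(";")
    return f"select * from ({stripped}) where rownum <= {limit}"

print(apply_default_limit("select * from table_name"))
# select * from (select * from table_name) where rownum <= 1000
```

The subquery wrapper keeps the original query untouched, so it would work for any `select` the user types.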

Hi @nihalsharma. If you want to view a large number of records, you have to page the data. No viewer will be able to load millions of records in one go.
As I understand it, Redash loads the full result set and then pages the table visualization. But this means all the data is already in memory before it can be paged.
I would page the data before it reaches Redash. The remaining issue is finding a way to let the user select which page to load.
I have not done this myself, but I imagine you could add a parameter to the query that simulates paging?
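For illustration, paging can be simulated with a limit/offset driven by a page number, which could come from a query parameter. A minimal sketch against an in-memory SQLite table (the table name, page size, and data are made up; Oracle would use `OFFSET … FETCH NEXT …` instead of `LIMIT … OFFSET …`):

```python
import sqlite3

# Build a small stand-in table: 100 rows with ids 1..100.
conn = sqlite3.connect(":memory:")
conn.execute("create table big_table (id integer primary key, val text)")
conn.executemany("insert into big_table (val) values (?)",
                 [(f"row-{i}",) for i in range(1, 101)])

def fetch_page(conn, page, page_size=10):
    """Return one page of rows; `page` is 1-based."""
    offset = (page - 1) * page_size
    return conn.execute(
        "select id, val from big_table order by id limit ? offset ?",
        (page_size, offset),
    ).fetchall()

print(fetch_page(conn, 3)[0])  # first row of page 3 -> (21, 'row-21')
```

Only one page ever leaves the database, so the viewer never has to hold the full result set in memory; the `order by` is important so pages stay stable between requests.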