Hi there,
we have an automatic process that creates tables on top of data in S3. Unfortunately, not all of the tables are created correctly, and some of them cannot be queried. All tables come from a single data source and can be browsed in Redash.
Is there any way to identify the list of unqueryable tables in Redash?

Many thanks.

Interesting question.

I don’t think you’ve provided enough information to answer it, though.

What makes the tables unqueryable? Is there a commonality to their names?

For instance, a table created on top of compressed JSON files. Here is a sample of such a query:

This isn’t really a question about Redash, since this error is coming from AWS.

What is your end-goal here? Do you want to not display these tables in the schema browser?

The end goal here is:

  1. identify unqueryable tables via Redash (if it’s possible);
  2. hide unqueryable tables in the schema browser.

For 1. - AFAIK the only way to identify these tables in Redash alone is to query them one by one, which is not a good idea. Instead, you should specify a naming convention for your unqueryable tables. For example: is every table whose name ends in json_gzip unqueryable?
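
If you really did want to probe them programmatically, that would have to happen outside Redash, e.g. against the Athena API directly. A rough sketch, assuming your S3 tables are queried through Athena; the database name, table list, and output location below are placeholders for your own values:

```python
import time
import boto3

athena = boto3.client("athena")

def is_queryable(database, table, output_location):
    """Run a trivial query against the table and report whether it succeeds."""
    execution = athena.start_query_execution(
        QueryString='SELECT 1 FROM "{}" LIMIT 1'.format(table),
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    query_id = execution["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state == "SUCCEEDED"
        time.sleep(1)

# Example usage (placeholder values):
# broken = [t for t in ["table_a", "table_b_json_gzip"]
#           if not is_queryable("my_database", t, "s3://my-athena-results/")]
```

This runs one trivial query per table, so it is slow and costs a query execution per table -- another reason a naming convention is the better route.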

For 2. - Once you have specified a naming convention, you can modify the get_schema method in your local query runner to exclude the unqueryable tables. There are lots of ways to do this once you know exactly how your unqueryable tables are named.
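
For illustration, here is a minimal sketch of the kind of filter you could add to get_schema in your local query runner (e.g. redash/query_runner/athena.py). It assumes the schema is built as a list of dicts with a "name" key, which is how the Redash query runners typically shape it, and that json_gzip is the suffix you settle on -- adjust both for your setup:

```python
# Hypothetical naming convention for unqueryable tables -- adjust to yours.
UNQUERYABLE_SUFFIX = "json_gzip"

def filter_unqueryable(schema):
    """Drop tables whose names mark them as unqueryable."""
    return [table for table in schema
            if not table["name"].endswith(UNQUERYABLE_SUFFIX)]

# Inside your query runner's get_schema(), just before it returns:
#     schema = filter_unqueryable(schema)
```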

Thanks a lot, Jesse!
