In addition to that, we have also encountered long queue times for BigQuery sources. However, if we run the same query directly in BigQuery, we don't have the problem. Furthermore, after we shortened the column names, the query ran smoothly. Here is an example of the renamed columns (a sketch of how the aliases appear in the query follows the list):
expected_delivery_date ==> expct_dlv_date,
payment_invoice_url ==> pmnt_inv_url,
purchasing_entity_id ==> purch_ent_id,
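For illustration, this is roughly how the shortened aliases look in the query; the project, dataset, and table names below are just placeholders, not our real ones:

```sql
-- Placeholder project/dataset/table; the real query selects from our own tables.
SELECT
  expected_delivery_date AS expct_dlv_date,
  payment_invoice_url    AS pmnt_inv_url,
  purchasing_entity_id   AS purch_ent_id
FROM `my_project.my_dataset.orders`
```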
Is there a maximum character limit for column names for BigQuery data sources in Redash?
Welcome to the forum! I think there are a few things going on here that aren't related to one another.
If you have that many jobs in the queue and the number doesn't seem to go down over time, that indicates you haven't got enough worker threads provisioned to handle so many jobs. Try increasing the value of WORKERS_COUNT in the environment for your worker service. Be careful not to make this number too high, as it will increase the RAM and CPU footprint of your worker service (if the number is 4 right now, try 8; but if the number is 40, don't try 80).
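If you're on a docker-compose based deployment, the change would look roughly like the sketch below. The service name, image tag, and queue list are assumptions that depend on your own setup; WORKERS_COUNT is the only setting being changed here.

```yaml
# Hypothetical fragment of docker-compose.yml -- service name, image, and
# QUEUES value are assumptions; adjust them to match your deployment.
services:
  adhoc_worker:
    image: redash/redash
    command: worker
    environment:
      QUEUES: "queries"
      WORKERS_COUNT: 8   # e.g. doubled from 4; avoid jumping to a very large value
```

After changing it, recreate the worker container (e.g. `docker-compose up -d`) and watch whether the queue actually drains.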
Please post a follow-up with your outcome here, just to confirm that it works as expected.
What happens when the column alias is too long? Does the query fail? Do you see any results? If it fails, can you check your Redash logs to see what the stack trace is?