Schema Queues Stuck and BigQuery Column Name Issues

Issue Summary

We have a large number of schema jobs queued (372 out of 385 total jobs), as shown in the screenshot below.


Is there a specific reason for this?

In addition, we have also encountered long queue times for BigQuery sources. However, if we run the same query directly in BigQuery, there is no problem. Furthermore, after we shortened the column names, the query ran smoothly. Here are examples of the renamed columns:
  • expected_delivery_date ==> expct_dlv_date
  • payment_invoice_url ==> pmnt_inv_url
  • purchasing_entity_id ==> purch_ent_id
Is there a maximum column-name length for BigQuery data sources in Redash?

Technical details:

  • Redash Version: 9.0.0-beta (2641562b)
  • Browser/OS: Chrome / macOS

Welcome to the forum! I think there are a few things going on here that aren’t related to one another.

If you have that many jobs in the queue and the number doesn’t seem to shrink over time, it usually means you haven’t provisioned enough worker threads to handle the load. Try increasing the value of WORKERS_COUNT in the environment for your worker service. Be careful not to set it too high, as it will increase the RAM and CPU footprint of your worker service (if the number is 4 right now, try 8; but if the number is 40, don’t try 80 :joy:).
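For anyone following along, in a standard docker-compose deployment that change might look something like the excerpt below. This is a sketch only; the service name and the counts are assumptions about your setup, so adjust them to match your own compose file.

```yaml
# docker-compose.yml (excerpt) -- a sketch, assuming the stock Redash compose file
worker:
  command: worker
  environment:
    QUEUES: "queries,scheduled_queries,schemas"  # queues this worker consumes
    WORKERS_COUNT: 8  # e.g. double the current value, then watch RAM/CPU usage
```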


Thank you for the reply! I just noticed that the queued jobs are for schema updates.
I’ll try adding some workers for the schema reloads! :grin:

However, I still can’t find a solution for the long column names :frowning:

Please post a follow-up here with your outcome, just to confirm it works as expected.

What happens when the column alias is too long? Does the query fail? Do you see any results? If it fails, can you check your Redash logs to see what the stack trace is?


Just added a worker dedicated to the schemas and scheduled_queries queues.
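In case it helps anyone else, that dedicated worker can be sketched roughly like this (the service name schemas_worker is made up; the queue names are the ones above):

```yaml
# docker-compose.yml (excerpt) -- a sketch of a worker pinned to two queues
schemas_worker:
  command: worker
  environment:
    QUEUES: "schemas,scheduled_queries"  # only these queues, so schema refreshes don't starve
    WORKERS_COUNT: 2
```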

It also solved the column-naming issue.

Anyway, thank you @jesse for your help!
