Trino not showing schema

Issue Summary

I am able to connect to my Trino instance via Redash. However, the schema does not display when creating a new query, so end users can’t see what tables are available. I am still able to run queries against the Trino catalogs and get results, though.

Please advise.

Technical details:

  • Redash Version: v10.1.0 running on K8s
  • Browser/OS: Chrome Version 102.0.5005.61 (Official Build) (x86_64)
  • How did you install Redash: Docker Compose from the Redash GitHub repo
  • Trino Version: latest docker image (trinodb/trino)

This could happen for a few reasons. There’s a non-zero chance that your k8s installation is set up incorrectly (we see a lot of this, since there’s no official documentation for how to do it). It could be that the schema is so large that the fetch fails. Or it could be that there are no workers monitoring the queue that contains schema refresh jobs.
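To rule out that last one, you can list the workers and the queues they listen on directly from Redis. A minimal sketch using the rq library (the Redis URL is a placeholder; Redash v10 runs its jobs on RQ over Redis):

# Sketch: print every RQ worker and the queues it listens on.
# The Redis URL is a placeholder; point it at Redash's Redis instance.
from redis import Redis
from rq import Worker

conn = Redis.from_url("redis://localhost:6379/0")
for worker in Worker.all(connection=conn):
    print(worker.name, [q.name for q in worker.queues])

If nothing lists the schemas queue, refreshes will never run.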

This is the query that should run when a schema refresh is called. Does this query return results when you execute it from the query screen?

SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')

Do you see any failed jobs or crashes in your logs?

Thanks @jesse. I ran the query in the query screen and it returned the error below:

Error running query: Query exceeded Redash query execution time limit.

In the System Status screen I do see 4 queues: default, periodic, queries, schemas. I have 3 worker pods running as well. No failures in the logs.
We do have a lot of schemas in Redshift, so it could just be that. Would increasing the Redash query execution time limit be an option?

Oh, fascinating. Yes, you can increase the limit. I wonder if adding LIMIT 1000 allows it to return?
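Something like this, i.e. the same fetch query with a limit appended:

SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
LIMIT 1000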

So I tried increasing REDASH_ADHOC_QUERY_TIME_LIMIT to 1000 and no luck.
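For reference, this is roughly how I set it (a sketch; the service name is assumed, adjust to match your deployment):

# docker-compose.yml fragment (service name assumed):
services:
  server:
    environment:
      REDASH_ADHOC_QUERY_TIME_LIMIT: 1000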

The ad-hoc time limit has no effect on schema refreshes. Any feedback on attempting to append LIMIT 1000 to the query?

It ran for 16+ minutes and then this:
Error running query: Query exceeded Redash query execution time limit.

[2022-06-06 21:38:46,774][PID:16915][INFO][trino.client] <Response [204]>
[2022-06-06 21:38:46,774][PID:16915][WARNING][rq.job.redash.tasks.queries.execution] job.func_name=redash.tasks.queries.execution.execute_query job.id=98b81ff0-4980-498e-8382-6739f324ede5 Unexpected error while running query:
Traceback (most recent call last):
  File "/app/redash/tasks/queries/execution.py", line 188, in run
    data, error = query_runner.run_query(annotated_query, self.user)
  File "/app/redash/query_runner/trino.py", line 124, in run_query
    results = cursor.fetchall()
  File "/usr/local/lib/python3.7/site-packages/trino/dbapi.py", line 457, in fetchall
    return list(self.genall())
  File "/usr/local/lib/python3.7/site-packages/trino/client.py", line 443, in __iter__
    rows = self._query.fetch()
  File "/usr/local/lib/python3.7/site-packages/trino/client.py", line 519, in fetch
    response = self._request.get(self._request.next_uri)
  File "/usr/local/lib/python3.7/site-packages/trino/client.py", line 356, in get
    proxies=PROXIES,
  File "/usr/local/lib/python3.7/site-packages/trino/exceptions.py", line 128, in decorated
    raise error
  File "/usr/local/lib/python3.7/site-packages/trino/exceptions.py", line 115, in decorated
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 384, in _make_request
    six.raise_from(e, None)
  File "<string>", line 2, in raise_from
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 380, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/local/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/usr/local/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/usr/local/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/usr/local/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 309, in recv_into
    if not util.wait_for_read(self.socket, self.socket.gettimeout()):
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/wait.py", line 143, in wait_for_read
    return wait_for_socket(sock, read=True, timeout=timeout)
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/wait.py", line 104, in poll_wait_for_socket
    return bool(_retry_on_intr(do_poll, timeout))
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/wait.py", line 42, in _retry_on_intr
    return fn(timeout)
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/wait.py", line 102, in do_poll
    return poll_obj.poll(t)
  File "/usr/local/lib/python3.7/site-packages/rq/timeouts.py", line 64, in handle_death_penalty
    '({0} seconds)'.format(self._timeout))
rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (1000 seconds)

How can I see failed job details from System Status > RQ Status > Workers? Would that only be populated if a scheduled run fails, and not for ad-hoc queries?
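In the meantime I tried inspecting Redis directly. A rough sketch with the rq library (the Redis URL is a placeholder):

# Sketch: list failed jobs on the schemas queue, with their tracebacks.
# The Redis URL is a placeholder; point it at Redash's Redis instance.
from redis import Redis
from rq import Queue
from rq.registry import FailedJobRegistry

conn = Redis.from_url("redis://localhost:6379/0")
queue = Queue("schemas", connection=conn)
for job_id in FailedJobRegistry(queue=queue).get_job_ids():
    job = queue.fetch_job(job_id)
    print(job_id, job.exc_info if job else "(job expired)")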

Just to clarify: you received this error after you applied a LIMIT 1000 to the schema fetch query?

[edit] It’s very surprising that a LIMIT 1000 query would hit a time limit.

That is correct. I added LIMIT 1000 to the query and received that error. Anything else I should look at?

How bizarre.

What about LIMIT 1?

Still timing out :frowning_face: with the error below.
Error running query: Query exceeded Redash query execution time limit.

When I’m connected to our Apache Pinot catalog it works, but when pointing to our Redshift catalog it times out. We are using Superset as well, and it works fine with both catalogs. It seems to be something between Redash and Trino.

Thanks for following up. I spun up an instance of Trino to play around with this.

I’d like to clarify: when you say “when pointing to our Redshift catalog it times out”, do you mean that you are changing the Catalog setting under the Redash data source settings?

Hi Jesse, thanks for continuing to look into this! Yep, I just changed the catalog setting under the Redash data source.

What happens if you run something like:

SHOW SCHEMAS FROM redshift

or whatever the name of your redshift catalog is?

That ran fine, without any errors.

Thanks.

From what you posted, it looks like the schema fetch succeeds, but fetching the result times out the worker. By running SHOW SCHEMAS FROM ${your_catalog} we prove that it can work.

How many rows are returned by this?

Hi Jesse, the query returns 54 records.

Got it. What happens if you configure not only the catalog but also the schema in Redash data source configuration settings?

I’m beginning to suspect that the issue here is simply that your schema is enormous. The only other way to gather it would be to loop through SHOW TABLES FROM {catalog}.{schema} for each schema in the redshift catalog.
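A rough sketch of that loop with the trino Python client (connection details are placeholders):

import trino

# Sketch: enumerate tables one schema at a time instead of one giant
# information_schema.columns scan. Connection details are placeholders.
conn = trino.dbapi.connect(
    host="trino.example.com", port=8080, user="redash", catalog="redshift"
)

schemas = conn.cursor()
schemas.execute("SHOW SCHEMAS FROM redshift")
for (schema,) in schemas.fetchall():
    if schema == "information_schema":
        continue
    tables = conn.cursor()
    tables.execute(f"SHOW TABLES FROM redshift.{schema}")
    for (table,) in tables.fetchall():
        print(f"{schema}.{table}")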

Just how many tables are in your Redshift, all together?

Wait for it…3,295 tables across 51 schemas.

I believe waiting for it is the problem XD.


Given that information, I think the schema is simply too large. Each of those tables will have multiple fields at least, and the Redash schema component expects fewer than ~5,000 tokens (table names + field names). With 3,295 tables, even at, say, 10 columns per table you’d be near 36,000 tokens, several multiples of that limit.

The sad truth is our schema browsing behaviour doesn’t work well for data sources where the amount of metadata is this substantial. For the Databricks connector we actually wrote a custom UI component that can traverse different catalogs and schemas. And even then the performance isn’t great without a background job that warms specific caches periodically (read more about that here).

Maybe we could repurpose the Databricks component for Trino as well. But longer-term I think we’ll want a different solution. I’m struggling to think of a short-term fix that would improve this behaviour for you short of specifying exactly which schemas to fetch (which is how we did it in the first version of the Databricks connector).
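For illustration, pinning the fetch to a whitelist of schemas would look something like this (the schema names here are hypothetical):

SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE table_schema IN ('analytics', 'reporting')  -- hypothetical names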