Redash is a very powerful tool and we're already using it with BigQuery for some use cases. Unfortunately, we're tied to a Teradata MPP data warehouse for highly sensitive and critical data.
Teradata published a Python module (https://developer.teradata.com/tools/reference/teradata-python-module) last year. Since we're already familiar with this module, we made a first attempt at a query runner (https://gist.github.com/mrbungie/5fcc076e67e047c8a61de706403b1138), but it's still in rough shape.
There are a few problems with our current implementation:
- Teradata's module uses ODBC, so there are a few dependencies (unixodbc and the Teradata ODBC driver) that may add complexity to the project; see the connection sketch below.
- It also handles data types in such a way that NaN and Infinity values end up in the output of json.dumps, which produces invalid JSON. We worked around this with a custom JSON encoder, but it's dirty. Another option would be a custom data type handler; see the sanitizer sketch below.
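For reference, a minimal connection sketch with the Teradata Python module over ODBC looks roughly like this. The host name and credentials are placeholders, and it assumes unixodbc and the Teradata ODBC driver are already installed on the machine running Redash:

```python
import teradata

# UdaExec is the entry point of the Teradata Python module.
udaExec = teradata.UdaExec(appName="redash", version="1.0", logConsole=False)

# method="odbc" is what pulls in the unixodbc / Teradata ODBC driver dependencies.
with udaExec.connect(method="odbc",
                     system="tdprod.example.com",   # placeholder host
                     username="redash_user",        # placeholder credentials
                     password="secret") as session:
    for row in session.execute("SELECT 1 AS ok"):
        print(row[0])
```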
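As an alternative to the custom JSON encoder, one possible workaround (a sketch, not what our gist currently does) is to sanitize result rows before serialization, replacing non-finite floats with null so json.dumps always emits valid JSON:

```python
import json
import math


def sanitize(value):
    """Recursively replace NaN/Infinity floats with None so the output is valid JSON."""
    if isinstance(value, float) and not math.isfinite(value):
        return None
    if isinstance(value, dict):
        return {k: sanitize(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [sanitize(v) for v in value]
    return value


rows = [{"revenue": float("nan"), "ratio": float("inf"), "count": 3}]
# json.dumps(rows) would emit the tokens NaN / Infinity, which are not valid JSON.
print(json.dumps(sanitize(rows)))  # -> [{"revenue": null, "ratio": null, "count": 3}]
```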
Both Redash and Teradata are excellent tools by themselves, but we think integrating them would make them even more powerful.
PS: Sorry about my english.