Hi all,
I didn’t see Spark SQL in the list of data sources, but it would be very nice to implement it. What would it require?

I finally had the time to document the process of creating a new query runner: Creating a new query runner (data source) in Redash

As long as the data source has a Python driver, it’s a simple process.
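For a rough idea of what the guide covers, here is a minimal sketch of a query runner built on Redash’s BaseQueryRunner. The class name, configuration fields, and driver call are placeholders, and the exact return format of run_query can differ between Redash versions, so treat it as illustrative only.

```python
# Rough sketch of a new query runner, following the shape of Redash's
# BaseQueryRunner interface. Names, configuration fields, and the exact
# return format of run_query may differ between Redash versions.
import json

from redash.query_runner import BaseQueryRunner, register


class MyDataSource(BaseQueryRunner):
    @classmethod
    def configuration_schema(cls):
        # Fields shown in the data source configuration form (assumed example).
        return {
            "type": "object",
            "properties": {
                "host": {"type": "string"},
                "port": {"type": "number"},
            },
            "required": ["host"],
        }

    def run_query(self, query, user):
        # Call the data source's Python driver here and map its results to
        # Redash's {"columns": [...], "rows": [...]} shape.
        try:
            columns, rows = [], []  # driver call would populate these
            return json.dumps({"columns": columns, "rows": rows}), None
        except Exception as exc:
            return None, str(exc)


register(MyDataSource)
```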

You need to make sure the standalone application you’re launching runs under Python 3. If you are submitting your standalone program through spark-submit it should work fine, but if you are launching it directly with python, make sure you use python3 to start your app.
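As a quick illustration (not from the posts above), here is a minimal standalone PySpark app; the master URL and app name are placeholders for your own setup. You could launch it either with `python3 app.py` or by handing it to `spark-submit app.py`.

```python
# Minimal standalone PySpark app sketch. The master URL below is a placeholder
# for your own standalone cluster; launch the script with `python3 app.py`
# (or via `spark-submit app.py`).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-sql-smoke-test")          # hypothetical app name
    .master("spark://your-master-host:7077")  # placeholder standalone master
    .getOrCreate()
)

spark.sql("SELECT 1 AS ok").show()
spark.stop()
```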

Also make sure you have set your environment variables in ./conf/spark-env.sh (if it doesn’t exist, you can use spark-env.sh.template as a base).
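If it helps, here is a small sanity-check sketch for the environment variables typically configured in spark-env.sh. Which variables you actually need depends on your deployment, so the list below is an assumption for illustration; it just ties back to the Python 3 note above.

```python
# Sanity-check sketch: print Spark-related environment variables that are
# usually configured in ./conf/spark-env.sh. The exact set you need depends
# on your deployment; this list is an assumption for illustration.
import os
import sys

for var in ("SPARK_HOME", "PYSPARK_PYTHON", "PYSPARK_DRIVER_PYTHON"):
    print(f"{var} = {os.environ.get(var)!r}")

# The app itself should be running under Python 3 (see the note above).
print("interpreter:", sys.version.split()[0])
```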

Hi,
I know this may seem a bit newbie, and actually I am one. But I don’t understand why it has to be standalone and not run with YARN.