Unable to refresh large dataset via API

#1

Hi,
I am trying to fetch 10 million rows through the Redash API, but the request keeps failing.

Has anyone faced the same issue?
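For context, the client-side pattern looks roughly like this. This is a sketch, not my exact script: the base URL, API key, and query id are placeholders, the `/api/jobs/<id>` path matches the calls visible in the logs below, and I'm assuming Redash's usual job status codes (3 = finished, 4 = failed) and the refresh/results endpoints.

```python
import time
import requests

# Placeholders -- substitute your own instance URL and user API key.
REDASH_URL = "https://redash.example.com"
API_KEY = "your-api-key"


def job_url(base_url, job_id):
    """Build the polling URL seen in the logs (GET /api/jobs/<id>)."""
    return "{}/api/jobs/{}".format(base_url, job_id)


def fetch_results(query_id, session):
    # Kick off a fresh execution of the query.
    resp = session.post("{}/api/queries/{}/refresh".format(REDASH_URL, query_id))
    resp.raise_for_status()
    job = resp.json()["job"]

    # Poll the job endpoint until the worker finishes (3) or fails (4).
    while job["status"] not in (3, 4):
        time.sleep(1)
        job = session.get(job_url(REDASH_URL, job["id"])).json()["job"]

    if job["status"] == 4:
        raise RuntimeError("Query execution failed: {}".format(job.get("error")))

    # Stream the CSV result to disk so the client never holds 10M rows in RAM.
    result_id = job["query_result_id"]
    with session.get(
        "{}/api/queries/{}/results/{}.csv".format(REDASH_URL, query_id, result_id),
        stream=True,
    ) as r:
        r.raise_for_status()
        with open("results.csv", "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)


if __name__ == "__main__":
    s = requests.Session()
    s.headers["Authorization"] = "Key {}".format(API_KEY)
    fetch_results(42, s)  # 42 is a placeholder query id
```

Even with streaming on the client, the failure happens server-side while the worker is still executing the query, as the logs show.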

LOGS:
server_1 | [2019-05-08 08:48:45,393][PID:1294][INFO][metrics] method=GET path=/api/jobs/f6c54abd-3350-4883-88f9-79f934196d75 endpoint=job status=200 content_type=application/json content_length=123 duration=0.80 query_count=2 query_duration=2.38
nginx_1 | 192.168.51.117 - - [08/May/2019:08:48:45 +0000] "GET /api/jobs/f6c54abd-3350-4883-88f9-79f934196d75 HTTP/1.1" 200 137 "-" "python-requests/2.21.0" "-"
adhoc_worker_1 | [2019-05-08 08:48:45,436][PID:1][ERROR][MainProcess] Process 'ForkPoolWorker-15' pid:158 exited with 'signal 9 (SIGKILL)'
adhoc_worker_1 | [2019-05-08 08:48:45,451][PID:1][ERROR][MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)
adhoc_worker_1 | Traceback (most recent call last):
adhoc_worker_1 | File "/usr/local/lib/python2.7/dist-packages/billiard/pool.py", line 1223, in mark_as_worker_lost
adhoc_worker_1 | human_status(exitcode)),
adhoc_worker_1 | WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL).