Safe to upgrade from 4.x to 6.x?


We have a Redash deployment, via Docker, that is running the 4.x branch and would like to upgrade to 6.x. Can we do a direct upgrade/data migration, or should we upgrade to 5.x then 6.x in sequence?


It’s actually not a bad idea to do upgrades in one-version increments, although in this case at least one of the intermediate versions didn’t include any schema migrations at all.
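For reference, a typical per-version upgrade step for a Docker Compose deployment looks roughly like this (the image tag and the `server` service name are assumptions based on a standard Redash docker-compose.yml; adjust to your setup):

```shell
# Pull the next release in sequence (example tag; use the one you are upgrading to)
docker pull redash/redash:5.0.2.b5486

# After updating the image tag in docker-compose.yml, apply the database
# migrations before starting the application containers
docker-compose run --rm server manage db upgrade

# Bring the deployment back up on the new version
docker-compose up -d
```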


OK, we started by running the migration from 4.0.0.b3948 to 5.0.2.b5486 and got this traceback:

[2019-01-16 09:09:23,088][PID:1][ERROR][MainProcess] Task redash.tasks.refresh_queries[37b7eb64-bf46-4642-ba4d-991468505cec] raised unexpected: ProgrammingError('(psycopg2.ProgrammingError) column queries.tags does not exist\nLINE 1: ..., queries.search_vector AS queries_search_vector, queries.ta...\n ^\n',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/app/redash/", line 71, in __call__
return TaskBase.__call__(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/", line 438, in __protected_call__
return*args, **kwargs)
File "/app/redash/tasks/", line 275, in refresh_queries
for query in models.Query.outdated_queries():
File "/app/redash/", line 1010, in outdated_queries
for query in queries:
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/", line 2925, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/", line 2948, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/", line 948, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/", line 1200, in _execute_context
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/", line 1413, in _handle_dbapi_exception
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/", line 1193, in _execute_context
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/", line 507, in do_execute
cursor.execute(statement, parameters)
ProgrammingError: (psycopg2.ProgrammingError) column queries.tags does not exist
LINE 1: ..., queries.search_vector AS queries_search_vector, queries.ta...
[SQL: 'SELECT queries.query AS queries_query, queries.updated_at AS queries_updated_at, queries.created_at AS queries_created_at, AS queries_id, queries.version AS queries_version, queries.org_id AS queries_org_id, queries.data_source_id AS queries_data_source_id, queries.latest_query_data_id AS queries_latest_query_data_id, AS queries_name, queries.description AS queries_description, queries.query_hash AS queries_query_hash, queries.api_key AS queries_api_key, queries.user_id AS queries_user_id, queries.last_modified_by_id AS queries_last_modified_by_id, queries.is_archived AS queries_is_archived, queries.is_draft AS queries_is_draft, queries.schedule AS queries_schedule, queries.schedule_failures AS queries_schedule_failures, queries.options AS queries_options, queries.search_vector AS queries_search_vector, queries.tags AS queries_tags, AS query_results_1_id, query_results_1.retrieved_at AS query_results_1_retrieved_at \nFROM queries LEFT OUTER JOIN query_results AS query_results_1 ON = queries.latest_query_data_id \nWHERE queries.schedule IS NOT NULL ORDER BY'] (Background on this error at:
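The failing query references `queries.tags`, a column added by one of the 5.x migrations. One way to check whether that migration actually reached the database is to inspect the schema directly (the `postgres` service name and `-U postgres` credentials here are assumptions from a standard Docker Compose deployment; substitute your own):

```shell
# Returns a row containing "tags" if the 5.x migration has been applied
docker-compose exec postgres psql -U postgres -c \
  "SELECT column_name FROM information_schema.columns
   WHERE table_name = 'queries' AND column_name = 'tags';"
```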


I just stepped through each of the intermediate versions in the 4.x and 5.x series up to 6.x, and things work. Do the migrations normally account for the current version of the schema and run all the intermediate steps automatically? That would make upgrades a bit easier.


Yes, migrations are incremental: they run from the revision your database is currently at up to the latest, so you don’t need to apply each version’s migrations by hand.
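Redash’s migrations are managed with Flask-Migrate/Alembic, so you can inspect where your database currently stands before upgrading (the `server` service name is an assumption from a standard Docker Compose deployment):

```shell
# Show the Alembic revision the database is currently at
docker-compose run --rm server manage db current

# List the full migration history, with the current revision marked
docker-compose run --rm server manage db history
```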

The exception above is probably because Celery was still running while you were migrating, so its workers were issuing queries against the old schema.
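To avoid that, stop everything that talks to the database before migrating and only bring it back afterwards. A rough sequence (the `server`, `scheduler`, and `worker` service names are assumptions; match them to your docker-compose.yml):

```shell
# Stop the app and Celery containers, leaving Postgres and Redis running
docker-compose stop server scheduler worker

# Run the migrations with no workers polling the old schema
docker-compose run --rm server manage db upgrade

# Restart everything on the new schema
docker-compose up -d
```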