```
[2020-07-24 17:16:47,299][PID:34][ERROR][ForkPoolWorker-16] Task redash.tasks.refresh_queries[66861e35-fa05-4164-9aa4-e0c0c1c76585] raised unexpected: TypeError('float() argument must be a string or a number',)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
    query_text = query.parameterized.apply(parameters).query
  File "/app/redash/models/parameterized_query.py", line 125, in apply
    invalid_parameter_names = [key for (key, value) in parameters.iteritems() if not self._valid(key, value)]
  File "/app/redash/models/parameterized_query.py", line 169, in _valid
    return validate(value)
  File "/app/redash/models/parameterized_query.py", line 89, in _is_number
    float(string)
TypeError: float() argument must be a string or a number
```
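For what it's worth, that TypeError is just what float() raises when it is handed something that is neither a string nor a number, so my guess (and it is only a guess) is that one of the saved queries has a parameter whose stored value is None or a dict. A quick illustration in plain Python, nothing Redash-specific:

```python
# Plain-Python illustration of the error above (not Redash code).
# float() accepts strings and numbers:
print(float("3.14"))   # 3.14
print(float(7))        # 7.0

# ...but anything else (None, a dict, a list) raises the same TypeError as in
# the traceback; the exact wording shown there is Python 2.7's.
try:
    float(None)
except TypeError as exc:
    print(exc)
```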
The current deployment uses a database migrated from a self-hosted Redash deployment that was running version 7.0.0 on a GCP VM.
If I change the env values for the adhocworker in the helmfile values, the problem seems to be “solved”, but of course that doesn't actually meet my needs.
I think this is the portion of the helmfile that is causing the error, but I don't quite understand why it's happening.
Hey, thanks for your quick answer. I did not do anything to update the schema.
I did a pg_dump of my entire database running on the VM, then restored it inside the postgres pod.
Before that, I created the redash and redash_reader roles in psql with their specific permissions.
I can see some dashboards but not all.
How can I update the schema properly? Sorry, but I can't find any documentation or issue referring to this.
I think the problem is that you took a database from v7 and tried running v8 on top of it. The database migrations happen when you upgrade from one version to another.
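One way to check is to look at the schema revision that Redash's migrations (Flask-Migrate/Alembic) record in the alembic_version table of the restored database. A minimal sketch, assuming you can reach the Postgres pod (e.g. via kubectl port-forward) and with placeholder credentials:

```python
# Print the Alembic schema revision stored in the restored Redash database.
# Host and credentials are placeholders; point them at your Postgres pod.
import psycopg2

conn = psycopg2.connect(
    host="localhost",       # e.g. after `kubectl port-forward` to the pod
    port=5432,
    dbname="redash",
    user="redash",
    password="change-me",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version_num FROM alembic_version;")
    print("Current schema revision:", cur.fetchone()[0])

conn.close()
```

If that revision still corresponds to v7, the v8 code is running against a schema it doesn't expect, which is consistent with errors like the one above; running the database migrations (Redash's `manage db upgrade` step) should bring it forward.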
My old Redash is running in v7 on a VM, with PostgreSQL on the same VM. I don't want to upgrade that one, so I created a new helmfile definition to deploy on my new GKE cluster with all the requirements I needed: HA Postgres, external Redis, Redash v8.
Once I got this deployment running and working fine, I stopped the v7 VM Redash and then did a pg_dump (6 GB) of the VM's PostgreSQL.
With that dump, I managed to restore it into the cluster, and up to that point everything went well.
Do you think I need to upgrade my old instance to v8 first, then pg_dump the whole database and transfer that dump to the new cluster?
I think the problem lies in the Python file and the adhocworker QUEUES variable; why do you think it's the migration?
K4s1m, I followed the steps you mentioned.
I can see the database is working very well, but I still have doubts about the upgrade to v8 in the new environment.
This new environment is running on Kubernetes and deployed via Helm. Will upgrading the image tag be enough, or is there anything I could be missing?
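Whatever the answer is, I guess that after bumping the tag I can at least confirm which version the pods are actually serving by hitting Redash's /status.json endpoint with an admin user's API key; the hostname and key below are placeholders:

```python
# Quick post-upgrade sanity check: ask the running Redash instance which
# version it reports. URL and API key below are placeholders.
import requests

REDASH_URL = "https://redash.example.com"   # your ingress/service hostname
ADMIN_API_KEY = "xxxxxxxxxxxx"              # an admin user's API key

resp = requests.get(
    "{}/status.json".format(REDASH_URL),
    params={"api_key": ADMIN_API_KEY},
    timeout=10,
)
resp.raise_for_status()
print("Reported version:", resp.json().get("version"))
```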