"Signature did not match digest" causing 500s after using helm chart

I’ve been experimenting with Redash on a local Kubernetes cluster using the community Helm chart at https://getredash.github.io/contrib-helm-chart/. This setup uses an external Postgres for storage.

This had all been going well: I set up a couple of data sources and lots and lots of queries on them.

However, the last time I tried to reinstall from the Helm chart it repeatedly timed out. I eventually traced this to a problem with how the secrets were being generated; after updating all the keys to new, valid values, Redash came back up again.

The problem now is that whenever I try to do anything involving a data source, Redash throws an “Internal Server Error” message. This happens if I click through to view the results of a query, or even if I just go to Settings.

Looking in the main Redash pod’s log I see the following error:

File "/usr/local/lib/python3.7/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 74, in verify
    raise InvalidSignature("Signature did not match digest.")

I’ve tried re-running the Helm install with various permutations of the secrets I used while trying to fix the previous bug, but without success. I’ve also tried the “default” key mentioned elsewhere in these forums, and I still see the error above.

Is there any way I can:
A) fix my current instance so that it can see the existing data sources again?

Failing that, is there any way I can:
B) at least fix my data source state so that Redash allows me to edit the sources (without losing the queries associated with them)?

And failing even that, is there anything else I can try to get things running again without losing all the query and dashboard data?


If you blew away the secrets used when you created the data sources, then anything encrypted with those secrets is permanently unrecoverable. In this case that’s the data source configurations. Redash encrypts these because that’s where your database passwords are held (they can’t be stored in plain text).

The good news is your queries will be fine. Just recreate your data sources, note their new IDs, and do a mass update of the queries table to point each query at its new data source ID.

OK, thanks for the reply, Jesse.

How would I go about recreating the data sources? Everything in Redash’s UI throws an internal error whenever I get near data source management. Is there some example code (Postgres, presumably) I could use to clean out the existing ones, so I can start adding new ones and get the environment back to a point where I can manage data sources again?

The procedure is going to be a little complicated. You will need to manually connect to your Postgres instance and also run commands using the Redash CLI. It should go without saying that you should back up your Postgres database before attempting this. I had to experiment for a while before I found a way to make this work.

Note: I’m using plain Docker Compose. Kubernetes is not (yet) officially supported for Redash, so you will need to adapt these commands to your environment.

Step 1: Connect to Postgres

docker-compose exec postgres psql -U postgres     


select id, name from data_sources;

This will list your existing data sources. Only the id and name fields are readable. Write down all of the name and id values. Everything else is encrypted with a key that you’ve since lost. Our end goal is to replace these data sources with new ones.

Step 2: Modify the Redash CLI source

You need to modify redash/cli/data_sources.py. The delete() method is defined around line 183. Make it look like the following:

def delete(name, organization="default"):
    """Delete data source by name."""
    from sqlalchemy.orm import load_only

    try:
        org = models.Organization.get_by_slug(organization)
        data_source = (
            models.DataSource.query.filter(
                models.DataSource.name == name, models.DataSource.org == org
            )
            .options(load_only("id", "name"))
            .one()
        )
        print("Deleting data source: {} (id={})".format(name, data_source.id))
        models.db.session.delete(data_source)
        models.db.session.commit()
    except NoResultFound:
        print("Couldn't find data source named: {}".format(name))

The only change here is the added call to .options() in the ORM query, so that the CLI doesn’t attempt to load the entire object prior to deletion. Without this change, the CLI crashes because it can’t decrypt the data source options field. This is the same problem that occurs when you try to load the data source settings screen.

By calling load_only() we limit SQLAlchemy to pulling only the fields that don’t need to be decrypted.
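If you want to see what load_only() does in isolation, here is a minimal sketch against an in-memory SQLite database. The DataSource model, column names, and values below are stand-ins for illustration, not Redash’s actual schema, and it assumes SQLAlchemy 1.4 or newer:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, load_only

Base = declarative_base()

class DataSource(Base):
    # Stand-in model, not Redash's real schema
    __tablename__ = "data_sources"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    encrypted_options = Column(String)  # stands in for the column that can no longer be decrypted

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(DataSource(id=1, name="my-postgres", encrypted_options="<ciphertext>"))
    session.commit()

    # load_only() restricts the SELECT to id and name, so the
    # problematic column is never read during the initial load.
    ds = (
        session.query(DataSource)
        .options(load_only(DataSource.id, DataSource.name))
        .one()
    )
    # Unloaded columns are absent from the instance's __dict__
    # until something touches them.
    options_loaded = "encrypted_options" in ds.__dict__
    print(ds.id, ds.name, options_loaded)  # 1 my-postgres False

    session.delete(ds)  # the delete works off the primary key alone
    session.commit()
    remaining = session.query(DataSource).count()
```

The same idea is what lets the patched CLI delete a row whose options column can’t be decrypted: the ciphertext is simply never fetched.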

Follow-up item for me: open a pull request to add this to master.

Step 3: Call Redash CLI to delete data sources

For each name you wrote down in step 1, execute the CLI data source delete command.

docker-compose run --rm server manage ds delete "<name of data source>"

Step 4: Create new data sources in web application

You can now reload the data source settings screen in Redash and create new ones. Take note of the new data source ID that appears in your URL bar after you press Save. Work out which new data source ID maps to each old ID that you collected in step 1 and deleted in step 3.

Step 5: Update the queries table with the new data_source_id values

In your terminal window from step 1, run SQL commands to update the data_source_id for your queries according to the mapping from step 4.

For example, if you deleted data source 21 and replaced it with data source 29, you can reassign all of its queries like this:

update queries set data_source_id = 29 where data_source_id = 21;

After this, your queries will be visible and should execute like normal.
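If several data sources were remapped, it may help to generate the UPDATE statements from the old-to-new ID mapping rather than typing each one by hand. A small Python sketch; the IDs in id_map below are made-up examples:

```python
def remap_updates(id_map):
    """Build one UPDATE per (old_id, new_id) pair for the queries table."""
    return [
        "update queries set data_source_id = {new} where data_source_id = {old};".format(
            new=new_id, old=old_id
        )
        for old_id, new_id in sorted(id_map.items())
    ]

# Hypothetical mapping collected in steps 1 and 4: old ID -> new ID
id_map = {21: 29, 7: 30}

for stmt in remap_updates(id_map):
    print(stmt)
# update queries set data_source_id = 30 where data_source_id = 7;
# update queries set data_source_id = 29 where data_source_id = 21;
```

You can then paste the generated statements into the psql session from step 1.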

Thanks Jesse, I really appreciate the detail on how to do this.

I’ve been away since the last message, and in my absence one of my colleagues managed to fix the problem (I had pointed them to this thread). We apparently had another instance of Redash that happened to point at the same database, and we were able to use the secret key still available there to restore access to the data.

This means I’ve been unable to confirm whether your steps above would have fixed the problem. If anyone else comes to this thread and performs the steps above, could they add a comment letting the community know whether it worked for them?

Thanks again Jesse for all the support on this.
