My name is “Coworker Who Writes Scripts” (Eric). Hopefully I can contribute a bit and give some perspective on why we originally built out the scripts we did.
Bot as an external tool
As far as process/workflow goes, having this integrated into the re:dash UI would be valuable to our end users from a simplicity standpoint. Much of our work is done within re:dash; we then share links with the original requester so they can view the visualizations/data tables they asked for, and so they can edit the base query if they would like. Given re:dash’s schedule functionality, we can set refreshes to match how often the underlying data is updated, which keeps people coming back to re:dash as the source of truth for their data needs. So, returning to simplicity, it makes sense to have this notification functionality baked directly into the re:dash UI as well.
I also appreciated Arik’s suggestion to change “alerting” to just “notifications”. I think this makes sense: some data sets won’t work as visualizations, and the end user may just want content delivered in text form. That could be periodic information or an alert, but either way I believe it makes sense to nest it all in the same location. I’m not 100% sold on the functionality being built into the query page itself, as I see this working much like building out a dashboard: you press (+) to add a new element, search through existing queries, and then select whether you want just the table data or a visualization posted.
That said, I certainly understand the technical difficulties, so I’m keeping those in mind.
On the topic of whether these posts should correspond to the refreshing of the query’s data, I think they absolutely need to be separate. Our business units clearly want these visualizations and data posted during business hours, but we found out quickly that a mass of scheduled queries running during business hours murdered performance. That’s not to say all queries will be that intensive, and I’m sure business units will want data from the morning, afternoon, etc., but for queries covering days, weeks, or years at a time, it makes sense for the health of our clusters to run some of those queries in off-hours and then post the results when users are working.
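To illustrate the split I mean: a minimal sketch, assuming a hypothetical scheduler with two independent hour-of-day settings per query (the field names here are illustrative, not any actual re:dash API):

```python
from dataclasses import dataclass

# Hypothetical sketch: decouple when a query's data refreshes from when
# its results are posted. A heavy query can run off-hours, and the
# notification can deliver the cached result during business hours.
@dataclass
class NotificationSchedule:
    refresh_hour: int  # hour of day to actually run the (possibly expensive) query
    post_hour: int     # hour of day to post the cached result to the requester

    def action_at(self, hour: int) -> str:
        """Return what the scheduler should do at a given hour of day."""
        if hour == self.refresh_hour:
            return "refresh"  # run the query while the cluster is quiet
        if hour == self.post_hour:
            return "post"     # deliver the already-computed result
        return "idle"

# Example: a heavy weekly rollup computed at 03:00, posted at 09:00.
weekly_rollup = NotificationSchedule(refresh_hour=3, post_hour=9)
print(weekly_rollup.action_at(3))   # refresh
print(weekly_rollup.action_at(9))   # post
print(weekly_rollup.action_at(14))  # idle
```

The design point is just that the two hours are independent knobs, so the expensive work never has to happen at the same time the audience wants to see it.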
Hopefully that helps. I’m sure we aren’t the only ones providing feedback on a feature like this, so I’m excited to hear what you think!