Cultivating Agile Data Analytics Flow


Traditionally, users seeking “data analytics” would ask for huge tables of data with as many columns as would fit in a view. With Excel as one of their favorite tools, these users were comfortable poring through thousands of rows of information to find an insight because, well, that’s what they were used to.

As more organizations invest in data and move traditional data architectures to contemporary cloud applications like Tableau, data analysts and designers have a lot more horsepower at their fingertips to delight users with something much more meaningful. But DATA DESIGN IS HARD. Many analysts start a view by taking the available data and playing around with how to visualize it, immediately jumping into the toolset to hunt for nuggets of insight gold. But this is backwards – like choosing a solution before you know the business problem you are trying to solve, or finding an answer before you know the question. Users may come to you and say, “I just need a basic table of Grubhub sales by hour.” Assuming you already have the data, a view like this is quite simple and might take only an hour or two to produce. But there is a reason the user needs to understand Grubhub sales by hour, and simply taking the order for that solution without exploring the underlying need will result in the same old experience they’re used to – huge tables of data that must be exported and manually massaged to be useful.
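To make the contrast concrete, here is roughly what that naive request amounts to – a minimal sketch in pandas, assuming a hypothetical orders extract with order_time and order_total columns (the file and column names are our own illustration, not from any real Grubhub feed):

    # A quick-and-dirty "sales by hour" table from a hypothetical CSV extract.
    import pandas as pd

    orders = pd.read_csv("grubhub_orders.csv", parse_dates=["order_time"])
    sales_by_hour = (
        orders.assign(hour=orders["order_time"].dt.hour)
        .groupby("hour", as_index=False)["order_total"]
        .sum()
        .rename(columns={"order_total": "total_sales"})
    )
    print(sales_by_hour)

An hour of work, and exactly the kind of export-and-massage artifact the rest of this post argues against.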

“Data Analytics Flow” is the concept of continuously releasing new insights to the business based on an intimate understanding of its most burning questions. This process requires much more than an hour or two of building a quick-and-dirty tabular view: you need to develop such a deep comprehension of how the business process works that you can articulate the questions users need to ask almost better than they can. One of the most effective ways we’ve found to develop this understanding is a Visualization White Paper, a 1-2 page write-up describing the motivation and intent behind a dashboard.

In the write-up we articulate the business context, the tasks the user is trying to achieve, and the questions they are trying to answer, and we explore the right visualizations to fit those questions. At first this seemed like a daunting extra task that would slow down the build. Soon we started to see the light: pausing to think carefully about which questions you are answering produces a far faster, better initial draft than jumping straight into the data. We learned that just because you COULD visualize something easily does not mean you SHOULD. So we implemented a rule that every new dashboard required a visualization white paper, and we would not start the build until the white paper and design were fleshed out.
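For reference, a minimal white paper skeleton might look like the following (the section names are our own shorthand for the elements above, not a formal standard):

  • Business context – where this dashboard fits in the process and why it matters now
  • User tasks – what the user is trying to accomplish, in their own words
  • Key questions – the specific questions the dashboard must answer
  • Candidate visualizations – the chart types that fit each question, with rough sketches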

Over time, patterns started to emerge in the types of sources we were engineering and the visualizations we were building. For a given feature requiring both engineering and visualization, we generally had the following set of tasks:

1. Elicit key business decisions and process challenges
2. Model Business Data Diagram to understand core data objects and the data domain
3. Model Data Dictionary to elaborate object fields and necessary measures
4. Draft visualization white paper to flesh out business decisions from the given data set
5. Design data schema
6. Design tables & transformations
7. Engineer data pipeline
8. Develop tables & views
9. Load & validate incremental data, with monitoring and alerts (see the sketch after this list)
10. Define visualization dimension, measure, and interaction requirements
11. Develop visualization
12. Review & iterate visualization with users
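As an illustration of step 9, here is a hedged sketch of an incremental load with basic validation, using sqlite3 and pandas. The fct_orders table and its columns are hypothetical stand-ins for whatever comes out of your schema design in steps 5-6:

    # A minimal incremental load with validation, assuming a hypothetical
    # fct_orders warehouse table keyed by order_time.
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("warehouse.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fct_orders "
        "(order_id TEXT, order_time TEXT, order_total REAL)"
    )

    def load_incremental(batch: pd.DataFrame) -> int:
        # Validate the batch before it lands (in production, failures here
        # would trigger an alert rather than just raise).
        assert batch["order_id"].notna().all(), "null order_id in batch"
        assert (batch["order_total"] >= 0).all(), "negative order_total in batch"

        # Load only rows newer than the current high-water mark.
        high_water = conn.execute(
            "SELECT COALESCE(MAX(order_time), '') FROM fct_orders"
        ).fetchone()[0]
        new_rows = batch[batch["order_time"] > high_water]
        new_rows.to_sql("fct_orders", conn, if_exists="append", index=False)
        return len(new_rows)

    batch = pd.DataFrame({
        "order_id": ["a1", "a2"],
        "order_time": ["2024-05-01 11:05", "2024-05-01 12:40"],
        "order_total": [23.50, 41.00],
    })
    print(f"loaded {load_incremental(batch)} new rows")

The high-water-mark pattern keeps reloads idempotent; monitoring and alerting would wrap a function like this rather than live inside it.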

Based on this repeatable process, we developed a phased data analytics estimation template that may be helpful for you. You can download a sample of this Data and Analytics estimation template here; it features a fun example of a movie theater looking to build a data warehouse and expand into subscription offerings.

Getting to a well-articulated white paper takes time, and usually several supporting models, like process flows, to build the understanding you need to be articulate. Because we were running sprints, we had to plan the design conceptualization carefully so we would not be blocked by the time we were slated to begin development. Our flow worked in the following steps:

  • Elicit business needs and craft visualization white paper (1 sprint)
  • Engineer required data (1-2 sprints)
  • Develop and iterate on visualization (1-2 sprints)

Your flow may be different, or the scope of tasks your team needs to complete to derive an insight may be much smaller or larger than the example we provide in the estimation template. But hopefully this gets your gears turning and inspires your team to cultivate their own unique flow, resulting in data insights that will astound and delight your users.

HAPPY DATA TRAILS!
