How Can You Trust Your Data?

In an age where data is multiplying before our eyes and terms like “big data” and “data mashing” are becoming mainstream, it is vital to step back from the marketing hype surrounding BI and analytics and understand the following:

• What are you analyzing?
• What are you looking to achieve?
• And most importantly, who is relying on the outputs of the data you model?

I recently saw that Microsoft is holding its first-ever Data Insights Summit in Seattle this March. Our company, ZAP, will be there showcasing the data-level solutions that allow us to effectively “power” the Power BI platform, transforming complex data needs using Power BI’s visualization and self-service capabilities. What also caught my eye in the lead article advertising the event was that the World Men’s Squash tournament had been held in Seattle in late November, and that some of the fans, coaches, and other supporters had used Power BI to deliver excellent real-time outputs covering results, player profiles, seedings, and a range of other information that added tremendous value to the event.

I am not writing to disparage self-service BI; in fact, I applaud it. Only five years ago, all of this tournament data would have been analyzed in Excel, serving as both the data store and the visualization. Thanks to the capabilities of solutions like Power BI, Tableau, and others, BI can now be an active part of data integration, from the simplest column-and-row table to the most complex manipulation of integrated data sources, all capable of mobile display and all delivering incredible value. However, while the science and tools to analyze your data are readily available, the harder part is understanding your audience, their reliance on the data, the complexity of that data, and the relationships in the data that might not be easily seen at face value; above all, it is making sure the data store you attach to provides the right level of governance, security, authenticity, and control for your audience. Additionally, when you get deeper into the realm of analytical data interrogation across multiple data sources, a level of qualification is required to ensure you understand what you are looking at. After all, anyone can balance a checkbook or manage a spreadsheet, but when it comes to preparing and auditing a company’s financial performance, we hire accountants.

For many years, I have used analogies to explain the differences or likenesses of one market to another. Take, for example, ERP (Enterprise Resource Planning), which is effectively business and accounting software designed to manage a company’s financial and commercial transactions through aligned industry practices that follow regulatory requirements. In this well-known discipline, which most readers of this blog will understand, you have SAP as “The Godfather” of large-scale ERP solutions, challengers such as MS Dynamics AX, and all the way down to solutions like QuickBooks, designed for a totally different customer segment such as a small business or home office. What is interesting is that all of these applications are marketed as ERP, yet it is as clear as the nose on anyone’s face that they are totally different: they require different skills and different investments, and from the outset carry massively different expectations.

This leads me to the title for this article, “Is all data created equally?” Although self-service solutions have revolutionized how quickly data can be analyzed, on their own they have also created all sorts of issues in the corporate world affecting governance, control, risk, and security. Supporting this is the following quote from Gartner, which underpins the real danger faced without the proper level of understanding, education, and governance when dealing with corporately held data:

“Through 2016, less than 10% of self-service BI initiatives will be governed sufficiently
to prevent inconsistencies that adversely affect the business.”

This type of statement shows that even though self-service BI solutions have a real place in the presentation of corporate data, the presentation itself is only the tip of the iceberg: the required controls need to be established at the data level first, so the organization can ensure that everyone who receives data, internally or externally, does so from a “trusted data” environment and not from one person’s interpretation of how that data should be manipulated. As such, it is vital that corporations invest first in solutions that align their data, and only then focus on choosing a corporate BI presentation tool from the variety of self-service tools in the market, like Power BI.
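To make the risk concrete, here is a toy sketch (all names and figures are invented for illustration) of how two analysts working from the same raw data, without a shared, governed definition of a metric, can publish dashboards that disagree on a headline number:

```python
# Hypothetical raw sales rows: (region, amount, refunded?)
sales = [
    ("East", 100.0, False),
    ("East", 40.0, True),
    ("West", 250.0, False),
]

# Analyst A's ad-hoc "revenue": sums every row, refunds included.
revenue_a = sum(amount for _, amount, _ in sales)

# Analyst B's ad-hoc "revenue": excludes refunded orders.
revenue_b = sum(amount for _, amount, refunded in sales if not refunded)

# Two self-service dashboards now report different figures
# for the "same" metric: 390.0 versus 350.0.
print(revenue_a, revenue_b)

# A governed data layer instead publishes one agreed definition,
# so every downstream tool reports the same figure.
def net_revenue(rows):
    """The single, centrally owned definition of revenue."""
    return sum(amount for _, amount, refunded in rows if not refunded)

print(net_revenue(sales))
```

The point is not the arithmetic but the ownership: once the definition lives in one governed place rather than in each analyst’s workbook, the inconsistency Gartner warns about cannot arise in the first place.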

The real challenge for any organization now is achieving a trusted data store that can then be the basis for any and all decisions around presentation. The age-old saying “don’t put the cart before the horse” is incredibly valuable when considering a strategy for analytics and BI in your business. That a vendor can supply you with a web-based data-surfing tool for $10 per month has absolutely no bearing on the decision you must take first: what data, from where, how often, married to what, and controlled and managed by whom. The exciting opportunity is that there are now vendors with “Intelligent Data” solutions that reduce time to value, mitigate risk, and provide deep levels of governance and control, all from productized data solutions available in the market. Productized data solutions, now more than ever, are the “secret sauce” of BI, not the self-service presentation tools.

Building your data foundation on a productized “Intelligent Data” solution enables a company to deliver reliable BI to the business more quickly, at lower risk and cost, and more consistently, with integration to self-service as the icing on the cake.

By Garth D. Laird, Chief Operating Officer at ZAP