Most large organizations today understand the importance of analytics, yet they struggle to decide between a centralized and a decentralized model of analytics implementation. It is important to weigh the pros and cons of both models and to structure the analytics function correctly to gain the competitive advantage that analytics provides in today’s environment.
For most organizations starting up an analytics team, decentralized analytics provides the quickest route to reaping the benefits of the investment. In a decentralized environment, each function and department employs analytics separately: every operational and functional unit has its own analysts, its own silos of data to work on, and its own stream of dedicated analytics requests.
This makes sense as a short-term analytics strategy. Each line of business has direct, full-time access to an analytics resource and gets the quickest turnaround on its requests. More importantly, in today’s organizations data is rarely centralized; it arrives in separate streams from the various applications and business units the organization has acquired over time. A department knows its own data best, including where exactly it sits in the complex web of information that organizations build up. It is therefore less time-consuming to implement analytics separately in each department, and a decentralized model fosters deeper knowledge of each line of business and thus a firmer grip on its data requests.
Yet in the long term, decentralized analytics almost always leads to inefficient implementation and a lack of cross-functional synergies across the organization. Analytics is, in most scenarios, a specialized, knowledge-based activity. Without some level of central coordination, decentralized analytics frequently results in non-strategic investment in resources that become obsolete without proper training and mentoring.
Another inadequacy of the decentralized model is that figures and calculated metrics across functional areas almost never match. The numbers fail to foot on organization-wide reports, and business-rule changes are never reflected consistently across all information sources. A thinly staffed analytics layer in each functional unit also leads to over- or under-utilization of resources over time.
In contrast, a more centralized structure inherently promotes better utilization of resources and data, enhanced communication of information within the organization, and a deeper command over analytics implementation and consumption. In a centralized analytics structure, a single analytics team serves the entire organization and every analytics need in the company. This in turn entails having a Chief Analytics (or Knowledge, or Data) Officer in the CXO layer who leads the delivery, management, and innovation of the analytics function.
Centralizing any function in an organization is crucial when standardization and risk management take precedence over localized control or customization, and this is especially true of analytics. Models should have some level of consistency in design, numbers should be comparable, and even reports should share a comparable format. Numbers reaching senior management or external stakeholders from different lines of business should be identical; otherwise the organization risks wasting resources reconciling them and losing confidence in its data. Sales figures from point-of-sale systems should reconcile with, say, inventory turnover, even though the two may come from entirely disparate applications.
Most importantly, centralization gives the analytics function a degree of autonomy, which ultimately leads to innovation, a deeper corporate understanding of how to draw the most valuable insights from the available data, and answers to questions that aren’t even being asked.