How IT and Business Can Work Together To Maximize Big Data ROI

Capturing big ROI from big data can be elusive. Chuck Currin, the principal data architect at Mather Economics, has some suggestions for how IT and business can work together to improve their odds of big impact.

Tags: alignment, analytics, big data, Hadoop, management, ROI

Chuck Currin
Principal Data Architect


One of the central technologies of the big data movement, Hadoop, is now 10 years old. A technology of that age would typically enjoy much greater mainstream adoption. However, the complexity of the technology stack, along with its rapid evolution, has left adoption lagging behind expectations.


Even as the buzz around big data seems to be at its peak, businesses report that big data projects have yet to deliver on ROI expectations.


A survey of technical managers with operational big data projects revealed that they expected a three- to four-fold return on investment. Actual ROI fell far short; in fact, the projects didn't even break even. The Wikibon survey reported 55 cents of return for every dollar spent. There is clearly a disconnect between expectations and reality. (Follow the Money: Big Data ROI and Inline Analytics, by David Floyer, 2015)
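
To make that gap concrete, here is a quick back-of-the-envelope calculation in Python. The spend figure and the 3.5x expectation midpoint are hypothetical; only the 55-cents-per-dollar return comes from the survey cited above.

```python
# Back-of-the-envelope ROI arithmetic for the survey figures.
spend = 1_000_000                # hypothetical project spend, in dollars
expected_multiple = 3.5          # midpoint of the expected 3x-4x return
actual_return_per_dollar = 0.55  # return per dollar reported by Wikibon

expected_return = spend * expected_multiple
actual_return = spend * actual_return_per_dollar

print(f"Expected return: ${expected_return:,.0f}")   # $3,500,000
print(f"Actual return:   ${actual_return:,.0f}")     # $550,000
print(f"Delivered {actual_return / expected_return:.0%} of expectations")  # 16%
```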


An earlier survey identified reasons that big data projects have historically failed to meet ROI expectations: a lack of relevant business use cases, scarce big data talent, and rapidly evolving, immature technology. (Enterprises Struggling to Derive Maximum Value from Big Data, by Jeff Kelly, 2013)


So, in light of these realities, here are some thoughts on how IT and business can work together to maximize big data ROI.


#1. Confirm that the project technologies align with the business use case. This is essential to big data project success; otherwise, extracting value from big data technologies can be very difficult. To connect the dots between available technologies and planned use cases, business stakeholders should be involved from the start. For some reason, big data projects have typically been treated as the exclusive domain of IT, with no business stakeholders involved. In the end, big data success depends on the alignment of a business use case, the proper tools, and measurable outcomes.


#2. Pick the right project. Let me share some examples of big data projects that have a good chance of success. These are typically analytics applications, including conversion funnels and customer churn models. In these cases, the business can use data and analytics to determine pricing changes and observe their measurable impact.
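
To illustrate why such projects lend themselves to measurable outcomes, here is a minimal conversion-funnel sketch in Python with pandas. The event log and stage names are invented for the example.

```python
import pandas as pd

# Hypothetical event log: one row per user per funnel stage reached.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3, 4],
    "stage":   ["visit", "signup", "trial",
                "visit", "signup",
                "visit", "signup", "trial", "purchase",
                "visit"],
})

stage_order = ["visit", "signup", "trial", "purchase"]

# Count distinct users reaching each stage, then compute the
# step-to-step conversion rate at each stage.
reached = (events.groupby("stage")["user_id"].nunique()
                 .reindex(stage_order, fill_value=0))
conversion = reached / reached.shift(1)

print(pd.DataFrame({"users": reached, "step_conversion": conversion}))
```

A pricing change can then be evaluated by comparing these step-conversion rates before and after the change takes effect.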


#3. Prepare for talent scarcity. Scarcity of talent is a major barrier to big data ROI, especially for the skills needed to apply complex statistical modeling to data. As a result, big data projects are often driven by consulting resources. This presents a real, and often overlooked, impediment to achieving strong ROI. Why? Once the project is delivered and the consultants leave, the big data initiative flounders.


So, any enterprise planning on big data as a core competency needs to plan ahead for staffing. Early in the process, this will probably mean a mix of consultants and internal staff. A good approach to building the right team is to work closely with trusted consultants to hire employees with an aptitude for big data. Staffing should also target skills that align with business goals, so identify those skills early in the project lifecycle.


#4. Make sure to include technology management and technology assessment in your big data initiatives. Much of big data technology remains a work in progress and can be experimental or immature. Moreover, many big data technologies are based on open-source frameworks.


These considerations stand in contrast to conventional data warehousing, which has many commercial vendors and has reached market maturity. As a result, businesses weighing big data investments need technology management that monitors technology maturity, along with policies that ensure big data technologies are adopted only when they are mature, useful, and able to deliver measurable business impact. This means pairing a solid business use case with the proper technology resources, judged against the industry's adoption of the technology.


In summary, the right way to assess any technology adoption is within the context of your particular business, weighing the risks and rewards of early adoption. Big data is no different in this regard. Decision-makers must decide whether to be an early or late mover, based on whether relevant use cases align with company goals and whether early adoption can create a sustainable competitive advantage.


But even the conversion funnel and churn model examples mentioned earlier can take multiple iterations to optimize. Here, the indiscriminate pooling of data into a data lake provides ample opportunity to recoup big data technology investments through continuous improvement of business models. A data lake enables staff to expose and integrate additional metrics into these models much more quickly, greatly increasing the likelihood of success.
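
As a minimal sketch of that point, assume a churn-model feature table and a metric that already sits in the data lake (both tables and all column names are invented here). Adding the metric to the next model iteration then becomes a simple join rather than a new ingestion project.

```python
import pandas as pd

# Existing churn-model feature set (hypothetical).
features = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "tenure_months": [24, 3, 11],
    "monthly_spend": [49.0, 19.0, 99.0],
})

# A new metric pulled from the data lake -- support-ticket counts that
# were captured long before anyone knew a churn model would need them.
support_tickets = pd.DataFrame({
    "customer_id": [101, 103],
    "tickets_90d": [1, 6],
})

# Because the raw data is already pooled, enriching the model is a join.
features = features.merge(support_tickets, on="customer_id", how="left")
features["tickets_90d"] = features["tickets_90d"].fillna(0).astype(int)
print(features)
```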


For the future, there are promising new technologies that will enable quicker integration into the Hadoop stack. The notion of "containerization," popularized by products such as Docker, is about to be realized in Hadoop. Containerization allows technology staff to package technology components and their dependencies into a container that can be deployed and monitored as a self-contained unit. Employees at Yahoo are working on "assemblies," which follow the notion of containerization and will allow developers to deploy self-contained Hadoop stack applications on Hadoop's YARN without needing a Ph.D. in computer science.


These developments should prove very significant and could pave the way to much faster (and easier) Hadoop adoption and better ROI. In the meantime, we hope the above recommendations for improving big data outcomes prove valuable.



Chuck Currin is the principal data architect at Mather Economics. He is responsible for understanding emerging and evolving data technologies and translating business requirements into solutions and data architectures that maximize ROI.



