
Skeptics rush to judge before data is in

2014-04-21 10:44 China Daily

Companies must be fluid, flexible and ready to move to different solutions

There has been apprehension about big data and whether its benefits are real, but it is too early to pass such judgment.

There is no doubt that big data analysis can yield huge benefits. But, as with most worthwhile pursuits, it is something that will happen over time.

Over the past three years, I have spent a lot of time developing business intelligence and analytics solutions that have helped companies save time and money, including substantially improving time to market.

In my experience, companies need to take four important steps to unlock value from big data implementations and steer clear of the "trough of disillusionment".

The first and foremost step is to "think even bigger". Companies must think of a larger, more comprehensive business model and figure out ways to populate it with multiple data sources. Doing so will help them see a much bigger picture.

At the same time, companies must envisage the kind of infrastructure needed to support data at that scale, and plan for that data to grow tenfold or more within the same parameters.

This is precisely what Oregon Health & Science University in the US is doing on its big data project of speeding up the analysis of human genomic profiles. The project is expected to help create personalized treatments for cancer and support many other scientific breakthroughs.

At about a terabyte of data per patient, multiplied by potentially millions of patients, OHSU and its technology partners are developing infrastructure to handle the massive amount of data involved in sequencing an individual human genome and tracking changes over time.
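To put that scale in perspective, here is a back-of-the-envelope calculation in Python. The per-patient figure comes from the project; the patient count and the number of repeat sequencings are illustrative assumptions.

    # Rough storage estimate for genomic profiling at scale.
    # ~1 TB per profile is the project's figure; the patient count and
    # snapshot count below are illustrative assumptions, not OHSU's plan.
    TB_PER_PROFILE = 1.0       # ~1 terabyte per sequenced genome
    PATIENTS = 2_000_000       # assumed: "potentially millions"
    SNAPSHOTS = 3              # assumed: re-sequencing to track changes over time

    total_tb = TB_PER_PROFILE * PATIENTS * SNAPSHOTS
    total_pb = total_tb / 1024  # 1 PB = 1024 TB
    print(f"Estimated raw storage: {total_tb:,.0f} TB (~{total_pb:,.0f} PB)")
    # Estimated raw storage: 6,000,000 TB (~5,859 PB)

Even under conservative assumptions, the raw data lands in the petabyte range, which is why the infrastructure must be designed for that scale from the start.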

With breakthroughs in big data processing, the cost of such sequencing could fall to as low as $1,000 per person for what is now elite research, which means demand will skyrocket, and so will the data.

The second important step for companies is to find the data relevant to their business.

It is also important to learn from other business leaders about the challenges they have encountered and how they have worked to boost business impact. Companies should then search for the data that can help solve their specific business problems.

This is exactly what lies behind Intel's internal big data initiatives. Some of the work has been on helping the Intel sales team find the right resellers and the suitable products for them. In 2012, this project helped generate an estimated $20 million in new revenue and opportunity value, with more expected for 2013.

It is also important for companies to take a flexible approach. We are in a phase of rapid innovation, and big data projects differ vastly from other major implementations, such as enterprise resource planning modules.

From a technology standpoint, companies must be fluid, flexible and ready to move to a different solution if the need arises. For example, the database architecture built to collect smart grid energy data in Austin, Texas, with Pecan Street Inc, a nonprofit group of universities, technology companies and utility providers, is now in its third iteration.

As smart meters generate more and more detailed data, Pecan Street Inc is finding new ways for consumers to use less energy and helping utilities better manage their grids. But Pecan Street has also had to stay flexible, repeatedly changing its infrastructure to meet demand.

The bottom line is that companies should be ready to adapt to situations and necessities. If you think you know what tools you need to build big data solutions today, a year from now it will be a different story altogether.

Having said that, companies should also take care to connect the dots; in other words, to match the data with core operations. At Intel, we realized there could be tremendous benefit in correlating design data with manufacturing data.

A big part of our development cycle is "test, reengineer, test, reengineer". There is value in speeding up that cycle. The analytics team began looking at the manufacturing data - from the specific units that were coming out of manufacturing - and tying it back to the design process. In doing so, it became evident that standard testing processes could be streamlined without negatively impacting quality.

We used predictive analytics to streamline the chip-design validation and debug process by 25 percent and to compress processor test times. By making testing more efficient, we saved $3 million in 2012 on one line of Intel Core processors. Extending this solution into 2014 is expected to reduce spending by $30 million.
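To illustrate the general approach, here is a minimal, hypothetical sketch in Python with synthetic data; it is not Intel's actual pipeline, and every name and figure in it is assumed. The idea is to train a model on measurements from earlier test stages, then skip a later, expensive test for units the model is almost certain will pass.

    # Hypothetical sketch: compressing test time with predictive analytics.
    # Synthetic data throughout; the real pipeline is not public.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Assumed historical data: per-unit measurements from earlier test
    # stages (features) and whether the unit passed the final test (label).
    X = rng.normal(size=(5000, 8))    # e.g. voltages, timings, leakage
    y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=5000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Skip the expensive final test only when the model is very confident
    # the unit will pass; everything else still goes through full testing.
    p_pass = model.predict_proba(X_test)[:, 1]
    skip = p_pass > 0.99
    print(f"Units skipping the final test: {skip.mean():.1%}")
    print(f"Escapes (skipped but would fail): {(skip & (y_test == 0)).mean():.3%}")

The confidence threshold is the key design choice: it trades tester time saved against "escapes" (units skipped that would actually have failed), which is why any streamlining must be validated against quality, as noted above.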

We are only at the beginning of understanding how we can use big data for big gains. Far from being disillusioned with big data, we find many exciting possibilities as we look at large business problems holistically and see ways to help both the top line and the bottom line, all while helping our IT infrastructure run more efficiently and securely. It is not easy to get started, but it is certainly well worth the time and effort.
