
A colleague leans over, nods sagely, and whispers: 'Data is the new oil, you know...' Here's how you respond

Here's the real score on what's happening with information in business today

Analysis We keep hearing data is the new oil. It’s one of those annoying axioms that holds a germ of truth, and is repeated by every marketing executive who’s read a business periodical this year.

The notion of value in data is well established. Consultancy McKinsey reckoned some years back that data had become an important factor of production “alongside labor and capital.” Gartner said last year that, by 2022, 90 per cent of corporate strategies would explicitly mention information as a “critical enterprise asset and analytics as an essential competency.” According to Gartner: “Leading organisations in every industry are wielding data and analytics as competitive weapons.”

Organisations are therefore looking for ways to extract value from their data, and while that may sound straightforward, IT must navigate several challenges to enable this.

Striking black gold

Broadly speaking, there are two key ways data can bring value to an organisation: it can reveal the performance of anything from IT systems to sales, and it can be used to identify new markets and war-game new business or technology scenarios, such as new products and services before you build them. Data has, in other words, become a means of tuning your IT infrastructure to support the business, and also a means for the business to take on the competition.

The key enabler of that is “big data,” and the job of mastering it in support of the business has fallen to IT.

It’s a big world

Not so long ago, pretty much every piece of information a company maintained was in a particular format – structured data – and all strategy proceeded from that point. That data was housed in relational databases using consistent formats, such as names, addresses, zip or post codes, and phone numbers. Most businesses store much of their essential data in these databases: sales figures, stock control, financial transactions, and more. A multi-billion-dollar industry has sprung up around relational databases, making them a relatively straightforward and widely accepted technology. If it’s in a relational database, then an SQL query will do the trick.
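To make that concrete, here’s a minimal sketch using Python’s built-in sqlite3 module. The sales table, its columns, and the figures are invented purely for illustration; the point is simply that a structured question gets a direct SQL answer.

    # Minimal sketch: a structured question answered with an SQL query,
    # using Python's built-in sqlite3 module. Table and data are made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway in-memory database
    conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        [("EMEA", "widget", 1200.0), ("APAC", "widget", 800.0), ("EMEA", "gadget", 450.0)],
    )

    # One query, one answer: revenue by region, biggest first.
    query = "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY total DESC"
    for region, total in conn.execute(query):
        print(region, total)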

The challenge for today’s IT operations, though, is that relational data is no longer the only kind on their plate. The rise of unstructured data has changed how companies must store and analyse their data in order to serve the greater needs of the business. Unstructured data is pervasive: it includes elements such as email, video, social media posts, and messages – companies could, for example, be interested in tracking interactions from Facebook and Twitter posts through to website messages and chats. We’ve seen the rise of NoSQL databases as a counterpoint to SQL to house these data types, which – as the name implies – are not generally accessed using SQL queries.
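As a rough sketch of what that looks like in practice, here’s how heterogeneous records might be dropped into MongoDB – one example of a NoSQL document store – via the pymongo driver. The database, collection, and fields are invented, and a MongoDB server is assumed to be running locally.

    # Sketch: unstructured records of different shapes stored side by side
    # in a NoSQL document store (MongoDB via pymongo, as one example).
    # Assumes a local MongoDB server; names and fields are illustrative.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    interactions = client["crm"]["interactions"]

    # No shared schema required: a tweet, an email and a chat transcript
    # can live in the same collection.
    interactions.insert_many([
        {"type": "tweet", "user": "@example", "text": "Love the new release!"},
        {"type": "email", "from": "customer@example.com", "subject": "Invoice query"},
        {"type": "chat", "session": "abc123", "messages": ["Hello", "How can I help?"]},
    ])

    # Queried through the store's own API rather than an SQL statement.
    for doc in interactions.find({"type": "tweet"}):
        print(doc["user"], doc["text"])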

It is now increasingly accepted that working with both types of data lets companies gain value, though the complexity of the task means that more traditional methods won’t be up to the job. Over the years, we’ve seen SQL and relational vendors develop tools that let their products pull in and analyse data from NoSQL stores.

One way to work with unstructured data is to identify common elements across all objects within the data set, and use those elements to access the data as if it were structured. This can be a challenge, as it involves extra programming, particularly when working with a multitude of different data formats. For instance, each database entry may have some kind of timestamp, which can be used to locate particular objects within the data store – although you’ll have to deal with all the various ways times and dates can be specified.
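Here’s a rough illustration of that approach in Python. The record shapes, field names and timestamp formats are hypothetical, but they show how one common element can be normalised and then used to sort and filter an otherwise mixed bag of data.

    # Sketch: using one common element (a timestamp) to treat mixed,
    # unstructured records as if they were structured. Field names and
    # formats are hypothetical; real data would need many more formats.
    from datetime import datetime

    KNOWN_FORMATS = [
        "%Y-%m-%dT%H:%M:%S",   # 2019-03-01T14:30:00
        "%d/%m/%Y %H:%M",      # 01/03/2019 14:30
        "%b %d %Y %I:%M%p",    # Mar 01 2019 02:30PM
    ]

    def parse_timestamp(raw):
        """Try each known format until one fits; extend as new sources appear."""
        for fmt in KNOWN_FORMATS:
            try:
                return datetime.strptime(raw, fmt)
            except ValueError:
                continue
        raise ValueError(f"Unrecognised timestamp: {raw!r}")

    records = [
        {"source": "email", "received": "2019-03-01T14:30:00", "subject": "Order #42"},
        {"source": "chat",  "received": "01/03/2019 14:30",    "text": "Where's my order?"},
        {"source": "tweet", "received": "Mar 01 2019 02:30PM", "text": "@shop any update?"},
    ]

    # Once normalised, the timestamp lets us sort and filter mixed records.
    for rec in sorted(records, key=lambda r: parse_timestamp(r["received"])):
        print(parse_timestamp(rec["received"]).isoformat(), rec["source"])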

Enter machine learning

Such is the growth in the volume of data, combined with the growing complexity of the IT systems generating that data or supporting the business, that it’s outgrowing humans’ ability to keep up. That’s fueling the introduction of automation: machines that process the data and identify patterns or predict future actions. That automation has a name: artificial intelligence – or, specifically, machine learning. It can be used in a number of scenarios, such as tuning system performance, spotting patterns of fraud in financial services transactions, or route planning in logistics for speedier delivery and better fuel economy. One recent McKinsey report found AI improved on “traditional analytics techniques” in 400 use cases across 19 industries and nine business functions.
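As a flavour of the fraud-spotting case, here’s a toy sketch using scikit-learn’s IsolationForest to flag unusual transactions. The features, figures and contamination setting are invented; a production fraud model would be a far bigger undertaking.

    # Toy sketch: machine learning flagging unusual transactions with
    # scikit-learn's IsolationForest. Data and features are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Columns: transaction amount, hour of day. Mostly ordinary activity...
    normal = np.column_stack([rng.normal(50, 15, 500), rng.normal(13, 3, 500)])
    # ...plus a few large transactions at odd hours of the morning.
    odd = np.array([[900.0, 3.0], [1200.0, 4.0], [750.0, 2.0]])
    transactions = np.vstack([normal, odd])

    model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
    flags = model.predict(transactions)  # -1 marks suspected outliers

    for (amount, hour), flag in zip(transactions, flags):
        if flag == -1:
            print(f"Flag for review: amount={amount:.2f}, hour={hour:.1f}")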

AI, however, isn’t the end state – it’s more a workhorse, in that there remains a significant role for humans conducting valuable analysis to augment the computers. That means people analysing huge data sets using the right techniques and tools. Most organisations, however, are behind in this field. Gartner reckons 87 per cent of organisations possess what it calls “low business intelligence and analytics maturity”, which creates huge obstacles for those who want to exploit analytics tools and technologies such as machine learning and other branches of AI.

We don't want to blame IT, but...

Part of the responsibility for this "low maturity" of business intelligence lies with IT operating a primitive or aging infrastructure. What does that look like? Business intelligence that’s mostly based on reporting, and an information process in which the IT team handles content authoring and data modelling centrally. It will involve the presence of data silos, too: that is, islands of information.

Historically, IT departments have been responsible for pulling data from various sources and silos but it wasn’t exactly a rapid process. It was by no means uncommon for a simple data request to take a month – or even longer – to be answered. This time lag is no longer acceptable. In the era of self-service and with the rise of digitalisation where IT and business managers work jointly to analyse challenges and problems, the model of IT running the data queries is outdated.

This is why we are seeing IT teams overhaul data architectures, and why we have seen a growth in data management: organizing data so it can be managed and accessed quickly without tying up resources.

The question of what a data management strategy actually is, however, can be a challenge for IT. DAMA International, the Data Management Association, has tried to offer guidance, defining data-management strategy as the development of architectures, policies, practices and procedures to manage the data lifecycle, though even that can be interpreted in different ways. Does it refer to the way data is stored, whether in traditional on-premises systems or in the cloud? Does it refer to the way data is held for deeper analysis? Does it mean transforming data from a variety of sources into a common format?

There are so many interpretations of what data management means that no two companies could agree on the specific terms, let alone the implementations. Picking a way through this, arriving at a clear definition, and rolling out suitable policies and appropriate technologies is unavoidable in this day and age, and a clear priority for IT.

As the data layer is re-architected, we have seen the growth of cloud as a storage platform, and changes in the on-premises market, too. The rise of all-flash arrays has had a profound effect on the storage market, delivering performance beyond what was achievable just a couple of years ago.

Also, we’re seeing growing use of hyper-converged systems, where compute, storage, and networking are combined into one entity and the underlying infrastructure becomes software-defined. Increasingly, hyper-convergence means that, with IT working closely with the business units, new services can be spun up and taken down without waiting for dedicated systems to be provisioned.

Regulate this

But data is not a wild west where anything goes: as its value has grown, so regulators have awoken to the need to protect people’s personal information.

Regulation is not new, and many organisations have long had to work within some form of regulatory environment: the finance industry, for example, has to cope with Sarbanes-Oxley or Basel II. These industry-specific regimes put another set of requirements on organisations, making it a tougher job to meet all their obligations.

Now, however, we have Europe's General Data Protection Regulation, which has completely transformed the way all companies think about data. For the first time, any company, regardless of sector, has to think about how its customers would react to the way their data is being handled – and, more importantly, work out how to disclose to customers how that information is handled. This has meant devising ways to get ready access to data – some of which may be in non-digital form – and setting up a whole new set of procedures to make the task easier. It is not trivial: failing to deliver a system that can support things such as customer access requests, prevent data loss, or detect and report theft has huge implications, with potentially hefty fines for the business.
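At the code level, supporting a subject access request might, very roughly, look something like the toy sketch below: everything held on one person is pulled from several stores into a single disclosable bundle. The store names, lookup functions and fields are all hypothetical.

    # Toy sketch: assembling a subject access request response by pulling
    # one customer's records from several stores into a single bundle.
    # All store names, lookup functions and fields are hypothetical.
    import json
    from datetime import datetime, timezone

    def fetch_crm(customer_id):              # stand-in for a real CRM lookup
        return [{"name": "A. Customer", "email": "a.customer@example.com"}]

    def fetch_orders(customer_id):           # stand-in for an order-history query
        return [{"order": 1042, "total": 59.99}]

    def fetch_support_tickets(customer_id):  # stand-in for a ticketing system
        return [{"ticket": "T-77", "status": "closed"}]

    SOURCES = {"crm": fetch_crm, "orders": fetch_orders, "support": fetch_support_tickets}

    def subject_access_report(customer_id):
        """Gather everything held on one customer into a single JSON bundle."""
        return json.dumps({
            "customer_id": customer_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "data": {name: fetch(customer_id) for name, fetch in SOURCES.items()},
        }, indent=2)

    print(subject_access_report("cust-0001"))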

New engineers

We can quibble about the metaphors and clichés, but there’s no doubt “data” today has more value – actual or perceived – than ever before. That’s feeding changes in storage and analysis as IT must overhaul dilapidated or siloed infrastructures. That’s seen the role of IT evolve, too: no longer the facilitator of reports or the master of silos, IT has become part of the business and is responsible for building an infrastructure that’s automated, invisible and scalable.

If data is the new oil, IT has become the mechanics of a new business engine. ®
