Month: March 2013

Strategy In A Big Data World

Strategy is creating a unique value in a unique way. I learned this while working on a project with Michael Porter, the world’s foremost authority on competitive strategy. The video above tells the story. Or you can read this.

If you create a unique product or service by doing things a rival could copy, you don’t have a strategy. You have a temporary advantage that the competition is likely to erode. Conversely, if you do things none of your competitors do to create a product or service they can make through other means, you don’t have a strategy. You again have a temporary advantage, perhaps in differentiation or in lower costs. But it’s likely to fade away.

The implications for business software are profound. When a company buys software that its rivals can also buy, say, to reduce inventory costs or speed up design processes, it gains no competitive advantage. It may become more efficient compared to its old self, but it’s just keeping pace with competitors. This doesn’t mean that companies can stop buying software to make processes more efficient. Far from it. Industry best practice is a rising tide that sinks all leaky boats. But firms should realize that operational effectiveness rarely creates competitive advantage.

So, if buying third-party technology puts companies on a productivity treadmill, what can they do? How can they avoid running faster just to stand still? One answer is to customize the third-party software you install so that your implementation looks like no other. It’s tailored to your unique ways of working. However, this kind of one-off deployment raises consulting fees and increases maintenance costs over time, because any upgrade or modification becomes uniquely difficult.

Another answer is to build your own technology. This is the path taken by the online giants Google, Amazon and Facebook. Each of them has created proprietary technologies like custom scanners for digitizing books (Google), massively scalable datacenter capacity that can be resold in little bites (Amazon), or a new kind of database (Facebook). However, this option is open to only a select few because it requires extraordinary technical talent, which is in short supply. Any company taking this approach has to not only hire in-demand experts but keep them, because few technological innovations remain uncontested over time. Today’s innovation needs tomorrow’s innovation on top to stay ahead. Witness the competition Amazon Web Services now faces from the likes of Microsoft, IBM, Oracle, and VMware.

There is another answer, and this one is open to companies of all shapes and sizes. Use a new generation of software that dramatically cuts the cost and effort of getting a unique perspective on data the moment you realize you want it. It may sound like you can do this with a spreadsheet. And maybe you can with small amounts of homogeneous data. But when you want to know what social media can tell you about distribution shortages, you’ve got a question that can’t be answered on the desktop. This is what non-relational technologies like Hadoop, NoSQL databases, and Endeca Information Discovery are all about.

Companies will still buy this capability from software makers, whether on premises or out of the cloud, meaning that it’s available to rivals. But, unlike traditional software, which requires you to organize data ahead of time based on how you intend to use it, this new software lets you organize any data at the moment of use, at the point of use. One is a model-first world that excels in standardizing and automating routine processes. The other is a data-first world that excels in enhancing human discretion at the scale of modern organizations.
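The difference is easy to see in miniature. Here is a toy sketch of the data-first idea (the records and field names are hypothetical, not drawn from any real system): no schema is declared ahead of time, and the question about shortages is imposed on the raw records only at the moment it is asked.

```python
# Data-first in miniature: loosely structured records (no two share the
# same fields), interpreted only when the question arrives. All records
# and field names below are invented for illustration.
import json

raw = [
    '{"source": "twitter", "text": "store is out of stock again", "city": "Austin"}',
    '{"source": "blog", "body": "great review of the new model"}',
    '{"source": "forum", "text": "shipment delayed, shelves empty", "rating": 2}',
]

# A model-first system would demand a fixed schema up front. Here we
# parse each record as-is and pose the question ad hoc.
records = [json.loads(line) for line in raw]
shortage_terms = ("out of stock", "shelves empty", "delayed")
hits = [r for r in records
        if any(t in r.get("text", r.get("body", "")) for t in shortage_terms)]
print(len(hits))  # 2 of the 3 records mention a shortage
```

Records that happen to lack a usable field simply fall out of the answer; nothing had to be normalized before the question existed.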

Which brings us back to strategy. Professor Porter also taught me that strategy is about choice. Sustainable competitive advantage comes from people choosing to create a unique value, choosing a unique way to create it, and choosing and choosing again when the competitive environment changes. A new generation of technology that helps a company get the perspective it just realized it wants on customers, products, suppliers — even its own processes — will reinforce strategy, rather than driving it out.

Flatworms and Erdős with Bacon

On June 4th this year we’ll celebrate the 15th anniversary of a famous letter to the journal Nature. Start planning your Small World Day festivities now.

In 1998, Duncan Watts and Steven Strogatz wrote about a nifty little effect in networks. Most networks, including the neurons of a flatworm, mathematical paper citations, and Hollywood’s web of actors, are neither regularly nor randomly connected. Instead, they have clusters of tightly-connected nodes, linked together by a few cross-cluster nodes, like Paul Erdős or Kevin Bacon. Watts and Strogatz called these small world networks.

They weren’t the first to think of this idea. That prize goes to a Hungarian novelist from the 1920s, Frigyes Karinthy, who figured we’re all probably connected somehow. They weren’t the first to test the idea. Stanley Milgram, a Harvard psychologist, did that in 1967 with a chain-letter experiment. And they weren’t the first to give this phenomenon a catchy name. John Guare did that in his 1990 play Six Degrees of Separation.

So what, if anything, did Watts and Strogatz do?

They demonstrated that small world networks are a reliable feature of just about any kind of network, including the electrical power grid in the western US. Since then the effect has been demonstrated in Twitter and Facebook, where people are connected by an average of 4.67 and 4.74 hops, respectively. This is closer than the 5.2 hops Milgram found among 296 volunteers but, curiously, not by much.
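The effect itself is easy to reproduce with a few lines of standard-library Python. This is a toy sketch, not the exact model from the Nature letter: start with a regular ring lattice, which is clustered but has long paths, then add a handful of random shortcut edges — the Erdős and Bacon nodes — and watch the average shortest path collapse.

```python
# A toy version of the Watts-Strogatz observation: a ring lattice has
# long average path lengths, but a few random "shortcut" edges shrink
# them dramatically. Standard library only.
import random
from collections import deque

def ring_lattice(n, k):
    """Each of n nodes connects to its k nearest neighbors on each side."""
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            graph[i].add((i + j) % n)
            graph[(i + j) % n].add(i)
    return graph

def avg_path_length(graph):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(42)
g = ring_lattice(200, 2)           # regular: clustered, but a "large" world
before = avg_path_length(g)
for _ in range(10):                # rewire in a handful of shortcuts
    a, b = random.sample(range(200), 2)
    g[a].add(b)
    g[b].add(a)
after = avg_path_length(g)
print(before, after)               # ten shortcuts shrink the average sharply
```

Adding edges can only shorten paths, but the striking part is how few shortcuts it takes: on this 200-node lattice the average distance starts around 25 hops and falls steeply after just ten random links.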

This seems encouraging at first. The world is more connected. People are .46 to .53 hops closer. Peace is at hand. But Guare pointed out the problem with this kind of connectedness: “Six degrees of separation between me and everyone else on this planet. But to find the right six people.”

It’s not really a small world after all. It’s many small worlds, loosely joined (apologies to David Weinberger).

Herb Simon, a pioneer of what we now call behavioral economics, saw this problem at work in human decision making. For Simon, the factors we weigh when making choices are connected. But not every factor is connected to every other factor. He used buying a car as an example. When you shop for a car, you consider factors like how much you make and how you like to live. But you might not consider others, like whether you might move to a different city where you won’t need a car or the relative merits of spending your money on entertaining friends at dinner versus getting the sport package.

Simon summed it up: “We live in what might be called a nearly empty world — one in which there are millions of variables that in principle could affect each other but that most of the time don’t.” Everything’s connected, but we’re constantly looking for the right set of connections to focus on at the moment. Six degrees of separation. But to find the right six.

Why does this matter? Because most of the analytical technology we use to help us make better decisions assumes we know the right factors ahead of time. We build models, predetermining the factors. We fill the models with conforming data. For things we already understand really well, this works. For things we don’t, like derivatives trading, not so much.

We need a new set of analytical tools to help us find the right six degrees of separation between possible choices and their potential outcomes. This is not a question of building better models. It’s a question of better exploration of unmodelled connections.