Humans or machines: Which makes better business decisions?
(Spoiler alert: both)
Within the past year, a series of unrelated technical glitches brought the New York Stock Exchange (NYSE) to a grinding halt for more than three hours, delayed hundreds of United Airlines flights and took down the Wall Street Journal’s home page.
Experts agree the damage done was relatively minor, but the episodes reopened the question of whether we have grown too dependent on technology, and specifically on automation that makes important decisions for us.
There’s no doubt automation is a boon in many respects. From next-day package delivery to ATMs to safer industrial workplaces, automation definitely makes our lives easier and safer in a number of ways.
Some even claim that automation can save local jobs by helping factories remain cost-competitive with cheaper overseas labor. In many cases, computers can outperform humans at tasks such as legal discovery, medical diagnosis and evaluating job candidates.
On the other hand, increased automation creates new risks for society. United Airlines, for instance, pointed to an “automation issue” to explain its recent system-wide shutdown.
On a larger scale, many experts say faulty predictive modeling helped cause the financial crisis of 2008. Statistical models said it was unlikely that a critical mass of homeowners would default on their mortgages all at the same time. But the models rested on one bad assumption: homeowners did default en masse, and the resulting collapse of mortgage-backed securities triggered a global financial crisis.
So what is the answer? Should there be a limit to how much automation we accept into our lives, or should we hand more and more tasks to computers, willy-nilly?
It’s a false dichotomy.
Automated technologies and predictive modeling have been around for hundreds of years, and when it comes to business applications such as predicting market behavior, it’s actually a combined approach that reliably leads to the strongest results. Computer models bring scale and speed, and they can help remove human biases from our decisions. Human beings bring insight, imagination and subtler forms of pattern recognition.
A well-known example happened in 2005 when two chess amateurs – Steven Cramton and Zackary Stephen – beat several grand masters as well as Hydra, the most advanced supercomputer at the time (more powerful than Deep Blue). They did it through a combination of their own knowledge and computer models, proving that humans working with technology can be smarter than either technology or humans alone.
So, blending these two decision-making mechanisms can lead to better results than either could produce independently. For example, automated models can measure and calibrate human analysts’ predictions to find the most reliable experts, then adjust their estimates to account for their relative accuracy and unique biases. This “model of models” can be more accurate than the humans or machines alone.
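To make the idea concrete, here is a minimal sketch of one way such a “model of models” could work: each analyst’s forecast is first corrected for that analyst’s systematic bias, then weighted by how accurate the analyst has been historically. The function name and the weighting scheme are illustrative assumptions, not a description of any particular firm’s system.

```python
def combine_forecasts(forecasts, history):
    """Blend analyst forecasts into one estimate.

    forecasts: {analyst: current estimate}
    history:   {analyst: list of (estimate, actual) pairs from past calls}
    """
    adjusted, weights = {}, {}
    for analyst, estimate in forecasts.items():
        errors = [est - actual for est, actual in history[analyst]]
        bias = sum(errors) / len(errors)                 # systematic over/under-shoot
        mse = sum((e - bias) ** 2 for e in errors) / len(errors)
        adjusted[analyst] = estimate - bias              # debiased current estimate
        weights[analyst] = 1.0 / (mse + 1e-9)            # more consistent => more weight
    total = sum(weights.values())
    return sum(adjusted[a] * weights[a] / total for a in forecasts)
```

An analyst who always runs two points high but is otherwise very consistent ends up both corrected and heavily weighted, while a noisy, unbiased analyst contributes less. That is the calibration step in miniature.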
Another approach is to let an experienced human analyst override certain key inputs to a predictive model, so that the model can take into account breaking news, something difficult to capture in a fully automated system. In the same way that the amateur chess players depended on a computer’s insights, machines can draw on human intelligence.
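A sketch of that override pattern might look like the following. The model, its coefficients and the input names (`demand_index`, `supply_index`) are invented for illustration; the point is only that analyst-supplied values take precedence over the automated feed.

```python
def forecast(inputs, overrides=None):
    """Toy price forecast from model-derived inputs.

    inputs:    features computed automatically from data feeds
    overrides: analyst-supplied replacements (e.g. reflecting breaking news)
    """
    merged = {**inputs, **(overrides or {})}   # analyst values win on conflict
    # Illustrative linear model; coefficients are made up for this sketch.
    return 100 + 5 * merged["demand_index"] - 3 * merged["supply_index"]

auto = forecast({"demand_index": 2.0, "supply_index": 1.0})       # 107.0
news = forecast({"demand_index": 2.0, "supply_index": 1.0},
                overrides={"supply_index": 4.0})                  # 98.0
```

Here the analyst, having just read about, say, a large new supply coming to market, bumps `supply_index` up, and the forecast moves accordingly without retraining anything.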
In this sense, automation and algorithms aren’t about technology replacing human beings or human intelligence; they’re about technology being used to assist and inform human intelligence, especially with decision-making. In fact, used correctly, predictive analytics not only doesn’t threaten human skills or destabilize markets, it may have the power to make markets more stable and predictable.
Consider commodities markets. They are notoriously volatile and difficult to time. With a marketplace populated by far more players than the actual producers and consumers of the physical commodity, volatility is built in. Combine that volatility with the complexity of deciding among various processing options, and it is easy to see how predictive models might play an important role. The most profitable decisions are easily missed when “gut instinct” is the decision-making approach.
On top of that, highly trained commodities experts often end up spending too much of their time chasing down data. Predictive modeling holds the promise of giving these experts more time to do what they do best: analysis.
Consider that in the 1970s and 1980s, American Airlines used predictive technologies to increase its market share even as nine other major airlines and hundreds of smaller carriers went out of business. It did this by introducing sophisticated new data-driven pricing strategies. This “revenue management” science used models to maximize capacity on each flight while also maximizing revenue per seat. This was pricing optimization for a highly perishable commodity: once the plane pulls away from the gate, the value of an unsold seat falls to zero. If predictive analytics can do that for an airline, what can it do for commodities markets? Most of these industries aren’t using predictive analytics yet. Could predictive analytics help commodities, and maybe even economies, become more stable?
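The core of that revenue-management science can be sketched with a classic two-fare rule (Littlewood’s rule): keep protecting seats for late-booking full-fare passengers only as long as the expected full-fare revenue beats the certain discount fare. The fares and demand figures below are illustrative assumptions, not American Airlines’ actual numbers.

```python
from math import exp, factorial

def poisson_sf(k, mean):
    """P(demand > k) when full-fare demand is Poisson-distributed."""
    return 1.0 - sum(exp(-mean) * mean**i / factorial(i) for i in range(k + 1))

def protection_level(full_fare, discount_fare, mean_full_demand):
    """Smallest number of seats to hold back for full-fare passengers.

    Stop protecting one more seat as soon as the expected revenue from a
    possible full-fare sale drops below the sure discount-fare revenue.
    """
    q = 0
    while full_fare * poisson_sf(q, mean_full_demand) > discount_fare:
        q += 1
    return q
```

With a $400 full fare, a $150 discount fare and an average of ten late full-fare bookings, this rule protects roughly a dozen seats; raise the discount fare and fewer seats are worth holding back. Each unsold protected seat is a gamble that expires at the gate, which is exactly why a model, rather than gut instinct, sets the number.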
One thing’s for certain: Computers are great at finding connections between data points, while humans are great at assigning meaning to those connections.
Human experts in any given industry, therefore, are well suited to applying predictive analytics to real-world decisions. Remember the amateur chess players? They beat the supercomputer and grandmasters alike by taking information from a computer and then filtering it through their own human decision-making.
Nate Silver has argued that data-driven decisions aren’t actually about making predictions at all – they’re about probabilities. As human beings, we crave certainty; we over-weight a prediction, and we tend to under-weight the error bars around it. But when we depend on computers to give us certainty, bad things can happen, particularly when we try to apply automation and prediction on a grand scale.
Our human decision-making is limited by complexity and human biases, while predictive models are only as good as the assumptions that go into them. The key is to combine the best of both, and keep them in their proper place.