Whether You Call It Trading or Not, Your Company is Taking Market Positions

Introduction: Market Risk at Commodities-Driven Companies
When an investor buys a stock, she is taking a position, placing a bet that the stock’s value will go up so that she can sell it later for a profit. In commodities industries like food processing and mining, many executives see their companies solely as producers of commodity products and believe that playing the ups and downs of the market is risky and best avoided. The reality is that every time one of these companies buys raw materials or sells its products, the company is taking a market position (assuming it has the ability to change the timing of input purchases or output sales). Many companies have responded by trying to offload risk, purchasing raw materials and pricing their products on formula (setting price based on a published price index), but this carries its own risks. The more a company prices its products on formula, the more it allows its competitors to set prices and becomes a passive price taker. Furthermore, it becomes impossible to beat the market, perpetuating mediocrity. The challenge for companies in commodity-driven industries is to understand and monitor the net market position they are actually taking with their purchases and sales. The latest advances in big data analytics allow companies to put tools in place to quantitatively monitor the market risk of their transactions. Avoiding just one or two market moves against you in a year will easily justify heading down this path.

Let’s take a quick look at another example of trading in finance before we explore its similarities to everyday activities at commodities companies. If an investor believes that the value of a commodity—let’s say coffee—is going to go up, he might decide to buy coffee futures. Futures are essentially a bet between the buyer and seller over whether the actual price of a commodity will be higher or lower than its “futures price” at a certain date in the future. If the price of coffee rises between the date the futures contract is purchased and its settlement date, the investor makes money; but if it falls, the counterparty in the contract makes money, having sold coffee for a higher price than it is actually worth. Futures contracts are often used at commodities companies to manage risk, locking in a price floor for sellers and a price ceiling for buyers. But futures are also used speculatively, to place bets on the future price of a commodity.
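
To make the payoff concrete, here is a toy calculation of the bet embedded in a long futures position; the prices and the 37,500 lb lot size are used purely for illustration.

```python
# Toy illustration of the bet embedded in a coffee futures contract.
# Prices and the 37,500 lb lot size are hypothetical, for illustration only.
CONTRACT_SIZE = 37_500           # lbs of coffee per contract
futures_price = 1.80             # $/lb agreed today for the settlement date
settlement_price = 1.95          # actual $/lb when the contract settles

# The buyer (long) gains when the settlement price exceeds the futures price;
# the seller (short) takes the mirror-image loss.
long_pnl = (settlement_price - futures_price) * CONTRACT_SIZE
print(f"buyer P&L:  ${long_pnl:,.0f}")     # positive here, because the price rose
print(f"seller P&L: ${-long_pnl:,.0f}")
```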

Market Risk at Meat Processors
Imagine a meat packer that slaughters a set number of cattle each week, processing the animals to produce beef which it sells to customers. The packer could choose to buy all of its cattle in the spot market, taking delivery and immediately slaughtering the animals, processing the meat, and selling the beef. Packers know that if prices rise dramatically they may not be able to purchase enough cattle at a price that allows them to fulfill their orders profitably, so many choose to enter long-term purchase agreements with suppliers at a fixed price, or at a fixed offset from the spot market price. Every meat packer makes a choice between alternative buying and selling transactions, and that choice is a form of speculation, because market movements after the decision is taken will make one alternative more profitable than the other. Just like a futures trader, the company is making a bet on the price of cattle. If the price falls, the packer who chose to wait and purchase cattle in the spot market will look brilliant; if the price happens to rise, the packer who purchased on a long-term contract ends up getting a deal.
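
A toy comparison makes the embedded bet explicit; the cattle prices below are hypothetical.

```python
# Hypothetical comparison of the two procurement choices under two price scenarios.
# Prices are in $/cwt of live cattle and are illustrative only.
contract_price = 185.0                              # fixed long-term contract price
spot_scenarios = {"price falls": 170.0, "price rises": 200.0}

for scenario, spot_price in spot_scenarios.items():
    premium = contract_price - spot_price           # what the contract buyer paid vs. spot
    winner = "spot buyer" if premium > 0 else "contract buyer"
    print(f"{scenario}: spot {spot_price:.2f} vs. contract {contract_price:.2f} "
          f"-> {winner} comes out ahead by {abs(premium):.2f}/cwt")
```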

Market Risk at Mining Companies
A similar dynamic exists on the sales side of natural resources companies. Imagine a mining company deciding which offer to accept for a shipload of iron ore. Iron ore is normally priced on formula, meaning the customer agrees to pay the mining company the average price per ton for iron on a specified index over an agreed-upon period of days, called the quotation period or QP. Different buyers prefer different QPs (in some cases, this is merely a product of a country’s business tradition). A salesperson at a mining company may have the choice between a 1-month forward QP and a trailing 2-week QP. If the salesperson believes the price will increase over the next month, he will want to take the 1-month forward QP so that the price he is paid is calculated as the index rises. If he believes the price will fall, he will want to use the trailing 2-week QP, where the price is calculated before prices fall. Deciding between these two alternatives is by definition taking a market position. In other words, all commodity buyers and sellers are traders.
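
Here is a minimal sketch of how the two QP choices settle against an index in a rising market; the index prints are invented for illustration.

```python
# How the two QP choices settle against a published index (index values are invented).
import statistics

index_last_two_weeks = [98.0, 99.5, 101.0, 102.5]          # trailing 2-week window
index_next_month = [104.0, 106.5, 108.0, 110.0,            # forward 1-month window,
                    111.5, 113.0, 114.5, 116.0]            # in a rising market

trailing_qp_price = statistics.mean(index_last_two_weeks)
forward_qp_price = statistics.mean(index_next_month)

print(f"trailing 2-week QP settles at {trailing_qp_price:.2f}/ton")
print(f"1-month forward QP settles at {forward_qp_price:.2f}/ton")
# Here the forward QP captures the higher average; had prices fallen instead,
# the trailing QP would have been the better "trade".
```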

How to Get a Handle on Market Risk
Calculating and updating the financial risk of a portfolio of purchase and sales transactions is complicated and generally requires specialized software. Companies that have not adopted purpose-built analytics tools to monitor their market risk might be tempted to “ban trading.” Since we have already shown that all companies are in fact trading when they make a purchase or sale, let’s ask what the effects of these bans actually are: 1) suboptimal performance as more deals are shifted to formula pricing, and 2) executives paying too little attention to their company’s actual market risk. It is difficult to calculate the impact of a market shift on a company with traditional tools (e.g. spreadsheet models); however, recent software innovations can provide executives with a clear picture of how a market move would affect a specific deal or a company’s overall financial performance. By harnessing the power of simulation and cloud computing, a new breed of analytics tools enables companies to see how various probable and improbable market movements would affect a portfolio of transactions and the company’s bottom line.
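
As a rough sketch of what such simulation looks like, the example below revalues a small, made-up portfolio of fixed-price commitments under thousands of random price moves; the positions, volatilities, and normal-shock assumption are all illustrative, not a description of any particular product.

```python
# Simulation-based market-risk sketch: revalue open fixed-price commitments under
# many random price shocks. Positions, prices, and volatilities are made up.
import numpy as np

rng = np.random.default_rng(1)
n_scenarios = 50_000

# Positive volume = fixed-price purchase commitment (long exposure to the market);
# negative volume = fixed-price sale commitment (short exposure).
positions = [
    {"name": "cattle purchases", "volume":  5_000, "price": 1.85, "vol": 0.08},
    {"name": "boxed beef sales", "volume": -4_200, "price": 3.10, "vol": 0.10},
]

pnl = np.zeros(n_scenarios)
for p in positions:
    shocked_price = p["price"] * (1 + rng.normal(0, p["vol"], n_scenarios))
    pnl += p["volume"] * (shocked_price - p["price"])

# Summaries an executive can act on: expected impact and a value-at-risk style tail
print(f"expected P&L impact of market moves: {pnl.mean():,.0f}")
print(f"5% worst-case outcome (VaR):         {np.percentile(pnl, 5):,.0f}")
```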

For commodities companies to truly benefit from their proprietary market knowledge and their investments in forecasts and market intelligence, they need to recognize that they are active traders and begin rigorously analyzing their portfolio of purchases and sales to make smart “trades” which optimize profit subject to specific risk constraints. Trading bans are just words and they don’t actually eliminate risk—in fact, they increase it by lulling executives into a false sense of security and turning companies into price takers. All commodities companies are taking market positions, but companies that “ban trading” aren’t merely ignoring profit opportunities, they are choosing to ignore risks they don’t want to confront instead of understanding and managing them.

Millennial Meat Preferences: What am I Supposed to do with a Chuck Roast?

Millennials, those born between 1981 and 1997, make up the largest living generation in the United States, numbering 74.4 million. Many readers in the meat industry will be surprised to learn that millennials are spending almost 75% more than baby boomers on meat (Midan Marketing, “Millennials, Boomers and Meat: A Closer Look”). Despite this increased spend, many meat industry experts are mystified by millennial meat purchasing habits. As millennials enter the peak of their purchasing power and start building their own families, it is important for people in the meat industry to keep a keen eye on millennial preferences so that they can sell well to this massive market segment going forward.

Millennials eat out more than past generations did, which negatively affects retail meat sales: Morgan Stanley found that 53% of millennials dine out once a week, compared to 43% of the general population. When millennials do purchase meat to prepare at home, they gravitate towards easy-to-prepare meals, creating both challenges and opportunities for the meat industry. Is this because millennials grew up with the conveniences of internet shopping and on-demand cell phone apps like Uber, simply making them lazy, or are millennials just less informed about meat preparation than past generations? Market research indicates the latter. Only 36% of millennials rate themselves as “very knowledgeable” about how to prepare fresh meat, compared to 54% of boomers (Power of Meat 2016). According to a major study on millennial preferences by the Beef Checkoff, “Why Millennials Matter: A Research Overview,” 54% of millennials admit that it is hard to know what cut of meat to choose in the meatcase, and 50% say they stick to buying the same cuts of meat but would diversify if they knew more about the different cuts. While millennials will likely become more knowledgeable as they age, closing the knowledge gap with boomers to some degree, the widespread lack of knowledge about meat preparation among millennial consumers presents a major challenge and opportunity for the meat industry as it aims to improve performance with this key segment.

Despite changing tastes and a preference for dining out, smart retailers can find ways to take advantage of the millennial desire for convenience. A 30% increase in in-store dining and take-out of prepared food at grocery stores since 2008 indicates that prepared meals present a growing opportunity for grocers (NPD, “Millennials Are Driving the Rise of the Grocerant”). Similarly, prepared meats account for 44% of millennial meat purchases compared to 22% for baby boomers (Midan Marketing). As millennials start families and establish themselves as the dominant segment in retail meat sales, the value-added meat category will become increasingly important. The retail meat industry should focus on providing more ready-to-eat food, semi-prepared food (e.g., marinated meats), and instructional information going forward. If meat companies and retailers want to succeed with this new generation of consumers, they should prioritize two things: consumer education, to reduce the current level of confusion among millennials, and product innovation, to create convenient products that millennials find intuitive and less intimidating.

Humans or machines: Which makes better business decisions?

(Spoiler alert: both)

 

Within the past year, a series of unrelated technical glitches brought the New York Stock Exchange (NYSE) to a grinding halt for more than three hours, delayed hundreds of United Airlines flights and took down the Wall Street Journal’s home page.

Experts agree the amount of damage done was relatively minor, but the episodes reopened the question of whether we have grown too dependent on technology, and specifically on automation that makes important decisions for us.

There’s no doubt automation is a boon in many respects. From next-day package delivery to ATMs to safer industrial workplaces, automation definitely makes our lives easier and safer in a number of ways.

Some even claim that automation can save local jobs by helping factories remain cost-competitive with cheaper overseas labor. In many cases, computers are able to outperform humans at tasks such as performing legal discovery, making medical diagnoses, and evaluating the potential of a job candidate.

On the other hand, increased automation creates new risks for society. United Airlines, for instance, pointed to an “automation issue” to explain its recent system-wide shut down.

On a larger scale, many experts say it was faulty predictive modeling that caused the financial crisis of 2008. Statistical models said it was unlikely that a critical mass of homeowners would default on their mortgages all at the same time. But the models were built on one bad assumption: homeowners did default en masse, and the resulting collapse of mortgage-backed securities triggered a global financial crisis.

So what is the answer? Should there be a limit to how much automation we accept into our lives, or should we hand more and more tasks to computers, willy-nilly?

It’s a false dichotomy.

Not only have automated technologies and predictive modeling been around for hundreds of years, but when it comes to some business applications, such as predicting market behavior, it’s actually a combined approach that reliably leads to the strongest results. Computer models bring scale and speed, and they can help remove human biases from our decisions. Human beings bring insight, imagination, and subtler forms of pattern recognition.

A well-known example happened in 2005, when two chess amateurs – Steven Cramton and Zackary Stephen – beat several grandmasters as well as Hydra, the most advanced chess supercomputer at the time (more powerful than Deep Blue). They did it through a combination of their own knowledge and computer models, proving that humans working with technology can be smarter than either technology or humans alone.

So, blending these two decision-making mechanisms can actually lead to better results than either could produce independently. For example, automated models can actually measure and calibrate human analysts’ predictions to find the most reliable experts and adjust their estimates to account for their relative accuracy and unique biases. This “model of models” can be more accurate than the humans or machines alone.
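
A minimal sketch of what such a “model of models” could look like appears below; the analyst histories and the bias-plus-inverse-variance weighting scheme are assumptions for illustration, not a description of any specific product.

```python
# Sketch of a "model of models": calibrate each analyst's past forecasts against
# realized outcomes, then combine their current predictions with accuracy-based weights.
import numpy as np

# Past forecasts (one per period) and what actually happened -- toy numbers
history = {
    "analyst_a": np.array([102, 98, 105, 110]),
    "analyst_b": np.array([95, 96, 99, 104]),
}
actuals = np.array([100, 97, 104, 108])
current = {"analyst_a": 115, "analyst_b": 107}   # this period's raw predictions

weights, debiased = {}, {}
for name, past in history.items():
    bias = np.mean(past - actuals)               # systematic over/under-forecasting
    noise_var = np.var(past - actuals - bias)    # remaining error after removing bias
    weights[name] = 1.0 / (noise_var + 1e-9)     # more reliable analysts get more weight
    debiased[name] = current[name] - bias        # correct each prediction for its bias

total = sum(weights.values())
combined = sum(weights[n] * debiased[n] for n in weights) / total
print(f"combined forecast: {combined:.1f}")
```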

Another example would be to allow an experienced human analyst to override certain key inputs to a predictive model, so that a model can take into account breaking news, something difficult to capture in a fully automated model. So, in the same way that amateur chess players depend on a computer’s insights, machines can do the same with human intelligence.

In this sense, automation and algorithms aren’t about technology replacing human beings or human intelligence; they’re about technology being used to assist and inform human intelligence, especially with decision-making. In fact, used correctly, predictive analytics not only doesn’t threaten human skills or destabilize markets, it may have the power to make markets more stable and predictable.

Consider commodities markets. They are notoriously volatile and difficult to time. With the marketplace populated by many more players than just the actual producers and consumers of the physical commodity, it is a prescription for volatility. When you combine that volatility with the complexity of deciding among various processing options, it is easy to understand how predictive models might play an important role. The most profitable decisions can easily be missed when “gut instinct” is the decision-making approach.

On top of that, highly trained commodities experts often end up spending too much of their time chasing down data. Predictive modeling holds the promise of giving these experts more time to do what they do best: analysis.

Consider that in the 1970s and 1980s, American Airlines used predictive technologies to increase its market share even as nine other major airlines and hundreds of smaller carriers went out of business. They did this by introducing sophisticated new data-driven pricing strategies. This “revenue management” science used models to maximize capacity on each flight while also maximizing revenue per seat. This was pricing optimization for a highly perishable commodity, since when the plane pulls away from the gate the value of an unsold seat falls to zero. If predictive analytics can do that for an airline, what can it do for commodities markets? Most of these industries aren’t using predictive analytics yet. Could predictive analytics help commodities and maybe even economies become more stable?

One thing’s for certain: Computers are great at finding connections between data points, while humans are great at assigning meaning to those connections.

Human experts in any given industry, therefore, are well suited for applying predictive analytics to real-world decisions. Remember the amateur chess players? They beat the supercomputer and grandmasters alike by taking information from a computer then filtering it through their own human decision-making.

Nate Silver has argued that data-driven decisions aren’t actually about making predictions at all – they’re about probabilities. As human beings, we crave certainty; we over-weight a prediction, and we tend to under-weight the error bars around it. But when we depend on computers to give us certainty, bad things can happen, particularly when we try to apply automation and prediction on a grand scale.

Our human decision-making is limited by complexity and human biases, while predictive models are only as good as the assumptions that go into them. The key is to combine the best of both, and keep them in their proper place.

 

Sources/References

1. http://www.forbes.com/sites/gregsatell/2014/10/12/the-future-of-marketing-combines-big-data-with-human-intuition/2/
2. http://www.planadviser.com/Combined-Human-and-Robo-Advisers-Show-Promise/

3. https://books.google.com/books?id=e9yPQOoK0gcC&pg=PT9&lpg=PT9&dq=Steven+Cramton+and+Zackary+Stephen&source=bl&ots=xdVvKc0H1u&sig=WBKWUVN5cK9ONjl-_bNcEKE8Z3I&hl=en&sa=X&ei=acSaVbe5M8uNyAT-ra7IBw&ved=0CE8Q6AEwCA#v=onepage&q=Steven%20Cramton%20and%20Zackary%20Stephen&f=false
4. http://www.retalon.com/predictive-analytics-for-retail
5. http://www.theatlantic.com/magazine/archive/2013/12/theyre-watching-you-at-work/354681/
6. https://practicalanalytics.wordpress.com/predictive-analytics-101/
7. https://hbr.org/2014/07/the-kind-of-work-humans-still-do-better-than-robots/

 
 

Bringing Home More Bacon: The Power Of Prescriptive Analytics In The Food Supply Chain

What do the Oakland Athletics, Safeway, Delta Airlines, and Citibank have in common? They each seized a leading position in their industry by adopting the next generation of analytics tools before competitors caught on. Prescriptive analytics is the inevitable next phase in the evolution of Business Intelligence (BI) software and early adopters in the food processing industry will see massive rewards.

Read the whole whitepaper by clicking below (downloads PDF).

DN_Whitepaper_Beef

Restructuring Sales Territories

During the past 20 years of selling software, I’ve seen territory division strategies come and go. When sales executives make a decision to change strategy, some sellers get creamed and others get famous. Over time, it doesn’t matter which strategy you choose, only that you leave it in place long enough for sellers to adapt, provide support that mirrors the new approach, and make final adjustments as feedback and results come in from the field.

Why change territories?

The Company’s aggressive revenue goals required it. 

Not long ago I had an amazing job at a fast-growing pre-IPO company and crushed it in my first three quarters. I was hired, while living in Utah, to come aboard as the first enterprise sales rep, with my target prospects being high-tech companies with annual revenues greater than $2B. We landed enough new business for the board to put a new, pre-IPO CEO in place. His first declaration was that we were going to an “NFL Cities Territory Model.” That is, we moved from a vertical-based, revenue-banded sales territory model to a geographical model with reps based in NFL cities. Hmm… I love Seattle, but moving was not an option!

Unfortunately, beyond the “NFL Cities Territory Model” approach there were few clear rules about transitioning accounts that showed a specific set of sales activities and were progressing from left to right in the forecast. And several opportunities required decisions about revenue splits and commissions, which were delayed or never addressed at all.

3 sales territory management pitfalls and how to avoid them

Sales territory management once consisted of taking a big map and drawing some boxes on it to delineate which rep got which piece of the map. Sure, there are still maps and boxes involved, but now there is far greater complexity to it. And there are some fairly consistent pain points when it comes to sales territory management. For example:

  • Resources are over-assigned to the same accounts
  • The territory target is too large and there isn’t enough capacity to reach it
  • Plans take too long to build and complete, and tend to be out of date
  • Assignment rules can conflict and result in coverage gaps

Poorly managed sales territories can result in negative client engagement, not to mention sales team frustration and high turnover among sales reps. Beyond that, there are some common pitfalls to avoid when establishing sales territory management.

Pitfall #1: Having limited data and a restrictive territory management tool

When you want to make a change, don’t set yourself up for difficulty down the road. If you don’t have a handle on your client and account data, it will be difficult not only to understand how well a territory (or rep) is performing, but also to make any potential improvements. Keep in mind, though, that you don’t want to make too many or too-frequent changes to your sales territories, as that may impact client engagement; you will, however, want at least the ability to modify territories as needed.

Pitfall #2: Not involving the sales team

Don’t forget to include the sales team. This may seem like an obvious point, but you may be surprised at how often territory planning is done purely top-down with little inclusion from reps and teams. You want to create a territory plan that will take rep experience into consideration and help to motivate the sales force. That means having the right reps for the right territories, and making sure they can see that the plan takes their local business into account – ultimately, a plan the sales team can truly embrace.

Pitfall #3: Getting stuck in spreadsheets

Companies and organizations that are winning in sales performance and territory management have adopted tools and applications that give them visibility into targets, schedules, and capacity; help them align the best reps to the right products and accounts; and reduce sales assignment time. In this era of real-time data, many sales leaders recommend moving to the cloud for real-time quota planning and effective collaboration. With a sales application, you can also integrate a CRM system for territory rule deployment. There are sales analytics applications out there for just about any size of company, so you really don’t need to start with spreadsheets.

Stay away from these sales territory “don’ts” from the start, and you’ll already be on your way to greater sales success sooner than you thought possible.

What is in a Forecast?

By Eugene Chen, Ph.D.

Forecasting as a human activity has a long history. In ancient Egypt, priests monitored the level and clarity of the Nile River using Nilometers. From those measurements, they predicted how good the crops would be for the coming year. It is fair to say that these priests were among the earliest forecasters. Fast-forward to 2015: forecasts have become ubiquitous in our daily lives. An average Joe uses the weather forecast to decide whether he wants to take an umbrella to work. An executive at a mining company may use an iron-ore price forecast to make future business plans. The staff of the Fed makes and uses various types of economic forecasts for policy-making.

Advances in technology have greatly improved the accuracy and stability of forecasts. Equipped with modern computers and large amounts of high-quality digital data, forecasters of our time are able to construct and validate accurate forecast models with unprecedented efficiency. In this article, I would like to briefly introduce our forecast models at DecisionNext. It is not meant to be a detailed account of our repertoire; the goal is just to provide you, the reader, with a general idea of what we do.

A good deal of the forecasting we do at DecisionNext falls into the category of time-series forecasting: a quantity in the future is predicted based on a series of previously observed values. Concrete examples include forecasts of copper prices, gasoline production, and the average beef marbling in a slaughterhouse. When forecasting a time series, we usually ask ourselves three questions:

1. Does the series follow a deterministic trend or a cyclic pattern? A nice and quick forecast can be made if one is able to show that the time series actually repeats itself. This may sound too good to be true; however, a lot of time series do repeat themselves to a good degree. As an example, agricultural data such as regional grain production tend to be highly seasonal.

2. Does the series, together with its associated noise, affect its own future? Sometimes, the future value of a time series is influenced by its immediate past. This is often true for the prices of entities whose intrinsic values are hard to evaluate. Examples include short-term prices of stocks and luxury goods (such as fine wine and premium tea leaves). Since the intrinsic values are hard to evaluate, prices tend to drift further upward if they have been rising in the immediate past (the “momentum” of your daily financial news feed) and to sink in the opposite case.

3. Do external factors affect the time series? It is common for a time series to be sensitive to external factors. A concrete example is the price of perishable goods. Vendors of perishable goods are under pressure to sell whatever they have in time to avoid waste and storage costs. Insight into future supply thus has strong predictive power over the movement of the price (as vendors use the price to adjust the rate of sale).

More often than not, the answer is “yes” to all three questions. A real-life time series often (simultaneously) follows a deterministic trend or periodic pattern, influences its own future, and is affected by external factors—though only to a certain degree in each case. A linear forecasting model that encompasses all of these aspects is called an ARMAX model.
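
As a rough illustration only (not DecisionNext’s actual implementation), the sketch below fits an ARMAX-style model on toy data using the statsmodels library: autoregressive and moving-average terms capture the series’ own past, and an exogenous regressor carries the external factor.

```python
# ARMAX sketch on toy data: AR and MA terms for the series' own past, plus an
# exogenous regressor x for the external factor. Numbers are invented.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 200
x = np.sin(np.arange(n) * 2 * np.pi / 12) + rng.normal(0, 0.1, n)   # seasonal external driver
y = pd.Series(50 + 5 * x + rng.normal(0, 1, n))                     # toy price series driven by x
y = y.rolling(3, min_periods=1).mean()                              # smooth so the past matters

# ARMAX(2,1): two autoregressive lags, one moving-average lag, plus the exogenous regressor
fit = SARIMAX(y, exog=x, order=(2, 0, 1), trend="c").fit(disp=False)

# Forecast 12 steps ahead, supplying assumed future values of the external factor
future_x = np.sin(np.arange(n, n + 12) * 2 * np.pi / 12).reshape(-1, 1)
forecast = fit.get_forecast(steps=12, exog=future_x)
print(forecast.predicted_mean)
print(forecast.conf_int())        # error bands come along with the point forecast
```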

Forecasts are never perfect, and error is an unavoidable component of them. No scientific forecast is complete without a proper estimation of its errors. Validation is used to fulfill this goal. The idea of model validation is to purposely hold out a subset of the available data, forecast that piece of data (as if it were unknown) using the remaining data, and compare the holdout data with the forecasted values (which are based only on the remaining data). The difference between the two can be used as an estimate of the model’s accuracy.
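
A minimal holdout-validation sketch, continuing the toy model above (the holdout length and model order are arbitrary choices for illustration):

```python
# Holdout validation sketch: fit on everything except the last few points,
# forecast them as if unknown, and measure the errors.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def holdout_errors(series, holdout=12, order=(2, 0, 1)):
    """Fit on all but the last `holdout` points, forecast them, and return the errors."""
    train, test = series[:-holdout], series[-holdout:]
    fit = SARIMAX(train, order=order, trend="c").fit(disp=False)
    predicted = fit.forecast(steps=holdout)
    return np.asarray(test) - np.asarray(predicted)

# Example use with the toy series from the previous sketch:
# errors = holdout_errors(y)
# rmse = np.sqrt(np.mean(errors ** 2))   # one summary of forecast accuracy
```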

A detailed analysis of the errors can further enrich the forecast. By characterizing the errors as a statistical distribution, the forecast can be cast in the form of a probability distribution function rather than just a number. Thus, the user of the forecast is not only provided with a “best guess” for, say, a future commodity price; he or she is also told the probability that the price will be 10%, 20%, or 30% below or above that value. This opens up the possibility of scenario-based planning, where the user can develop business plans that are both profitable and risk-averse. The plan is risk-averse because all possible price movements are being considered. The plan can still be profitable because all risks are accurately quantified. At the user’s discretion, he or she does not need to sacrifice profit guarding against events that are unlikely to happen.
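
Continuing the sketch, the holdout errors can be turned into forecast quantiles; the quantile levels and the use of the raw empirical error distribution are simplifying assumptions.

```python
# Turning holdout errors into a probabilistic forecast (assumes `errors` from the
# validation sketch above and a point forecast `best_guess`).
import numpy as np

def forecast_distribution(best_guess, errors, quantiles=(0.1, 0.5, 0.9)):
    """Shift a point forecast by the empirical error distribution to get scenario quantiles."""
    scenarios = best_guess + np.asarray(errors)        # one scenario per observed error
    return {q: float(np.quantile(scenarios, q)) for q in quantiles}

# e.g. forecast_distribution(best_guess=2.85, errors=errors)
# -> {0.1: pessimistic price, 0.5: central price, 0.9: optimistic price}
```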

What is in a forecast? At DecisionNext, a forecast consists of prudent variable selection, judicious model building, and careful validation. We have built a scalable platform that can help you make the most logical decision based on the latest data. Are you ready to gain the analytic edge?

Bringing Analytics to Industry

By: John Crouch

In my experience, there are three crucial questions that an executive needs to be confident in answering before committing their organization to an analytics transformation – especially as it relates to working with an outside software partner.

1. “Does their math fit our business?”

2. “Do their people understand our business?”

3. “Can our people consistently understand and interpret what the software is recommending?”

If any of these questions can’t be answered with a confident “yes”, then the likelihood of a successful project is low. Three strong “yeses” confirm not only that the models are appropriate to the task and the people positioned for success, but that the performance will be applied, not just theoretical.

As an analytics provider, I’ve learned that our success comes from foreseeing these concerns and designing a process for rigorously proving each of them to the satisfaction of our clients. These three tests drive client success — and there aren’t any shortcuts.

This journey begins by earning the confidence of our client’s leadership team, who in turn need to commit the time, resources, and trust – and when they do, they reap big rewards.

 

Monte Carlo: Taming The Unknown

By: Omid Saremi, Ph.D

Not all that World War II left behind was destruction and despair; it also gave birth to the first electronic programmable computer. Nicknamed the “Giant Brain”, the Electronic Numerical Integrator and Computer (ENIAC) was born at the University of Pennsylvania under the supervision of Dr. John Mauchly, an American physicist, and J. Presper Eckert, an electrical engineer. The year was 1946.

Before the advent of the first electronic computer, most statistical techniques reliant on generating a large set of random numbers were considered intractable. In light of those early developments in computing, Stan Ulam, a physicist intrigued by “random phenomena” and an avid poker player, realized that the question of the tractability of such techniques could be revisited. Harnessing the new computing capacity, he argued, one should be able to estimate the distribution functions of quantities that depend on (often many) random input parameters. The famed physicist John von Neumann quickly realized the importance of Ulam’s line of thought and helped further develop and popularize the core idea. Nicholas Metropolis named the project “Monte Carlo”; he was inspired by Ulam’s uncle, who gambled in Monte Carlo, Monaco (random numbers, gambling? See the connection?)

The following scenario, ubiquitous in data science (from risk assessment and portfolio management to scenario-based forecasting), helps us understand Monte Carlo. You have a deterministic rule that calculates an output from a set of inputs. If the input variables are themselves random, each drawn from a known distribution, the output will be a random variable too. What is the distribution of the output? This problem is usually analytically intractable, and no useful closed-form mathematical formula describing the output exists except for simple output functions.

You do not need a closed form for the output distribution function, however, if you have a powerful computer and an algorithm for generating good-quality pseudo-random numbers! A computer can “generate” the output distribution function “experimentally” by drawing pseudo-random numbers and feeding them to the rule as input. Von Neumann himself had (re-)invented a simple method (called the “middle-square” method) for generating (finite) sequences of uniformly distributed pseudo-random numbers. The Monte Carlo algorithm was one of the first to be programmed on ENIAC.
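
Here is a minimal sketch of the idea: a deterministic gross-margin rule fed with random inputs, with the output distribution read off from the simulated draws. The rule and the input distributions are invented for illustration.

```python
# Minimal Monte Carlo sketch: push random inputs through a deterministic rule and
# read off the output distribution. The rule and distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 100_000

def margin(price, volume, unit_cost):
    """Deterministic rule: gross margin for a batch of product."""
    return volume * (price - unit_cost)

# Uncertain inputs, each drawn from an assumed distribution
price = rng.normal(loc=3.00, scale=0.25, size=n_draws)       # $/lb
volume = rng.uniform(low=900, high=1_100, size=n_draws)      # lbs
unit_cost = rng.normal(loc=2.40, scale=0.10, size=n_draws)   # $/lb

outputs = margin(price, volume, unit_cost)

# The empirical distribution of the output, not just a point estimate
print(f"mean margin:   {outputs.mean():,.0f}")
print(f"5th pct:       {np.percentile(outputs, 5):,.0f}")
print(f"95th pct:      {np.percentile(outputs, 95):,.0f}")
print(f"P(margin < 0): {(outputs < 0).mean():.3f}")
```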

One interesting instance of the above class of problems is forecasting based on historical time-series data. Consider a time-dependent process in a given initial state. Suppose that during each time step forward a decision about the future state of the system is made (according to a predetermined probability density). What would be the forecast for the future state? The Monte Carlo method will not only forecast the final state (by averaging over the outcomes of many possible histories of the system), giving a “point estimate”, but also determine the entire distribution of the final state. Point estimates are popular but, on their own, not very meaningful. For one thing, they vary if another instance of the same data (obtained by re-measuring the data experimentally or by resampling the existing data set) is used. One needs a methodology to quantify the variability of predictions, and the Monte Carlo method offers a systematic and general framework for achieving this. The Monte Carlo method is also used to sample “difficult” multivariate distribution functions appearing in Bayesian inference and machine learning applications. Another variant of the method shows up as a heuristic search algorithm suitable when the search space has a high number of dimensions.
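
The sketch below simulates many possible histories of a simple random-walk price process and reads off both the point estimate and the distribution of the final state; the drift and volatility numbers are assumptions, not a calibrated model.

```python
# Monte Carlo path simulation for a simple random-walk price model.
# Drift and volatility are assumed for illustration, not calibrated to any market.
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps = 10_000, 52          # one year of weekly steps
start_price = 100.0
drift, vol = 0.001, 0.02               # per-step drift and volatility (assumed)

# Each path evolves by random steps drawn from the same predetermined distribution
steps = rng.normal(loc=drift, scale=vol, size=(n_paths, n_steps))
paths = start_price * np.exp(np.cumsum(steps, axis=1))
final = paths[:, -1]

print(f"point estimate (mean final price): {final.mean():.2f}")
print(f"10th-90th percentile band: {np.percentile(final, 10):.2f} - {np.percentile(final, 90):.2f}")
```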

The mission of the DecisionNext platform is to help make better data-driven, high-value decisions. In particular, scenario-based forecasting and Bayesian inference sit at the heart of its data products. The family of Monte Carlo methods is an invaluable data science tool on this journey.

Cloud Automation and Immutable Infrastructure

By Parker Pinette

Modern DevOps tools and practices allow our team to focus on creating our software products rather than managing the servers on which they run.
Back in my system administrator days, my team of three Unix professionals managed several dozen physical and virtual servers supporting student, staff, and faculty efforts at the University of Utah’s College of Social and Behavioral Sciences. Using Secure Shell to access each host, we would pore over thousands of lines of software configuration directives. Changes were made by hand on each machine.
One of my early tasks in that position was to implement a system-wide monitoring solution to provide an inventory of all servers, including tracking of essential services. Even with this in place, tracking configuration changes was tedious. Changes to a configuration file could introduce issues, and tracking down the offending change, determining why it was made in the first place, and then applying a fix could be incredibly time-consuming. Live changes to production machines had the potential to interrupt user access.
Nowadays, there are a wealth of tools available which make headaches like this a thing of the past.
Immutable infrastructure is a concept rapidly growing in popularity. Container solutions like Docker and LXC provide isolated execution environments. These containers can be configured via code – code which can be tracked using version control, and reviewed and tested before deployment.
Running a complicated software ecosystem requires a number of supporting services, each of which requires its own configuration and deployment process. Ansible provides a simple, clean solution. Now deploying, say, a VPN service can be as simple as running an Ansible playbook that launches a virtual server, installs the necessary software, starts the VPN service, and returns user-specific client configuration and encryption certificates.
Packer allows us to roll out new AMIs (Amazon Machine Images), Docker containers, and VirtualBox virtual machine images with a single command. In concert with an Ansible playbook, we can use a small amount of configuration and code to build a number of different machine image types that each run the same set of services but on different platforms.
Our current systems approach does not employ truly immutable infrastructure. Sometimes we want to make changes to a running server, or even a number of them. Ansible allows us to apply those changes all at once using the same playbooks and roles we used to create the machine images on which the running instances were based. A new image is then created, so on the next launch everything is up to date.
Using these and other tools, we’re able to save a great deal of time on operations. I personally have more time to spend developing our product, as well as creating tools and workflows that make life easier for our engineering and science teams.