Neither.
It’s actually a decision based on my ability to predict whether I’m going to get a drinkable beverage from my local coffee shop. Here’s the problem: the manager, George, just can’t make a good coffee (I promise I’m not being a snob – I’ve thought of suggesting that he contact NASA to see if they’ve got any LI-900 silica tiles left over from the Space Program to shield customers from the extreme heat of the contents). So for 2 days out of 7 I can forecast with perfect accuracy and resolution: I can infallibly assign a 100% probability to getting a bad coffee rather than a good one (please note that my discrimination here is purely statistical).
Now the deputy manager, Hannah, makes a great coffee, but she’s only there Wednesday to Sunday, and if George is on deck she’s not at the machine but on the till. This introduces a degree of variability which requires me to adopt a calibrated approach: to give myself the best chance of a pleasurable caffeine hit, I need to understand which factors influence how often George is at the machine, establish a mental calibration, and then make sure I hit the shop when he’s not there.
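For the statistically inclined, my “mental calibration” is nothing grander than a conditional relative frequency. Here’s a minimal sketch in Python; the visit log, weather labels and outcomes are all invented purely for illustration:

```python
# Estimate P(George at the machine | weather) from an imaginary log of visits.
# Every entry below is made up to illustrate the idea, not real data.
visits = [
    ("Wed", "sunny", False), ("Thu", "sunny", False), ("Fri", "sunny", False),
    ("Wed", "rain", True), ("Thu", "rain", True),
    ("Sat", "cloudy", True), ("Sun", "cloudy", False), ("Fri", "cloudy", True),
]  # (day, weather, george_at_machine)

def p_george_at_machine(weather: str) -> float:
    """Relative frequency of George manning the machine in this weather."""
    outcomes = [at_machine for _, w, at_machine in visits if w == weather]
    return sum(outcomes) / len(outcomes) if outcomes else 0.5  # no data: shrug

print(p_george_at_machine("sunny"))   # 0.0  -- he's out running, coffee is safe
print(p_george_at_machine("cloudy"))  # 0.67 -- the troublesome case
```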
This is where I run into trouble – as I start feeding factors into my calibration, my innate, hard-wired risk-aversion takes over. For example, George tells me he likes to go running in the mornings if the weather is good. But what if it’s a warm but cloudy day? Is he more inclined to go than if it’s fine but cooler? The result is that I’m prone to display less confidence in my determination than a perfectly calibrated benchmark predictor would. On a day when there are a few showers about, I’ll probably settle for a hibiscus tea rather than bet my four dollars on a cup of Eritrea’s finest.
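That under-confidence has a measurable cost. A quick, entirely hypothetical illustration: if George really is out running with probability 0.7 and I timidly report something nearer 50:50, my expected Brier score (the standard penalty for probability forecasts) gets strictly worse:

```python
# Expected Brier score E[(reported - outcome)^2] for a Bernoulli(true_p) outcome.
# Both probabilities below are invented to illustrate the cost of hedging.
def expected_brier(reported: float, true_p: float) -> float:
    return true_p * (reported - 1) ** 2 + (1 - true_p) * reported ** 2

true_p = 0.7                         # chance George is out running (hypothetical)
print(expected_brier(0.7, true_p))   # 0.21   -- the calibrated report
print(expected_brier(0.55, true_p))  # 0.2325 -- my risk-averse shrinkage, worse
```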
What I need in this situation is a means to boost my discrimination at the expense of my calibration – the two are, after all, a trade-off. Studies* have shown that a machine learning tool would not exhibit the same bias, although on the flip side it would likely be susceptible to more severe errors (getting a coffee from George when he’s hungover could be life-threatening). However, combining man and machine not only leads to more accurate forecasts but also optimises the returns from being right, removing human hesitancy in situations that lack certainty.
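The combination itself can be as humble as a weighted pool of the two forecasts. A toy sketch, with the weight and both probabilities assumed for illustration (real human-machine studies use far more careful pooling):

```python
# Blend a cautious human forecast with a bolder model forecast.
# The 0.6 weight and both probabilities are assumptions, not study results.
def blend(human_p: float, model_p: float, model_weight: float = 0.6) -> float:
    return model_weight * model_p + (1 - model_weight) * human_p

human = 0.55   # well-calibrated but timid (that's me on a cloudy day)
model = 0.90   # discriminating, but capable of the odd catastrophic miss
print(blend(human, model))  # 0.76 -- confident enough to order the flat white
```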
So until I can persuade my colleagues to develop software to help me with this problem, I’m going to abstain. Of course, I could just tell George that he should heat the milk until it froths rather than boils…but that seems churlish.