Why algorithms beat experts (even in shipping)

Though many of us like to think we know a bit about wine, there are only a few hundred true experts in the world. For those who want to earn the right to use the title Master of Wine, it takes years of study followed by a notoriously difficult exam that includes a blind tasting test. It stands to reason, therefore, that when it comes to predicting a future star vintage, we should listen to the noses of those experts who are masters of their craft.

Shipping is a bit like fine wine. Investing in it is a great way to lose money, talking about it is a great way to bore people at a dinner party, and expert predictions often turn out to be wrong.

There is a whole micro-industry built around investing in wine on the strength of expert opinion. These experts taste new wines in their early years and pick out those they believe will mature well and go on to become highly valuable. Based on these picks, each year thousands of investors plough millions of dollars into particular regions and vineyards, safe in the knowledge that they have given themselves the best possible chance of seeing a healthy return and owning a well-respected cellar in 10 years’ time.

Picking good wines is tricky; fine wines mature very differently across different vintages. Wine from the same vineyard bottled just 12 months apart can differ in price by a factor of 10. When making their picks, experts need to weigh myriad complex factors, predicting not just the future quality of an individual wine but also how it will compare with thousands of competitors from the same year and how future consumer preferences will shape demand.

Despite the overwhelming complexity of the task, in the 1980s American economist and amateur wine fanatic Orley Ashenfelter set about developing an algorithm that could pick fine Bordeaux wines as well as the experts. The result of his labour was a simple calculation that took into account just three weather factors: the average summer temperature during the growing season, the amount of rainfall during the previous winter, and the amount of rainfall during the harvest season, with an adjustment for the age of the vintage.

Ashenfelter’s fine wine algorithm: “∆ price = −12.15 + (β₁ × winter rainfall) + (β₂ × average summer temperature) + (β₃ × harvest rainfall) + (β₄ × age of vintage)”
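The equation runs in a few lines of code. The coefficient values below are approximations of Ashenfelter’s published estimates for Bordeaux and are used purely for illustration, as are the function name and the example inputs:

```python
# Illustrative sketch of Ashenfelter's Bordeaux equation. The coefficients
# are approximate values from his published work, not the canonical model.
def predict_log_price(winter_rain_mm, avg_temp_c, harvest_rain_mm, age_years):
    """Predicted log of a vintage's price relative to a reference vintage."""
    return (-12.145
            + 0.00117 * winter_rain_mm    # wetter winters help (beta_1 > 0)
            + 0.6164 * avg_temp_c         # warm growing seasons help (beta_2 > 0)
            - 0.00386 * harvest_rain_mm   # rain at harvest hurts (beta_3 < 0)
            + 0.0239 * age_years)         # older vintages fetch more (beta_4 > 0)

# A hot, dry vintage should be predicted to outperform a cold, wet one.
good = predict_log_price(600, 17.5, 100, 10)
bad = predict_log_price(600, 15.0, 300, 10)
```

That is the entire model: no tasting notes, no reputations, just four numbers fed into a spreadsheet-sized formula.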

When he published his work in 1990, Ashenfelter was ridiculed by the wine tasting community; the Princeton professor became a pariah overnight. Commenting on his work in the New York Times, Robert Parker, one of the world’s foremost wine experts at the time, described the formula as “ludicrous and absurd”. It is well understood that hot, dry summers lead to grapes with a higher sugar content, which in turn leads to better wine. But the experts of the day could not accept that those three simple metrics could be better than their vastly experienced noses. Perhaps more importantly, the idea that a simple calculation run from a spreadsheet could accurately predict the price of a wine years before anyone had tasted it was a great threat to their livelihoods.

Fast forward 30 years and the explosion in available computing power means that algorithms play an increasingly large role in our daily lives and our decision making. Earlier this month we learned that Herman Bomholt and Torsten Thune, two Norwegian master’s students, have created a machine learning algorithm that successfully outperformed the Capesize markets by 10% in a simulation. In the same week, we learned that nine US-listed bulk operators have made an accumulated loss of $11.3bn over the course of a decade. Over the years, Ashenfelter’s wine algorithm has borne fruit and beaten the experts: the correlation between its predictions and the actual future price of a fine Bordeaux has consistently been above 90%. So why is it that simple algorithms can make better predictions and decisions than human experts, and when should we rely on a prediction from a computer instead of our own judgement?

Ask any expert in their field, whether a sommelier, a leading cancer specialist or a master mariner, if they believe a simple mathematical formula can make consistently better decisions than them in their specialism, and you will almost invariably hear a resounding no. In his seminal book on decision making, Thinking, Fast and Slow, Nobel prize-winning psychologist Daniel Kahneman cites the findings of multiple studies conducted throughout the twentieth century showing that simple algorithms can beat, or at least draw with, experts across a diverse range of fields. Algorithms beat experts in studies including predicting the outcome of cancer treatment, the success of a business venture, the likelihood of violent behaviour, and the winners of sports games.

According to Kahneman, these algorithms don’t necessarily require the collection of vast amounts of statistical data either. They can be rooted in data that is already commonly available (like weather information), or simply in common sense. Though there are many complex emotional and human factors involved in predicting the stability of a marriage, the simple formula of “frequency of lovemaking minus frequency of arguments” is a powerful predictor of outcomes. If the number is negative, it’s not looking good.

We see use cases for algorithmic decision making across the shipping industry, not just in the charter markets but in operational environments too. Dynamic positioning is a great example of algorithms enabling a ship to automatically hold station. Further, in June last year, researchers from the University of Genoa published a paper looking into the use of simple algorithms for vessel collision avoidance. Similar to Ashenfelter’s wine calculation, the system relies on only a small number of inputs. The research uses simple vector algebra to determine whether a risk of collision exists and, if so, which action to take under the COLREGs. The prescriptive set of rules that make up the collision regulations lend themselves to mathematical processing. If the rules are applied consistently by everyone, the risk of collisions is vastly reduced.
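To give a flavour of the vector approach (this is a generic sketch, not the Genoa paper’s exact formulation), collision risk can be screened by computing the closest point of approach (CPA) between own ship and a target from their relative position and velocity. The function name and the one-mile threshold below are illustrative assumptions:

```python
import math

# Generic vector-based collision risk screening: find the closest point of
# approach (CPA) between own ship and a target. Positions in nautical miles,
# velocities in knots, so time comes out in hours.
def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (distance at CPA, time to CPA)."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0:                       # same course and speed: range never changes
        return math.hypot(rx, ry), 0.0
    t = -(rx * vx + ry * vy) / v2     # time at which separation is minimised
    t = max(t, 0.0)                   # a CPA in the past means the range is opening
    dx, dy = rx + vx * t, ry + vy * t
    return math.hypot(dx, dy), t

# Own ship steaming east at 10 kn; target 5 nm due north, heading south at 10 kn.
dist, t = cpa((0, 0), (10, 0), (0, 5), (0, -10))
risk = dist < 1.0   # e.g. flag a risk of collision if CPA is inside one mile
```

From a result like this, a rules engine could then look up which vessel is the give-way vessel under the COLREGs and propose a manoeuvre, which is essentially what the prescriptive structure of the regulations makes possible.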

But rules are not applied consistently by humans, and that is one of the reasons algorithms can be superior. We are influenced in our judgements by all sorts of factors, many of which we aren’t aware of. These influences can be based on our own experiences of similar situations, but also on circumstances far less relevant to the outcome, such as how fatigued or hungry we are. We all know that food shopping while hungry is not a good idea, and in 2019 researchers at the University of Dundee found that decision making is significantly worse when we are hungry, whether or not the decision has anything to do with food.

To be clear, computers get a lot of things wrong. But there are some key differences between algorithmic mistakes and human mistakes. With the exception of some disciplines of artificial intelligence, it is always possible for a human to reverse engineer the logic used by a computer and understand what went wrong.

It is almost impossible to reverse engineer the logic used by a human. Hindsight is 20/20; we have all made mistakes in our lives and known instantly that we made the wrong call. But human memory is a fragile thing. We often misremember the facts and events that led to a decision and, without realising it, rewrite logic into our memory of a choice that was fundamentally flawed by irrational thinking. No matter how good an algorithm is, it will sometimes get it wrong. But it will get it wrong consistently, and it won’t be swayed by a run of bad luck in the way a human might be.

If algorithms can be consistently wrong, it makes sense that their role should be to support humans to make better decisions. Surely the sweet spot is a computer that provides information and support, but an expert human making the final decision? Unfortunately, this is not often the case.

In environments that are hard to predict, we turn to experts because they are knowledgeable. Part of what makes an expert opinion valuable is that a human can take into account complex combinations of factors that an algorithm cannot. But the knowledge that humans possess often leads us to add complexity where it is unnecessary. Several studies conducted over the last century have shown human decision makers to be inferior to simple algorithms even when they have access to the algorithm’s results, because we believe we possess information and intelligence the algorithm does not and should therefore overrule its decision. Generally speaking, it is often better that the final decision is made by an algorithm, not a human.

For many mariners, the idea of giving up control of collision avoidance decision making to a computer is heresy. But similar systems have been in use in aviation since the 1980s. Airborne Collision Avoidance Systems use vector-based algebra in a similar way to the method proposed by the Italian researchers. These systems don’t just measure the risk of collision, they give orders for evasive manoeuvres too. In flight, when a pilot receives an order from the collision avoidance system, they are legally required to comply with it, even if it contradicts an order from air traffic control. The only caveat that allows a pilot not to comply is if the manoeuvre would put the aircraft in immediate danger. Though it seems counterintuitive, these systems do work. It is estimated that in Europe, aircraft carrying these automatic systems are five times less likely to be involved in a collision.

So when should we let the machines take over, and when should humans stay in the driving seat? Our brains are notoriously bad at understanding the relationship between risk, reward, and probability, and this is a real sweet spot for algorithmic decision making. The Covid-19 crisis is a once-in-a-century event. For many people that means that once we have found a vaccine or the virus dies out, we will be clear of major pandemics until some time around 2120. This viewpoint is bolstered by the fact that the last worldwide pandemic happened in 1918. But the chances of another global pandemic happening are just as high next year as they were this year and last year. The same false logic is often applied to gambling: on a roulette table, when the ball has landed on five odd numbers in a row, many people believe that an even number is more likely. But the probability of the ball landing on an even number is 48.6% (on a European wheel), every time, without fail. It is extremely unlikely that you will see 10 even numbers in a row in roulette (about 0.07% probability), just as it is extremely unlikely that we will have a new global pandemic every year for the next ten years. But the odds of each event happening in isolation don’t change. This is where computers and algorithms have a natural advantage over humans and where we should leverage them for our own benefit.
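The roulette arithmetic takes only three lines to check (a European wheel has 18 even pockets out of 37):

```python
# The gambler's fallacy in numbers (European roulette: 18 even pockets of 37).
p_even = 18 / 37                  # ~0.486 on every single spin, regardless of history

# Ten evens in a row is rare as a sequence viewed from the start...
p_ten_in_a_row = p_even ** 10     # ~0.0007, the ~0.07% quoted above

# ...but after five evens have already landed, the next spin is unchanged.
p_next_after_five = p_even        # still ~0.486: independent events have no memory
```

A computer applies this logic without flinching; a human at the table feels the streak.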

Whether it is picking stocks, predicting the risk of an accident at sea, setting a rate for a charter, or avoiding a collision, a computer’s ability to consistently apply logic to a situation, particularly when the stakes are high, gives it the edge over us.

I deliberately haven’t touched on the power of artificial intelligence in this article. With the exception of the Norwegian students’ work, every example discussed here exploits only basic maths and a small amount of computing power. This is a massive simplification, but artificial intelligence makes it possible for computers to automatically check the results of an algorithm against a desired outcome and tweak the formula to find improvements. We are at the very early stages of an incredibly powerful force being unleashed on every industry, shipping included. But better decision making doesn’t need AI, and much can be achieved without it.

So will we see a fully algorithmic charter market or masters relinquishing control of their ships any time soon? Probably not. Despite all the promise of decision making rooted squarely in maths and logic, it has one major flaw: humans love stories. If you have read this far, it is probably because you were initially drawn in by a story about fine wine and an eccentric professor. An emotive story that makes no logical sense is far more likely to get a positive response than a logical argument without emotion. Research has shown that you are much more likely to be successful when asking for something if you say “because” and give a reason; it doesn’t even matter if the reason makes sense, you just have to give one. This is why the wine industry still relies on expert opinions, why a shipbroker can justify fixing at a certain price, and why we are all able to make an objectively poor decision seem like a good idea at the time.

Despite overwhelming evidence gathered over 100 years that algorithms have a great part to play in improving the decisions we all make, we tend to base our decisions on the strength of a narrative and our emotional response. Unfortunately for most of us, saying “the maths says so” doesn’t create a strong enough response and we ignore it in favour of more compelling, but often flawed reasoning. Learning to embrace the power of algorithmic decision making is key to unlocking massive efficiencies across the industry, in both commercial and operational domains. But my expert opinion is that it will be some time yet before we are truly willing to let go, ignore the good storytellers, and trust the machines.

Nick Chubb

Nick Chubb is the founder of maritime innovation consultancy Thetius.


  1. I don’t think Kahneman’s (and others’) conclusion would simply be “trust the machine”… It’s clearly much more complicated than that.

    If you look at the people in other areas (for instance investing, such as Ray Dalio, Renaissance Technologies, Palantir, etc), who have harnessed algorithms with great effect, it’s always as a feedback loop between systematising information (ie algorithms) and using subjective, human judgement. Subjective information, ie the inside view, is highly important, particularly in hard-to-predict circumstances (kind of ironically). The risk with algorithms is that they eliminate the small errors which are common with humans, but the errors are systematised, hence larger when they happen. Note that risk is also usually highly subjective and difficult to quantify, and needs a number of inputs both qualitative and quantitative.
    Even in big disasters purportedly caused by a “small human error”, you always find out that the real fault is systematic. Algorithms will hopefully have a great impact in averting disasters, but not with the bean counter mentality of “taking the human out of the decision loop” (which is almost always simply a drive to fire people and save cost rather than save lives).
    And you can argue that in the hard to predict and highly nonlinear domain, both the machine and experts can be useless in themselves. It needs to be a symbiosis.

    TLDR: The “man vs machine” narrative of digitalization is a completely false dichotomy.

  2. Really good algorithms should introduce a factor called “human greed”. It is essential to do good business.
