Think Narrowly

Vaguely Dissatisfied

It’s a beautiful Sunday evening and you’re watching a cricket match between India and South Africa with three friends: A, B, and C. You’re new to the sport, so while you’re enjoying yourself, you can’t seem to predict what will happen. You’re just taking it as it comes.

As friends are wont to do, A, B, and C are eager to explain to you what is going on. However, each of them has their own theory about the Indian cricket team. During the innings break, you ask them for their predictions for the second innings.

A says that Virat Kohli will score between 50 and 150 runs, since he’s apparently great at chasing, and also that MS Dhoni will score only 10-30 runs, since he’s not in good touch these days. B says that Kohli will score between 60 and 80 runs, again because he’s a great chaser, and that Dhoni will score 75 runs, because MSD rocks, that’s why! C scoffs at both of them and says that Kohli will score between 100 and 150 runs and that Dhoni will score only 0-20 runs because there won’t be too many runs left.

You really don’t know who to go with. First, you note that all of their hypotheses make falsifiable predictions. They forbid certain outcomes. Like, if Kohli scores only 20 runs, all of them will be falsified. Cool. So, that’s settled. Next, since you don’t really understand their complicated hypotheses (consisting of factors like form of batsmen, condition of the pitch, presence of Anushka, etc.) you decide to go with equal confidence in each of their theories - 1:1:1 odds.

At the end of the 40th over, Kohli gets out at 108 runs. You now update your confidence in your friends’ hypotheses. A and C both predicted a score in this region, so they’re pretty safe. B’s hypothesis, however, is blown out of the water. So, now you redistribute your confidence. As per Bayes’ Theorem, since C’s range was half as wide as A’s (and assuming each friend spreads his confidence evenly over his own range), C gets twice the confidence. So, your confidence now stands at 33% in A, 0% in B, and 67% in C.
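If you like seeing the arithmetic, here’s a minimal sketch of that update in Python. The uniform spread over each friend’s range is a modelling assumption of mine, not something the friends actually specified.

```python
# Posterior odds after Kohli gets out at 108, assuming (my modelling choice,
# not something the friends said) that each friend spreads their probability
# uniformly over the range of Kohli scores they predicted.

kohli_ranges = {          # friend: (low, high) predicted score for Kohli
    "A": (50, 150),
    "B": (60, 80),
    "C": (100, 150),
}
prior = {friend: 1 / 3 for friend in kohli_ranges}   # 1:1:1 odds to start
observed = 108

def likelihood(low, high, score):
    """Uniform likelihood: 1/width inside the predicted range, 0 outside."""
    return 1 / (high - low) if low <= score <= high else 0.0

unnormalised = {f: prior[f] * likelihood(lo, hi, observed)
                for f, (lo, hi) in kohli_ranges.items()}
total = sum(unnormalised.values())
posterior = {f: round(p / total, 2) for f, p in unnormalised.items()}
print(posterior)   # {'A': 0.33, 'B': 0.0, 'C': 0.67} -- 2:1 odds in favour of C
```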

Ah! Your confidence has narrowed, with odds of 2:1 in favour of C. So, if you all watch three matches together, you expect C to be right about twice and A to be right about once. That’s better than before, but you still can’t make very precise predictions. When someone offers you a bet on how many runs Dhoni will score, you have to hedge, saying there’s a 67% chance of him scoring 0-20 runs, as per C, and a 33% chance of him scoring 10-30 runs, as per A. Either way, you will lose money some of the time, because you are uncertain.
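Here, as a rough sketch, is what that hedge looks like as a number, say for the question “will Dhoni score 20 or fewer?” (again assuming a uniform spread over each range, plus whole-number scores, neither of which the friends spelled out):

```python
# Your hedged prediction for Dhoni's score is a mixture of A's and C's
# ranges, weighted by your 1/3 and 2/3 confidence in them.
# Assumptions (not in the story): uniform spread over each range,
# and whole-number scores.

confidence = {"A": 1 / 3, "C": 2 / 3}
dhoni_ranges = {"A": (10, 30), "C": (0, 20)}

def prob_at_most(low, high, k):
    """P(score <= k) when the score is uniform over whole-number runs low..high."""
    if k < low:
        return 0.0
    return min(k - low + 1, high - low + 1) / (high - low + 1)

# The probability you would quote for "Dhoni scores 20 or fewer":
p = sum(confidence[f] * prob_at_most(*dhoni_ranges[f], 20) for f in confidence)
print(round(p, 2))   # ~0.84: likely, but you'd still lose such a bet sometimes
```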

Then, Dhoni scores 15 runs. Dammit. You still can’t distinguish between C’s and A’s hypotheses. In fact, since both their ranges contain 15 and are equally wide, this observation won’t even redistribute your confidence. You still believe in C and A with 2:1 odds. You’ve watched a match for hours, collected two separate pieces of evidence, and you are still quite uncertain about what will happen. Not cool. If someone offered you a bet about who will win the match, you would still have to hedge by asking both C and A and weighting their predictions 67% and 33%, like before.
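The same little update rule shows why nothing moves: both surviving ranges contain 15 and are equally wide, so the likelihoods cancel out (still under my uniform-spread assumption):

```python
# Update on Dhoni's 15 runs. Both ranges contain 15 and are equally wide,
# so the likelihood ratio is 1 and the 2:1 odds stay put.
# (Uniform spread over each range is still my assumption.)

prior = {"A": 1 / 3, "C": 2 / 3}          # carried over from Kohli's 108
dhoni_ranges = {"A": (10, 30), "C": (0, 20)}
observed = 15

def likelihood(low, high, score):
    """Uniform likelihood: 1/width inside the predicted range, 0 outside."""
    return 1 / (high - low) if low <= score <= high else 0.0

unnormalised = {f: prior[f] * likelihood(*dhoni_ranges[f], observed)
                for f in prior}
total = sum(unnormalised.values())
print({f: round(p / total, 2) for f, p in unnormalised.items()})
# {'A': 0.33, 'C': 0.67} -- exactly where you were before this piece of evidence
```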

Narrow like an Arrow

Fast-forward to the next Saturday. Lo and behold, it’s another India vs South Africa cricket match, and this time you’re watching it with your other friends, X, Y, and Z. Miraculously, the game plays out the same way as last time. It’s the innings break, and you still don’t know diddly squat about cricket, so you ask them for their opinions on what will happen. Again, this is India, so you only care about the scores that Kohli and Dhoni make.

This time, however, you realize that these friends are rather more opinionated than the previous lot. X says that Kohli will score 102-106 runs and Dhoni will score 33-34 runs. You blink at the narrow ranges, but you accept the prediction. Y says Kohli will score 109-110 runs and Dhoni will score 20-27 runs. Z says Kohli will score 107-131 runs and Dhoni will score 10-24 runs.

Again, you don’t know how to distinguish between their complex theories and whatnot. They are still falsifiable theories, so that’s fine. Not knowing any better, you assign equal confidence to each of them.

Now, the match plays out. Kohli scores yet another hundred - 108, exactly the same as last time. This time, though, you rule out both X and Y because their predictions were wrong. They have been falsified. So, your entire confidence gets redistributed to Z. You believe in him nearly 100% (not fully, of course, because something totally unexpected may happen and you would get screwed).
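Running the same kind of update as in the first match (with the same uniform-spread assumption on my part) shows how brutally fast narrow predictions resolve things:

```python
# Second match: narrow predictions mean one observation settles almost everything.
# Uniform spread over each predicted range is my assumption, as before.

kohli_ranges = {"X": (102, 106), "Y": (109, 110), "Z": (107, 131)}
prior = {f: 1 / 3 for f in kohli_ranges}
observed = 108

def likelihood(low, high, score):
    """Uniform likelihood: 1/width inside the predicted range, 0 outside."""
    return 1 / (high - low) if low <= score <= high else 0.0

unnormalised = {f: prior[f] * likelihood(lo, hi, observed)
                for f, (lo, hi) in kohli_ranges.items()}
total = sum(unnormalised.values())
print({f: round(p / total, 2) for f, p in unnormalised.items()})
# {'X': 0.0, 'Y': 0.0, 'Z': 1.0} -- in this toy model, one observation settles it
```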

This time, when you’re offered a bet about Dhoni’s score, you confidently say, without hedging, that he will score between 10 and 24 runs (as per Z). As it happens, Dhoni scores the same 15 runs, and you win that bet. Similarly, when you’re offered a bet about the outcome of the match, you follow Z’s advice completely, win the jackpot, and go home smiling.

With just one piece of evidence, you were able to narrow your confidence down to a single hypothesis and thus be quite confident. (Ignore the fact that this was a made-up example with just three possible hypotheses.) Earlier, even after the same two pieces of evidence, you couldn’t place much confidence in any one hypothesis.

Destroy Vagueness

What is the moral of the story? The narrower your predictions are, the faster you can eliminate rival hypotheses and arrive at confidence in a single hypothesis about the future.

When a hypothesis makes a narrow prediction, then if it is right and the other hypotheses are wrong, it immediately gains a lot of your confidence (and the others lose yours). You can then trust that hypothesis to make your future predictions. And if its prediction is wrong, it is out of the race instantly, decisively falsified. Whatever happens, you become confident one way or the other, and you do so quickly.

For example, when the law of gravitation predicts that bodies near the Earth’s surface will accelerate towards it at about 9.8 m/s², it forbids bodies that accelerate at 100 m/s² or 0.147 m/s². And because it has been found to hold (roughly) under a wide variety of conditions, we accept it. It is an example of a falsifiable theory that is not false, i.e., something in which we have high confidence.

When a hypothesis makes a vague prediction, however, your confidence in it shifts only a little, regardless of whether it turns out right or wrong. You remain unsure about it. The vaguer your hypothesis is, the more evidence you need to falsify it and move your confidence to worthier hypotheses.
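To put a rough number on that, under the same uniform-spread assumption as in the sketches above: if two hypotheses both survive an observation, and they predicted ranges of width w1 and w2, the odds form of Bayes’ Theorem gives

$$
\frac{P(H_1 \mid \text{data})}{P(H_2 \mid \text{data})}
= \frac{P(H_1)}{P(H_2)} \times \frac{1/w_1}{1/w_2}
= \frac{P(H_1)}{P(H_2)} \times \frac{w_2}{w_1}.
$$

So the narrower hypothesis gains confidence in direct proportion to how much narrower its range is. In the first match, that ratio was 100/50 = 2, which is exactly where the 2:1 odds in favour of C came from.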

Ok. So, how do we get narrow predictions? Well, if you say one thing will happen, you automatically say that all of the other possible things will not happen. You don’t have to go around specifying individually what won’t happen. Just by predicting that one outcome will occur, you forbid all the other outcomes and thus get a narrow prediction.

So, we get vague hypotheses when we don’t say precisely which outcome will occur. We say this might happen, or that might happen, or something else might happen. We may still be falsifiable, like C, who said that Kohli would score 100-150 runs and not 20, and yet not be very precise.

Or take the self-help fraudsters who give recipes like “follow your passion and you will be happier”. Well, precisely how much happier will you be? If you become insanely happy and start singing at the top of your voice, does that falsify the hypothesis? If you only smile a little, does that falsify it? This isn’t an unfalsifiable hypothesis, note: if you became sadder, that would falsify it. But it doesn’t help you decide what to do if you want to become very happy tonight - play video games for an hour, or follow your passion? It is too vague to be useful.

Confidence, boss, Confidence

We all start in a state of uncertainty about any domain. We have a bunch of hypotheses that seem more or less equally plausible. For example, do you get lung cancer from heavy smoking or from some genetic factor? Do people from certain families get mental illnesses, or is it their negative experiences, or the food they eat? In the past, we really didn’t know.

The aim of the game is to redistribute our confidence from the poor hypotheses to the truly accurate ones. If smoking causes cancer, we want to know it, yesterday. We want to believe in the hypothesis with the most predictive power. And it is the narrow, falsifiable hypotheses that help us do that quickly.

Created: October 28, 2015
Last modified: December 19, 2015
Status: in-progress
Tags: think narrowly
