Optimization Process Notes

Optimization?

Taboo the word “optimization” in “optimization process”. I suspect that it doesn’t pay much rent.

Say you want to breathe oxygen. Finding a hypothesis that lets you do that is not too hard on Earth (at least once you have the other necessary adaptations). Still, if you successfully get oxygen most of the time, I will conclude that you have a hypothesis that lets you do that.

But if you want a sword made of diamond, that’s pretty hard. Not many configurations of reality contain one. So, if you can somehow reliably get diamonds, I will conclude that you have a hypothesis that lets you do that.

However, the hypothesis that lets you get diamonds is, I claim, much more complex than the one that lets you get oxygen. Why? Because, given the initial conditions (an oxygen-rich atmosphere and a diamond-free surface), far fewer states of the world contain a diamond in your hand, and picking out those rare states takes more information.

Let’s look at this mathematically. Assume the world is described by only 40 binary variables. So, we have 2^40 possible states that this mini-world can be in. A hypothesis distributes probability mass over those states, with the constraint that the probabilities sum to 1. This gives us 2^40 - 1 degrees of freedom. Since each of those parameters can vary continuously, that affords an enormous space of possible hypotheses - a (2^40 - 1)-dimensional simplex of distributions.
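
As a toy illustration of these numbers (a minimal Python sketch; the 40-variable world is the example above, and the variable names are mine):

    import math

    N_VARS = 40                            # binary variables describing the mini-world
    n_states = 2 ** N_VARS                 # number of possible world-states
    degrees_of_freedom = n_states - 1      # free parameters of a distribution over them
    bits_per_state = math.log2(n_states)   # information needed to pin down one state

    print(n_states)            # 1099511627776
    print(degrees_of_freedom)  # 1099511627775
    print(bits_per_state)      # 40.0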

If we believe that a particular outcome (having oxygen) is true in many of those states, then we will judge that a hypothesis that lets you land in that set of states doesn’t need much information. Conversely, if we believe that an outcome is rare (having a diamond), then we will judge that a hypothesis that lets you land in its set of states needs a lot of information. The trick is that the hypothesis must concentrate its probability mass mainly on these states and not on others. Otherwise, you would waste your resources on false positives, like digging for diamonds where there are none.
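
One way to make “a lot of information” precise - my own gloss, not a formula from the post - is to count the bits needed to narrow the 2^40 states down to the target set, assuming a uniform measure over states:

    import math

    TOTAL_STATES = 2 ** 40

    def bits_to_hit(target_states):
        """Bits of information needed to land inside the target set."""
        return -math.log2(target_states / TOTAL_STATES)

    # Common outcome (oxygen): suppose it holds in half of all states.
    print(bits_to_hit(2 ** 39))  # 1.0 bit
    # Rare outcome (a diamond sword): suppose it holds in only 16 states.
    print(bits_to_hit(16))       # 36.0 bits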

In other words, taking up a course of action is like betting. You have only a finite amount of resources, and the more of them you put on the winning outcome, the more you win back. You don’t even have to think about the choices consciously. The quality of your thinking is judged solely by the quality of your actions - by how much you win. The ritual of cognition you used doesn’t matter if you fail anyway. We assume that you want only a few outcomes to occur - you want a diamond, not a piece of coal. So, the more you bet on the right outcome (digging in a diamond mine as opposed to a coal mine or a cabbage patch), the more powerful we can surmise you are.
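
Here is a toy version of the betting picture (purely illustrative; the sites and per-unit probabilities are made-up assumptions): a fixed budget of effort is spread across digging sites, and the allocation is scored only by its expected winnings:

    # Made-up chance of finding a diamond per unit of effort at each site.
    SITES = {"diamond_mine": 0.30, "coal_mine": 0.01, "cabbage_patch": 0.0}

    def expected_diamonds(allocation, budget=100.0):
        """Expected diamonds found, given the share of the budget spent per site."""
        return sum(budget * share * SITES[site] for site, share in allocation.items())

    spread_thin = {"diamond_mine": 1/3, "coal_mine": 1/3, "cabbage_patch": 1/3}
    all_in = {"diamond_mine": 1.0, "coal_mine": 0.0, "cabbage_patch": 0.0}

    print(expected_diamonds(spread_thin))  # ~10.3
    print(expected_diamonds(all_in))       # 30.0

The score depends only on where the resources went, not on what internal ritual produced the allocation.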

However, we have to remember to separate our model of a mind from the underlying mechanism. It’s easy to talk about what a human “wants”. But what about natural selection, something that also reliably hits narrow targets?

Think about it in terms of our model of the object. If an organism, like a dog, keeps hitting the same targets again and again (like getting bones to chew), then that raises our confidence in the hypothesis that (a) this organism prefers that type of target and (b) it is capable of getting it. We infer (a) because, of all the things it could be getting, it allocates its resources so as to get this one thing (a bone). We get stronger evidence by changing the circumstances - maybe offering it a variety of “foods” - and seeing if it still chooses only the bone. So, we infer that the object has certain preferences because it intervenes in the world to produce a narrow range of outcomes.
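
One way to cash out that inference is a toy Bayesian update (my own sketch; the probabilities are made-up assumptions): compare a “prefers bones” hypothesis against a “picks at random” hypothesis as the bone-choices accumulate:

    N_OPTIONS = 5                     # "foods" offered on each trial
    P_BONE_IF_PREFERS = 0.95          # assumed choice probability if it prefers bones
    P_BONE_IF_RANDOM = 1 / N_OPTIONS  # 0.2 if it picks at random

    posterior = 0.5  # prior probability that the dog prefers bones
    for _ in range(5):  # the dog picks the bone five trials in a row
        numerator = posterior * P_BONE_IF_PREFERS
        posterior = numerator / (numerator + (1 - posterior) * P_BONE_IF_RANDOM)

    print(posterior)  # ~0.9996: five straight bone-choices make (a) very likely

Offering a wider variety of foods lowers the random-choice likelihood per trial, which is why varied circumstances give stronger evidence.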

I’m not fully clear about this.

Notes

These are my notes on Eliezer’s sequence of posts on optimization processes.

Created: December 1, 2015
Last modified: December 1, 2015
Status: in-progress
Tags: optimization process, notes
