Variables
Summary
A category is a cluster of things in Thingspace. Thingspace is nothing but the infinite-dimensional space formed by observables. A good category is one that has low entropy (?). You want to group objects that have mutual information about each other (why?) and hopefully get a short key that points to the whole category. I think you want to compress the message needed to describe the category.
The question is: what about the other remaining things? How do you describe them? What is the global property you’re trying to minimize? Ah. You want to partition Thingspace so as to minimize the length of messages you expect to send. If you think you’ll talk about “biped, mortal, featherless, …” creatures a lot, you should have a succinct word for them - “humans”. You cover the whole Thingspace using your categories (with a default singleton category for obscure objects) and you aim to minimize the entropy of your probability distribution over categories, which happens as you combine things with mutual information.
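To make the message-length idea concrete, here is a minimal Haskell sketch (the category names and probabilities are invented for illustration). It computes the entropy of a distribution over categories, which is the lowest expected message length you can achieve by giving each category a code of length -log2(p).

-- A hypothetical distribution over the categories you expect to talk about.
categoryProbs :: [(String, Double)]
categoryProbs = [("human", 0.5), ("dog", 0.25), ("cat", 0.125), ("other", 0.125)]

-- Entropy in bits: the minimum expected message length, achieved by giving
-- each category a code of length -log2(p).
entropy :: [(String, Double)] -> Double
entropy dist = negate (sum [p * logBase 2 p | (_, p) <- dist, p > 0])

-- entropy categoryProbs is roughly 1.75 bits per message, on average.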
After you’re done categorizing things, you only need to observe variables that matter - those that are part of the diagnostic key of categories. The rest can be inferred.
Wait. The main benefit categories give you is a short diagnostic key that lets you infer other characteristics for free. However, the idea is still the same as getting a short message to describe this category - the message is just a small set of features instead of some obscure code.
I think it’s the same idea as that of Bayesian networks - you’re factorizing the uncertainty. Thus, you reduce the number of states the system can be in and have to do less work with your causal model (?).
Question: Why do you want to categorize the world? Yes, you can infer other properties given a small key. So what? What can you do with "categories" that you couldn't do with raw probabilities?
I think that certain observables are conditionally independent from each other, given our goals.
You categorize the possible configurations of a set of observables so that you reduce their entropy. So, with 5 binary variables that are strongly correlated, you only have to deal with 2 possible states instead of 32.
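A quick sketch of that claim, assuming the extreme case where the five variables are perfectly correlated (the numbers are only illustrative): the joint distribution collapses from 32 equally likely states (5 bits) to 2 (1 bit).

-- Entropy (in bits) of a list of outcome probabilities.
entropyBits :: [Double] -> Double
entropyBits ps = negate (sum [p * logBase 2 p | p <- ps, p > 0])

-- 5 independent fair binary variables: 32 equally likely joint states.
independentCase :: Double
independentCase = entropyBits (replicate 32 (1 / 32))   -- about 5 bits

-- 5 perfectly correlated binary variables: only 2 joint states ever occur.
correlatedCase :: Double
correlatedCase = entropyBits [0.5, 0.5]                 -- about 1 bit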
It helps that certain variables are conditionally independent of others, so you can break up Thingspace into smaller spaces (?). If you have conditional independence information, why not create a Bayesian network with it?
Notes on Words
(from Eliezer's sequence, A Human's Guide to Words)
Note: Keep in mind that a multi-level model exists only in your map, not in the territory. There is no Boeing 747 by itself, there is a configuration of quarks that meets the observational tests you apply to the 747.
Definitions exist to help us answer queries about observables. Whether Barney “is a man” depends on whether you want to feed him hemlock or want to marry him or something else (and thus want to know if he will die, make you happy, etc.).
You shouldn’t take out more information than you put in. That is, if you say that noticing the features “big”, “yellow”, and “striped” lets you identify something as a tiger, and further infer other features like “dangerous” and “fast”, then you must have previously observed that correlation for a sufficient number of animals. Else, you are making unjustified inferences, and might be wrong.
So, a category is a hypothesis and you need sufficient evidence to raise its posterior probability. In this case, you should see lots of instances where “big”, “yellow”, and “striped” animals had the other properties and very few instances where they didn’t (assuming that big-yellow-striped uniquely identifies the tiger category; otherwise, you need to add more diagnostic features, like whiskers and tail).
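Here is one way to picture that update, as a hedged Haskell sketch with invented numbers: each animal for which "big, yellow, striped" co-occurs with "dangerous" and "fast" multiplies the odds of the cluster hypothesis by a likelihood ratio.

-- Invented numbers: how strongly each confirming animal favours the
-- "these features really do cluster" hypothesis over "they're unrelated".
likelihoodRatioPerAnimal :: Double
likelihoodRatioPerAnimal = 3.0

priorOdds :: Double
priorOdds = 0.1   -- start out 10:1 against the cluster being real

-- Posterior odds after n confirming observations (odds form of Bayes Theorem).
posteriorOdds :: Int -> Double
posteriorOdds n = priorOdds * likelihoodRatioPerAnimal ^ n

-- posteriorOdds 5 is about 24, i.e. roughly 24:1 in favour after five animals.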
There’s nothing more to a category than the observables it can constrain. Once you answer that a blegg is blue, egg-shaped, etc. there’s nothing more to be said. There is no more information about a blegg from that category.
Compression
How faithfully can we represent the real world in our mind? The universe is very big and our mental capacity is very small. So, it seems we have to accept a loss of detail when we compress the territory into our map.
What are the limits of our mental representation? One limit is the amount of evidence you can get. To have predictive power about a lot of things, you need a corresponding amount of evidence. I’m not sure about how many bits of information we can get (or even how to calculate the number of bits of information from some source). More prosaic limits are those of brain frequency, memory input rates, etc.
Basically, are there states of the world among which you cannot distinguish? Well, we're limited by the precision of our senses. Our eyes cannot distinguish between two specks whose sizes differ by mere nanometres. We can't see X-rays, I think, or hear dog whistles. The event where someone blows a dog whistle and the event where they don't look the same to me if I can't see them whistling and there's no dog (or other detector) around. The statement "the room is quiet" doesn't differentiate between them.
Defining Categories
How should you define categories? Look at Thingspace and see which things are close to each other and draw a boundary that covers only those things and none others.
Wait. What is "Thingspace"? Take all the observation tools you have and observe different configurations of quarks. Get their height, weight, colour, etc. by using your tools. Now, create an infinite-dimensional space with those features as dimensions and locate each "object" as a point in that space. This is Thingspace.
I still feel this is not quite canonical. What do we mean by an "object"? Why take a whole object? Why not just one half of it, or 2/15ths of it?
Reductionism and Variables
In other words, I feel we’re still doing some unspecified, implicit mental work here. The key question is: can you teach a computer to do this, even in principle? We may not have the processor speed or memory to host a sufficiently powerful AI, but can you write a program that works even on a small part of the universe?
Better still, as a proof of concept, can you show that your program works for some made-up universe, something much simpler than our own universe? What are the questions you would like answered? What are the sources of evidence available to the program?
Here's an unrelated question: why does Bayes Theorem work? If the universe is an automaton running the laws of physics, why do Bayes Theorem and the rest of math work? In fact, what does Bayes Theorem help us do?
In short, I’m not able to reconcile the ideas of reductionism and probability theory. I’m struggling to understand how we can use variables and probability theory (which talks in terms of those variables) in a universe that has only one level of reality and where things proceed as per some fixed rules (the laws of physics).
Personally, though, what is the precise query I care about? My question is: what variables should I use when trying to make a causal model that represents the knowledge in some textbook or research paper? Isn’t that the sensible thing to do - if causal thinking is the bee’s knees and very efficient and all, then shouldn’t you try to represent as much of your knowledge as possible in a causal model? Anyway, that’s what I plan to do. Why? Because I believe causal models will help me answer queries about the world more accurately than other forms of knowledge representation (including just the default mental maps we have). I believe they will also help me solve problems faster, by using Bayes Theorem to decide which experiments to run or what to observe so as to get the most information.
Pointers to Mutual Information
You want to convey information to another person (or yourself) and you want to use the shortest message you can to do so. The way to go is to assign each event a binary string whose length is the negative log of its probability - frequent events get short strings, rare events get long ones. So, if you expect it to rain with 25% probability, then assign a string of 2 bits (since -log2(0.25) = 2 bits); since you expect that event to happen 25% of the time, the contribution of this event to the expected message length is 0.25 x 2 bits = 0.5 bits.
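A small Haskell sketch of that arithmetic (nothing here beyond the formulas just stated):

-- Ideal code length (in bits) for an event of probability p.
codeLength :: Double -> Double
codeLength p = negate (logBase 2 p)

-- That event's contribution to the expected message length.
contribution :: Double -> Double
contribution p = p * codeLength p

-- For the rain example: codeLength 0.25 gives 2 bits,
-- and contribution 0.25 gives 0.5 bits.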
The diagnostic properties of a word (“mortal featherless biped” for human) should have mutual information about each other or about other properties (one heart, two lungs, etc.).
Exercise: With all this knowledge about mutual information and Thingspace, how would you create a language from scratch?
Wait a minute! F*ck! If words describe clusters in Thingspace, then you had best ensure that your words are as succinct as possible! You want to capture exactly the things you care about and no others. PG was right:
Maybe I’m excessively attached to conciseness. I write code the same way I write essays, making pass after pass looking for anything I can cut. But I have a legitimate reason for doing this. You don’t know what the ideas are until you get them down to the fewest words.
– Paul Graham, Persuade xor Discover
Hmmm… I think it’s not just words that hint at categories. Phrases and entire sentences probably do the same thing too. If the word “conciseness” helps you answer queries about the length of an essay or a speech, then PG’s statement “maybe I’m excessively attached to conciseness” helps you answer questions about the length of his average descriptions and about his unwillingness to publish something that isn’t concise and his favourite works and such, especially compared to other authors.
If you claim to talk about a bunch of important properties that are related to each other, i.e., they tell you something about each other, then you must have words or a short combination of words that captures it exactly. If they are important, they probably come up often and so deserve short words or phrases, even if only as technical jargon. Also, if these properties have mutual information, then you probably have a short key set of diagnostic properties that lets you infer everything else, the way yellowed skin lets you infer jaundice and the appropriate treatment. You should be able to do better than explaining each concept in isolation. Instead of belabouring the ideas of "featherless", "biped", and "mortal", just say "human"; your reader will catch these properties and a lot more.
(I suspect that giving one concrete example, like that of “human” above, can obviate a whole bunch of words simply by conveying richer information, especially if it is highly representative of that category.)
Basically, your writing should work by specifying only the diagnostic key to the category. That’s how you compress messages. The reader should be able to infer all the other properties using the few words you specify. Those ideas should follow naturally. If you’re repeating ideas ad nauseam, you haven’t hit upon the few characteristics that uniquely identify this concept.
So, first of all, usage of a word is falsifiable. You really can’t just use words any way you want. A word refers to a category, which in turn gives you membership tests you can use on a part of reality. Each category constrains observable variables in some way; it forbids certain outcomes.
Next, a category is only as fine-grained as its membership tests. You decide whether a thing belongs to a category by a diagnostic test or by its similarity with the other things in the category. You can only infer things about a new member that all or most of the existing members have in common, like mortality in humans or flying ability in birds. As you get better tools, you can find out more things they have in common, like the 23 pairs of chromosomes humans have. Or even things they don't, like blood groups, in which case you split the category into subcategories.
In other words, you get different partitions of Thingspace when you choose different property-sets to form categories. These need not even be hierarchically organized like animal, vertebrate, feline, etc. When you choose “lives on land”, humans and snakes seem close together and whales seem far away; but when you choose “give birth to live babies”, humans and whales come close and snakes are far away. Both let you infer different properties (in the first case, can move on land and can deal with land predators; in the second case, is warm-blooded and has hair).
Bootstrapping your Thingspace
More precise observation tools like microscopes let you distinguish among more states than before, while inventions like X-ray machines let you observe completely different variables, thus adding a new dimension to Thingspace. So, more precision means you form smaller, tighter clusters and find more similarities within each one: what looked like the same microbes to the naked eye might have crucial differences in size and shape that let you distinguish between them and find specific cures. And new dimensions let you form totally different categories using the new variables, like diseases that cause bone damage.
So, more precision means splitting existing categories; more variables means forming totally different categories or just splitting old ones.
This is important because you either get information from your existing tools, or you get new or more precise tools. When you get information about an object from your existing tools, you just add it to your Thingspace and use similarity based on existing variables to decide where the category boundaries are. When you get a more precise tool, you update your similarity measure and thus redraw your category boundaries. Finally, when you get a new tool and thus a new variable, you measure where existing things stand with respect to the new dimension and maybe create new categories or just extend old ones.
So, each similarity measure defines one category. And the similarity measure depends on inputs from a set of observation tools. Take Eliezer’s example of choosing five properties like colour, shape, luminance, texture, and interior; you get one category - the Blegg-vs-Rube category. You may not need all five properties to test for membership; the colour and shape may be enough to let you infer the others. However, if you get new tools or ones with more precision, you can create new similarity measures that include them, and thus get new categories.
Earlier, you had plain-old categories, using imprecise tools or only a few features; now, you can have finer-grained, feature-rich categories. Say you get a new tool measuring elasticity. Earlier, even if you could predict the luminance from the colour and shape, you were still clueless about the object’s elasticity and thus could not decide whether to play tennis with it. Now, even if the new tool is primitive and imprecise, telling you only whether you can bounce the object but not how much, you can make your decision more confidently than before. You are now less uncertain than you were, which is the point.
Shouldn’t you use all the variables you have when creating categories? No, only when they have mutual information. What does that mean? If knowing the output of one observation tool tells you something about the output of another, then you should probably put them together in one category aka use them in the similarity measure of that category.
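Here's a minimal sketch of what "have mutual information" cashes out to, with a made-up joint distribution over two binary observables (say, the outputs of a colour test and a shape test): if the mutual information is well above zero, the two tools belong in the same similarity measure.

-- A made-up joint distribution over two binary observables,
-- e.g. (isBlue, isEggShaped) from the blegg example.
joint :: [((Bool, Bool), Double)]
joint = [((True,  True ), 0.45), ((True,  False), 0.05),
         ((False, True ), 0.05), ((False, False), 0.45)]

marginalX, marginalY :: Bool -> Double
marginalX x = sum [p | ((x', _), p) <- joint, x' == x]
marginalY y = sum [p | ((_, y'), p) <- joint, y' == y]

-- Mutual information in bits: how much one tool's output tells you
-- about the other's. Zero would mean they are independent.
mutualInformation :: Double
mutualInformation =
  sum [ p * logBase 2 (p / (marginalX x * marginalY y))
      | ((x, y), p) <- joint, p > 0 ]
-- roughly 0.53 bits for the distribution above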
Your work is not done after you define similarity measures, though. You still have to decide on each category’s diagnostic features: the smaller set of properties that is mostly sufficient to let you infer the rest. How do you do this? (TODO)
The diagnostic features are nothing but the concept rule that tells you whether any object is in or out of this category. We want to create rules that are as simple as possible, that is those that distinguish the members of this category with the least amount of information. To describe any given category, we need a message length equal to its entropy. Therefore, to minimize our descriptions, we want categories with minimum entropy.
More generally, I think the idea is to capture things you have seen into the fewest bits. The way to do this is to put together things with mutual information. A poor category covers things that don’t want to belong together, like a map fractured into a hundred tiny blobs instead of one clean, simple entity.
Current Conclusion
So, how does all this help answer my question about variables and reductionism? Well, you use a category as a variable. That’s the answer. You are free, of course, to choose a random set of observational tools to form your own category, mutual information be damned. But, because those properties won’t be correlated with each other, you will not be able to get small keys with which to infer the rest. The results you get could be more succinctly expressed by splitting that category apart.
Corollary: To become less uncertain about the world, figure out categories where the properties have high mutual information. Now, knowing just a few properties about a member, you can learn the rest for free. You get a lot of information for a little.
Note that you never have a floating “variable” that is free of any connection with an observational tool. This variable would not depend on the state of the world in any way, and thus knowing its “value” wouldn’t help us decrease our uncertainty.
I was concerned earlier because people seemed to be using high-level concepts that weren't fully defined in terms of obvious features. Like depression, for example. It wasn't a straightforward observable (no depress-o-meter existed) and it connected different ideas like explanatory style, mood, physiological behaviour, etc.
The most worrying part about such high-level concepts, however, was that there always seemed to be something more that the researchers had in mind. The diagnostic symptoms were just the tip of the iceberg. If they said you were depressed, they could also predict how likely you were to quit your job or commit suicide or be a female or be older than 35. My mind boggled at such facts. I kept failing the acid test of understanding: can I make a computer do this? How could I? They never made all of these other facts explicit. It was all just understood. My (hypothetical) computer program would just stare blankly if asked to predict the rate of suicide. That wasn’t in the experimental data at all.
Now, I think I understand. “Depression” formed a category in their mind. I suppose the scientists in a field gain this knowledge by reading a lot of other papers, some of which will tell you the suicide rate among depressives. The “depression” category itself isn’t explicitly stored anywhere. It’s all in their heads, and implicitly in the mass of research papers and books that makes up a field.
However, remember that a category is explained completely by the observational tools you use to decide its membership. There is no hidden essence of a category that remains even after you read off all its member properties. Once you know the colour, shape, luminance, texture, and interior of the Blegg-vs-Rube members, there is nothing more to know. So, yes, researchers talking about “depression” may have a lot of other properties in mind, but we can track those down, and they all come down to things you can observe. There’s no magical “understanding” they have that you can’t have.
But it’s not all good news. Just because you can technically track down all the properties doesn’t mean it’s easy or painless. There’s no single database (that I know of) where you can read off the properties for some category like “depression”. It’s scattered over several books and important research papers that you are assumed to have read.
Ah! This tip-of-the-iceberg nature of categories might be why you need to have a lot of background knowledge in any field. To infer all the consequences of a new finding, you need to know what other properties are implied by the category. Similarly, to flag down discrepancies, you must know the properties implied, like when a depressive matches every other property but is more productive at his job - something is wrong. So, there’s a lot hidden behind the high-level variables in a field (like “depression” or “tiger” or “functional programming”).
I’m still not convinced, though. Do you really need to know all of those properties? Also, isn’t a small diagnostic key enough to let you infer the rest of the properties? Do you need to store the entire category in mind? Can’t you refer to it when needed?
Summary about words: Words are more than just their dictionary definition. They imply a whole lot more.
Concept-space and the Limits of Knowledge
Say you have 40 binary variables. By choosing different values for each variable, you have 2^40 (roughly one trillion) possible objects you can see. Now, a concept is a rule that includes or excludes examples. A concept may include or exclude any particular object. So, the number of concepts is 2^(number of objects) = 2^(2^40) concepts!
To predict what will happen in the future, you assert that all the data you've seen so far is explained by a particular concept and that all future data will be explained by it as well. However, for this you need to pick out exactly the right concept out of two-power-trillion concepts (in the above toy example with just 40 binary variables)! That means you need to observe log2(2^(2^40)) = 2^40 = one trillion examples to come to the right answer. Each example gives you one bit of information - whether it obeys the rule or not. And remember, this was a toy scenario with just 40 binary variables aka just 40 bits or 5 bytes.
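The counting itself is trivial to write down; a tiny Haskell sketch, using the text's toy numbers:

-- Toy universe from the text: 40 binary observables.
numObjects :: Integer
numObjects = 2 ^ 40            -- roughly 1.1 trillion distinguishable objects

-- Each concept labels every object IN or OUT, so there are 2^numObjects
-- concepts. That number is far too large to compute directly, but the
-- number of bits needed to single out one concept is just its log2:
bitsToPickOneConcept :: Integer
bitsToPickOneConcept = numObjects   -- log2 (2^numObjects) = numObjects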
So, here in the real world, where objects take more than 5 bytes to describe and a trillion examples are not available and there is noise in the training data, we only even think about highly regular concepts. A human mind - or the whole observable universe - is not nearly large enough to consider all the other hypotheses.
From this perspective, learning doesn’t just rely on inductive bias, it is nearly all inductive bias - when you compare the number of concepts ruled out a priori, to those ruled out by mere evidence.
The way to carve reality at its joints, is to draw simple boundaries around concentrations of unusually high probability density in Thingspace.
The probability distributions come from drawing the boundaries.
What does that mean? I think your probability distribution is over the concepts within the concept space you’ve chosen. So, if you restrict your scope to just concepts that obey a certain format, like a decision tree, then your probability mass is divided over concepts that look like decision trees. This set of all decision trees is a subset of the much vaster general concept space, and I suppose this is what Eliezer means by drawing boundaries to shrink your concept space.
Still unanswered questions
How do we figure out the diagnostic key?
Crucially, how do categories overlap with causal models? If categories are based on mutual information (which is conditional dependence), won’t causal models cover that? If you know A causes B and C, for example, then B and C will be correlated. Isn’t it more parsimonious to let causal models contain that information instead of representing it in your categories too (like having A, B, and C in a category with A as the key)?
Wait. Now, what does “A causes B” mean, given that A and B are categories?
Why talk about categories at all? Why not just deal in terms of causal models? Aren’t they richer and more compact? Can’t you do with a causal model everything that you do with a category? Why do we even have “words”? If it’s about succinctness, why not transmit causal information? Is it that we don’t always have causal information? When would that be? Maybe categories are for places where you still haven’t broken down the causal structure.
Is the trillion bits analogy valid at all? Don’t we know that the universe runs on the laws of physics? (How uncertain are we about that?) Isn’t it just a matter of logical deduction from there on?! But you have bounded resources, right?
What is inductive bias? Does Bayes Theorem incorporate inductive bias?
Don't causal thinking and locality of causality factorize our uncertainty? Won't that bring the trillion-bits uncertainty down to something manageable? Put another way, how many bits of information do we need to get a causal model that can predict everything we want? What are the assumptions that help us compress the concept space so much? Is it the causal assumptions like the Causal Markov Condition and Faithfulness and such?
I think the reply to this is that we artificially limit the number of variables when creating our causal models. A perfectly predictive model of a system would have as many variables as the quarks in it. This is the hidden piece I was missing so far. I was underestimating the number of variables. I kept thinking the models we have, like “smoking causes cancer” or even just “a = F / m” were sufficient to answer any questions we would have. Far from it. There is a critical assumption we make in causal modelling known as causal sufficiency: we assert that we have captured all the variables that might be causally linked to the ones we care about. And this goes out of the window once you realize that any of the quarks in your system might be manipulated into states that overturn the conclusions you draw, but you won’t predict that because they’re not part of your model.
In short, what is the best path ahead for making the most productive decisions and knowing the most about the world? Should I focus fully on applying the causal model idea everywhere? What are its weak points?
Note: You don’t always need a causal model. If all you want to know is where the treasure is buried, just construct a Bayesian Network, do some PGM magic, and go get the treasure. You’re not intervening along the way. Similarly for inferring someone’s characteristics after hearing about their backstory: you don’t need a causal model per se.
Answer the reductionism questions. How does Bayes Theorem work, etc. I suspect they’re about updating your beliefs, about coming to accurate maps of the territory. And yes, if you have categories backed by impoverished tools, you will have an uncertain map of the territory, like humans with our models over “the wings of a plane” and not the quarks comprising it.
Give me explicit examples of categories, especially variables as categories. It’s fine when you have a physical object whose properties you can measure. What about hidden variables like the neurological changes that comprise depression? I think they are completely described by the observables you can measure right now. But, what if they predict certain things about observables you don’t have access to, like the probability of depressives committing suicide (if you haven’t measured that)? I think that you consider each such hypothesis as a separate point in Thingspace. Like the normal theory about depression plus a 10% rate of suicide vs another one with a 15% rate of suicide, and so on.
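A sketch of that last idea, with invented counts: two versions of the "depression" hypothesis, identical except for the suicide rate they predict, can be compared by how much probability each assigns to the data you do eventually observe.

-- Binomial likelihood of observing k cases out of n under rate p.
binomialLikelihood :: Double -> Int -> Int -> Double
binomialLikelihood p n k =
  fromIntegral (choose n k) * p ^ k * (1 - p) ^ (n - k)
  where
    choose m r = product [1 .. toInteger m]
                 `div` (product [1 .. toInteger r] * product [1 .. toInteger (m - r)])

-- Invented data: 12 cases observed among 100 tracked patients.
likelihoodRatio :: Double
likelihoodRatio = binomialLikelihood 0.10 100 12 / binomialLikelihood 0.15 100 12
-- comes out a bit above 1, so this (made-up) data mildly favours the 10% version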
Test: Show that this whole idea of categories made of properties with mutual information actually lets you compress your knowledge. Show me concrete examples. (Biological taxonomy is one, I think.)
One Simple Example
Best of all, just give me one fully-specified structural causal equation. Just one. It can be as simple as you like. All it should do is represent X = f(A, B, C) and show how if you “manipulate” A, then X changes as per f. To make it even easier, maybe do this first in a simpler made-up universe or in a programming context. Then, move on to the real world.
Pragmatically, why do I object to abstract variables, like “depression”? Because I don’t feel it’s sufficient to cause its effects. But, why does it need to match my physical intuitions? Why can’t it just be mathematical? Because it has to work under intervention. How can you change the value of “depression”? There’s no single physical handle. I feel it violates locality of causality - how can it affect the other thing if it’s not even close? Maybe the category “depression” contains the other necessary conditions, like “is human”, “is alive”, etc.
In other words, I accept that you can observe an abstract variable (by consulting the observation tools for its category). What I don’t understand is how you can change its value. And I don’t see how that change propagates to produce the effects (which are also abstract variables).
So, here’s the challenge: find one simple example of a causal model, with abstract variables no less, and show how you can observe and manipulate the parent variables to change the child variable.
Let's take an example from the field of programming. Say, a Haskell function f x y = x + y. We can legitimately say that the value of the output of f depends on the values of x and y. We can "change" aka provide x and y as arguments. Here, f is an indivisible mechanism (a pure function in Haskell). You can observe x and y by using them in other functions or inspecting their values in the Haskell interpreter. Of course, you have to do the whole thing in a Haskell interpreter or executable program, otherwise it won't work. That's the base assumption. Wait, for that you need to assume that you have a computer and it has sufficient RAM and processing power to run the interpreter and that the computer is switched on and running.
Hmmm… so we work with several layers of abstractions. We assume that the Haskell interpreter is working fine, which in turn depends on the operating system working fine, which depends on the physical computer working fine, which depends on the power source and other factors.
In fact, we don't really care too much about the layer below the Haskell interpreter. It wouldn't matter if the computer were made of shiny-new vegetable-based hardware, powered by cookies. As long as the interpreter let me do operations like 2 + 2 to get 4, I would be fine.
Still, how am I confident that things will work? I haven't tested out each Haskell operation to see if the interpreter hasn't somehow been corrupted today. I haven't peered inside my laptop to check for blown fuses. But, I predict that everything I try will work as usual… and it does! That sequence of successful narrow predictions gains my hypothesis a lot of confidence, as per Bayes Theorem.
So, the hypothesis that my Haskell interpreter plus operating system plus laptop plus power supply all "work correctly" is just one out of all possible hypotheses. I predict that when I open my Haskell interpreter and enter 2 + 2, it will spit back 4. Of all the configurations of matter that my laptop could be in, a very small percentage would lead to that particular outcome. The hypothesis saying that my laptop battery is broken, for example, predicts I won't even be able to boot up my OS. Similarly, the hypotheses saying my hard drive is erased or that my keyboard is stuck predict that I won't be able to do any work. So, my specific "all is well" hypothesis makes a highly confident, narrow prediction that pretty much no other hypothesis makes. And when I actually press those keys and see the result 4, Bayes Theorem boosts my confidence in this hypothesis sky-high and downgrades nearly all others to oblivion.
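In odds form, the update described above looks roughly like this (all the numbers are invented; the point is only the shape of the calculation):

-- Start out undecided about whether everything works, just for illustration.
priorOddsAllWell :: Double
priorOddsAllWell = 1

-- P(see "4" | all is well) is nearly 1; P(see "4" | some unspecified fault) is tiny.
likelihoodRatioForSeeing4 :: Double
likelihoodRatioForSeeing4 = 0.99 / 0.001

posteriorOddsAllWell :: Double
posteriorOddsAllWell = priorOddsAllWell * likelihoodRatioForSeeing4   -- about 990:1 in favour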
Notice however that my “all is well” hypothesis isn’t even a very detailed one. I have no clue what’s going on inside my laptop battery. I don’t know the intimate details of the motherboard chips. To fully specify what I believe is the state of my laptop, I would need a huge amount of memory - proportional to the number of atoms inside it. I don’t know exactly what my laptop looks like inside, but I’m still able to make accurate predictions, which is all that matters.
Actually, that's not fully true. I'm able to make accurate predictions for a certain class of questions. I can talk about whether my Haskell program above will work or not, which is still a non-trivial unique prediction, mind you; most other hypotheses say it won't work. But I can't really talk about exactly how many milliseconds it will take or how much memory it used. Wait. No, I can use :set +s to ask the interpreter to print exactly that information:
Prelude> 2 + 2
4
(0.02 secs, 3811272 bytes)
Still, I can’t talk precisely about other things like the exact temperature in the battery or total weight of dust inside the casing. They are part of my laptop and could potentially interfere with its functioning (like if the battery temperature were a thousand degrees celsius). So, all I predict is that the battery temperature isn’t at catastrophic levels, or even further, that it is probably around the normal functioning temperature for a laptop battery, though I’m damned if I know what that is.
Coming back to the point, I can make narrow predictions only for a small class of queries, specifically those related to my laptop doing laptop-stuff. I want to browse the web on my laptop, so I care about whether it will be able to do so. I want to run Haskell programs, so I care about whether it will be able to do so. I don’t particularly want to use it to reheat pizza, so I don’t care whether it will be able to do so. Similarly, I don’t care if the motherboard will be safe enough to play frisbee with or whether the touch-sensitive mousepad will be suitable for use in a smartphone. These just form the tip of the iceberg. I haven’t even begun on what you can do if you get really creative and start rearranging the raw material of the laptop till it’s unrecognizable. It doesn’t matter.
I care about using my laptop for a specific range of activities and that’s all I will bother forming hypotheses about. I can’t help it. There’s just too much stuff in this world and I have only so much time and energy. I can’t look at every aspect of it. All reasoners with bounded resources will do this in one way or another, I suspect.
So, I will learn just enough about my laptop to help me predict whether it will suit my purposes. For a brand new laptop, I need to know very little about its innards, I can assume it will just work the way I want. With an aging laptop like mine, however, I need to be careful about how I use it. Over time, I have come to learn what applications slow it down, what specific actions trip bugs in the operating system, and how often I need to clean the fan. I have had to, otherwise my laptop becomes unusable.
Remember though that I am quite clear about what factors can possibly affect my laptop’s functioning or, more precisely, what factors can’t. The position of the moon, I’m fairly certain, doesn’t change my laptop speed; neither does the name of the current President of France. The direct causes of my laptop’s operation are the things that make it up and the inputs to it, like the internet cable (and thus the data that pass through it) or the power cord.
The Smoke Vanishes: Value of Information
So, I have a rough idea of the important players in my laptop’s functioning, but I start with a relatively simple hypothesis about how they combine. I begin assuming that they will work just right (i.e., let me use my OS and applications). Over time, when my laptop defies my expectations, maybe by taking a long time to load an application or by making a loud noise, I add more detail to my hypothesis - like “a clean fan matters”, or “12 GB Bluray movies are just not happening”. Eventually, when enough things break, I won’t try to figure out how to reshape the atoms to get it to work again; I’ll just buy a new laptop. That way I don’t have to waste too much time trying to understand the physics, computer architecture, and other myriad arts needed for the job. I simply compare the cost of buying a new laptop with the cost of spending time doing something so hard.
That’s another insight: when you’re a bounded reasoner, you have to choose where to spend your resources. Studying something is never a purely intellectual decision. Principle of Economics #2: People face tradeoffs. Every action you take has an opportunity cost; you could be doing something else with your limited time on this planet. So, if you want to get something done, you have to trade off the time needed to figure out a devilishly complicated system vs the cost of a replacement that will do it.
In short, you need to consider the value of the information that you’re aiming to get. Decision theorists have worked out ways to calculate the precise Value of Information. Essentially, you look at how much better off you expect to be after you receive the information, and if that exceeds the cost of getting that information, you’re golden. The key is that information is useful only insofar as it can potentially make you change your decision. Only your decisions decide how much value you get and so information can add value only by potentially guiding you to better decisions. For example, if you haven’t any money left in your budget, there’s no point looking at the price tag of a shirt you like. Whether it is $20 or $2000, your decision is made: you’re not going to buy it, so you might as well save yourself the effort.
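A minimal sketch of that calculation, with an invented two-state, two-action decision (a "repair vs replace" choice and a perfect diagnostic): the value of the information is the gap between your best expected outcome with it and without it, and you pay for the diagnostic only if it costs less than that gap.

-- Invented utilities: two actions, two possible states of the laptop.
utility :: String -> String -> Double
utility "repair"  "Good" = 80
utility "repair"  "Bad"  = 20
utility "replace" "Good" = 50
utility "replace" "Bad"  = 50
utility _         _      = 0

pGood :: Double
pGood = 0.5

actions :: [String]
actions = ["repair", "replace"]

-- Best expected utility if you must decide without the diagnostic.
withoutInfo :: Double
withoutInfo = maximum [ pGood * utility a "Good" + (1 - pGood) * utility a "Bad"
                      | a <- actions ]                              -- 50

-- With a perfect diagnostic, pick the best action in each state.
withInfo :: Double
withInfo = pGood       * maximum [utility a "Good" | a <- actions]
         + (1 - pGood) * maximum [utility a "Bad"  | a <- actions]  -- 65

valueOfInformation :: Double
valueOfInformation = withInfo - withoutInfo   -- 15: the most the diagnostic is worth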
Goals dictate Abstractions
That’s great, but it still doesn’t answer the question of how you go about improving your hypothesis. No, the point is that you only care about your terminal goals and things that help or hinder achieving them. So, you don’t simply create a hypothesis involving all the possible variables you can see in a system. You just look at those variables that matter to your terminal or instrumental goals.
If you just want a car that will get you from point A to point B cheaply, you ignore the paint colour, air-conditioning, upholstery, or cup holders. You build a hypothesis involving only the price and mileage and other such sensible properties. You figure out that if you drive at a steady speed of 60mph (or whatever), you save the most fuel. You work out the optimal number of tune-ups the engine needs to run efficiently without costing too much in mechanics’ fees. You note that inflating the tires well saves you money and so on. All the while, you ignore that if you wash your car every week, it will look shinier and prettier, or that an air-freshener will make it smell better; it simply doesn’t matter to you.
Your goals dictate your abstractions. They tell you what to ignore and what to emphasize. Your way of thinking is tightly coupled with your economic conditions. It has to be - you’re an agent with limited resources, you can’t just act without caring about the cost.
Goals prescribe your Research Directions
Actually, goals do more than just shrink the hypothesis space to a manageable level. They guide your attention to factors that you can observe or manipulate cheaply. This is contained within the idea that you need to study a system only so long as the value of information is worth the cost of research. For example, if you realize that human spacecraft are almost certainly not venturing beyond Pluto in the next few hundred years (maybe because of speed limitations or just economic infeasibility), then you shouldn't continue to study the chances of finding useful raw materials in objects beyond Pluto, not when you can focus your time and money on more pressing concerns. I'm not saying there's no use in studying those things ever, just that there are far better uses for your resources right now. The value of information about raw materials beyond Pluto is nearly zero in 2015; it can't change any current decision of yours.
Similarly, say you know that a rare drug can fully cure a non-fatal, not-very-inconvenient disease in 98% of patients. That's great, but if each dose costs millions of dollars, and the chemical is needed in another industry, and you have another drug that cures only 60% of patients but costs only a few hundred dollars per dose, and it takes you millions of dollars to study the rare drug in your research lab, and your best prediction is that you can only bring down the drug's cost by 5%, there's practically no point in studying the rare drug further (right now). It will still not be worth it to the patients and there are far more lucrative areas in which you can invest your research money.
Always ask yourself, “How will this change my decisions?” If knowing more is not going to lead you to do something different and create more value, it’s not worth doing.
No Fully-General Models
I went astray because I assumed there was some one-size-fits-all way of thinking that would work the same way on any subject matter. I thought that once you understood something called "the scientific method", you would be able to gain more predictive power about any domain in the world.
Well, that is still true, but here's the catch: you won't want to! At any point, there's some information that will provide you the most value for your time, and learning anything else is probably a waste of time by comparison. You likely won't want to understand dozens of complicated topics in depth, because you can always focus on one field, gain from your scarcity power, and simply pay for whatever you would otherwise have done yourself in those other fields.
My folly was that I hoped to come up with a fully-general model of a system, one that could answer as near any question I could have. That is impossible given the resource constraints, not just of our puny human minds but the entire universe! You have to create abstractions, and I suspect we do that fruitfully by aligning them with our goals.
Science vs Rationality
The aim is to get utilons, not knowledge. If there were some trick to achieve your goals without having to spend time acquiring the knowledge, you would probably take it. You should. Keep your eye on the ball.
But yes, once you decide on your goals, you can definitely use the scientific method to create hypotheses using the right abstractions and be off on your way. And yes, I do think it will work on any domain. The methods for reducing uncertainty work as well on figuring out engineering problems as they do for solving romantic ones.
And sometimes, it may not involve using the scientific method at all. You may just decide that it’s cheaper to just buy a new laptop instead of doing the research to fix your old one.
Hmmm… then it may not be so wise to have a self-image as a scientist. As a “scientist”, you would focus on getting to the truth about things, on getting predictive power about a system. But that goal should be subordinate to actually achieving your terminal goals. In particular, you must refrain from attacking a deliciously complex intellectual problem simply because your scientist-sense is tingling, when you can easily meet the same end by buying a cheap commodity and moving on to more valuable problems.
On the other hand, we humans do seem to have the constraint that “to do something well you have to like it” (PG). And to fan the flames of love for science, you would probably need to seek intellectual challenges for the thrill of it. I don’t know how to balance these two pulls.
Anyway, science is about getting predictive power. Rationality is about achieving your goals. What you want is to achieve your goals. Everything else is incidental.
Values are Complex
A warning: remember that your terminal goals are complex. There is no one variable alone that you want to maximize (like happiness or virtue or whatever). You need a lot of things to go well in life. This suggests that your abstractions will be quite complex too. (You can’t ignore the fumes coming out of your car just because it doesn’t affect your costs; other people’s quality of life matters to you too.) Unless… you can partition your decisions such that you achieve different values independently of each other, to the extent possible. I don’t know about this. Maybe this means specializing in one area and having powerful, predictive, complicated hypotheses about that narrow domain and trading with other specialists for mutual gain.
Useful Thinking needs Constraints
Maybe resolve the pulls of the aesthetics of science and the value of economics by specializing in solving intellectual problems and putting your services up for hire. This may be what academia is meant to do.
However, to create good abstractions and simplify the problem space, you need to have goals. You can’t just build useful models from the safety of your ivory towers.
Thinking requires the constraints of your goals. Else, you will be overwhelmed by the complexity of even a single leaf.
A corollary is that there’s no point in “studying” something without a goal. There’s nothing to inform your abstractions there. You will probably waste time barking up the wrong tree.
You want to become skilled enough at scientific thinking to be able to solve pretty much any problem placed in front of you, but you will still need a goal to orient your thinking. No fully-general models, remember. You have to carve reality at the joints somewhere, and your goals will help you do so (that’s my hypothesis).
If your aim is to know everything about everything or even something about everything, you’ve failed before you’ve even begun. You have bounded resources. Bounded resources mean that your thinking is bundled up tightly with your economic condition. In other words, information theory becomes bedfellows with economics.
World in a Bottle
You say you will only care about variables that matter to your goals. But, what about other variables that you can measure? Do you ignore them? I can see you ignoring the colour of your car, but that’s because it’s not correlated with the mileage or speed. What if some variable is correlated? Well, if it causes something I care about, then of course I will model it, like the aerodynamics of the car’s shape. But what if it is only a side-effect? It may help me measure the value of hidden variables. For example, take the composition of my car’s exhaust fumes - I don’t care about it directly, but it can help me detect if my engine is burning fuel inefficiently.
The real question is whether you will use these other observations to update your model’s posterior probability. Technically, whenever you observe a variable, you’re supposed to make predictions and then update your posterior probability based on the actual result. But, then your hypothesis talks only about a small aspect of a small part of the world. It makes no useful predictions about anything else. If you were to apply Bayes Theorem strictly, your hypothesis would be judged as making random predictions and thus go down in posterior probability.
So, I guess we restrict our use of Bayes Theorem to a compressed, simplified model of the world. Maybe if we discover that some other variable is correlated with the modelled variables, we extend our model to cover it. In general, we only have so much time, so we neglect the world outside our model.
Hmmm… so this is what gives rise to the Platonic fold that Taleb warns about in The Black Swan. He claims that it is precisely those things that lie outside our model that come back to bite us later. Still, there's no way around it. You have bounded resources and you're making the best guesses you can, so you suck it up when you go wrong.
In short, use your goals to abstract the system into a tiny universe and apply Bayes Theorem to come to your best guesses of what will happen within that universe. You will probably not achieve complete accuracy, but it’s the best you can do.
Goals and Abstractions
How exactly do goals help you create your abstractions?
I think you want to model only variables that are causal ancestors of the variables representing your goal. And maybe some others that are correlated with its causal ancestors, so that you measure them indirectly in case they’re hard to observe.
But what is a “variable” here? Remember, there is no actual variable in reality. You are uncertain about some part of the world and you label it as a variable.
Let’s take one simple example and look at it in detail. Moreover, let’s take a system that wasn’t designed by humans (unlike my laptop in the last example).
Here’s one: dropping a ball causes it to fall down.
What is my goal here? Why am I studying this? Well, without abstracting away anything, I would have to model a vast number of quarks. I can’t do that, obviously. In any case, I don’t have measuring instruments that precise. Say all I have is the naked eye.
You can't even proceed until you're given some abstractions. Science, or more precisely Bayes Theorem, works only after you've decided on the variables. Why is this? Because the process is to have a bunch of hypotheses about the system you're modelling, get evidence about certain variables, and update your posterior probability for each hypothesis. We've already seen that for even a simple system of just 40 binary variables, we have 2^(2^40) possible hypotheses that predict whether any given example is IN or OUT (or possible or not possible).
We can’t handle the number of hypotheses in the actual world. So, we have to narrow down the space of hypotheses and this means neglecting certain variables or restricting their scope. We need abstractions to shrink the space of possible hypotheses to a manageable one. And you get abstractions based on the decisions you want to make. You want to know only those things that might make you change your decisions.
So, this means you generally use the scientific method only after you decide why you need this information and how badly you need it. In my ball-dropping example above, I had no specific use for the information; I didn’t mention whether I wanted micron-level precise information about the ball’s position or its temperature or the change in its colour or any number of other things. These are all perfectly legitimate things to study and if I tried to study all of them, I would be doing it till the end of time.
Note that you don’t need a goal per se. Maybe you just create a model universe arbitrarily and set yourself the task of predicting how the variables in your universe cause each other. That is a perfectly valid task on which you can use causality.
What there has to be is a set of variables with means of observing and manipulating some of them. The level of granularity of the variables helps decide how complex your hypothesis will have to be. You always have the option of extending the model universe if you find some more interesting variables.
One (Not So) Simple Example
Continuing with the example above, let’s say the only two variables we have are the distance of the ball from the ground and whether or not you spread your fingers. (Remember that time and the mass of the ball are not even part of our model right now.)
What does the joint probability distribution (JPD) of those two variables look like? What do we observe?
Distance Above Ground | Fingers Spread?
----------------------+----------------
0m                    | Yes
0m                    | No
1m                    | Yes
1m                    | No
...
The problem is that I can control whether or not my fingers are spread, which makes it an intervention, not a passive observation.
Let’s just try to get a causal model by asking whether Fingers-Spread (s) can be a direct cause of Distance Above Ground (d). We manipulate s and test if d changes. Now, we get somewhere. If d is below 2m (where my hand is), manipulating s has no effect (because the ball isn’t even in my hand). Same if d is above 2m. But, when d is 2m, sometimes manipulating s changes d (this happens only sometimes because the ball might be beside my hand not in it - model universes are tricky). Spreading my fingers can sometimes make d change (because the ball falls) and closing my fingers can sometimes make d remain at 2m (because I hold the ball).
This means that d depends on the value of d a second ago. If we don't know the previous value of d, then our blind guesses about d based on just s become very wild. Note also that we don't know the previous value of s; if we knew that our fingers were closed earlier, then opening them has a higher chance of changing d (the ball might have been in our hand and we might have released it); similarly, if we knew that our fingers were spread earlier, closing them might have caught the ball, thus fixing d.
Right now, knowing only s, all we can say about d is that if we spread our fingers, there’s a minute chance that d might start decreasing (as the ball falls), and if we close our fingers, there’s a minute chance that d might remain at 2m (as the ball gets in our hand). It’s only a small chance because the ball may not be anywhere near our hand as we spread or close it and we have no way of observing the previous value of d or s.
So, we have a very tenuous causal link between s and d. The reverse is not true: changing d doesn’t cause our hand to spread or close (or so we observe).
Given the abject failure of our paltry model universe, we decide to add two other variables: the "previous" values of s and d (call them s' and d'). Note that we still don't have time or ball mass as variables. Now, we can write a slightly better causal function for d in terms of s, s', d', and u, where u is an exogenous noise term standing in for everything we're still uncertain about.
(1) Say, d' is not 2m. Then, d = d' + g(u).
That is, we find that d is usually close to d’, but not always by the same distance.
(2) Say, d' is 2m.
If s' was open and s is closed, or if s' was closed and s is closed, then d = h(2m, u).
If s' was closed and s is open, or if s' was open and s is open, d = d' + g(u).
Basically, if you close your hand, then the ball remains at 2m assuming that it is in your hand, which we’re uncertain about and thus have to use u. If you open your hand, then the ball will move just like before. There’s still a lot of uncertainty even if the ball is at 2m because you still don’t know whether the ball is in your hand or beside it. Also, you don’t know how much the ball moves between a “previous” reading and a current reading.
Again, you find that you’re very uncertain and you’ve used all the data you have, so you decide to extend your model again to include the mass of the ball m. This now removes the need for g(u) in both statements (1) and (2). You observe from the data that d = d’ + c x m, where c is a constant.
Now, you’re only uncertain about whether the ball is in your hand or not i.e., the first part of statement (2). For that, you bring in a binary variable v that tells you whether the ball is vertically in line with the hand or not. Now, you can predict even (2) correctly.
Now, you can talk about d completely in terms of d’, m, s, and v. And changing those variables does change d as per your function. More importantly, those variables don’t change each other.
So, now we can say that d is caused by those four variables. Phew.
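Here is one way to write that final relationship down as an actual function; a hedged Haskell sketch, where the constant c, the 2m hand height, and the Boolean encodings are my guesses at what the example intends rather than anything pinned down above.

-- d' : previous distance above ground, s : fingers spread now?,
-- v  : ball vertically in line with the hand?, m : mass of the ball.
c :: Double
c = -0.3   -- invented value (and sign) of the constant in d = d' + c x m

nextDistance :: Double -> Bool -> Bool -> Double -> Double
nextDistance d' s v m
  | d' == 2 && not s && v = 2            -- closed hand, ball in it: it stays put
  | otherwise             = d' + c * m   -- otherwise it keeps moving as before

-- "Manipulating" a parent just means calling the function with a different
-- argument: compare nextDistance 2 False True 1.0 with nextDistance 2 True True 1.0.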
Lessons
The idea is to use the JPD to get as many correlations as you can and thus eliminate a vast number of causal models. However, the JPD is not enough to get a causal model. You have to add some causal assumptions or causal data. So, get the results of some interventions and assume (or show?) that the causal markov condition holds.
When do you choose to bring in more variables? I think you do this when you’ve used all the data so far and still can’t get a confident hypothesis.
What variables do you choose to bring in? Those that you suspect are correlated with existing variables.
Creating Order out of Chaos
Maybe the human brain’s superpower is that it can figure out where to carve reality in order to get a reasonable model. Maybe it’s good at creating abstractions.
From this perspective, learning doesn’t just rely on inductive bias, it is nearly all inductive bias - when you compare the number of concepts ruled out a priori, to those ruled out by mere evidence.
So, you use your inductive bias and other magical instincts to cut down the space of hypotheses you want to consider, and then use the scientific method to pick a hypothesis out of the narrow set of hypotheses left.
But… you have to rely on magic to do most of the heavy-lifting? Doesn’t that determine the quality of your final hypotheses? Doesn’t that leave most of the quality of your results out of your hand? Yes. But what else can you do? The only alternatives I can think of are to build an AI to do the heavy-lifting instead or to half-ass it all the way.
Maybe this is the best you can do. Maybe the last mile is all you can optimize. Maybe that will be enough to get much further than you are now.
In other words, much of the work is done in getting to the structure of your hypothesis - first, saying that a causal model will capture the system accurately and next, that these are the only variables that matter, and finally, that some variable is directly caused only by these few variables, no others. From the point of view of information theory, much of the work was already done in getting to the point saying that acceleration depends only on force and mass. Figuring out whether a = F/m or a = 2F/3m + c or whatever took only a little more information by comparison.
This is probably what Eliezer means when he talks about “handling situations of confusion and despair, scientific chaos”. Structure learning is the much harder part of getting a good hypothesis. And for good reason: that is where you face the most uncertainty! Structure learning factorizes your uncertainty by several orders of magnitude - a variable could, in principle, depend on every other variable in some complicated way. By asserting that it depends only on a few, you drastically shrink the hypothesis space. Parameter learning - figuring out exactly how some variable is determined by its direct causes - is much easier by comparison. You’re working in a much narrower world, like a programmer debugging one specific problem: he can ignore every other aspect of the system.
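To put rough numbers on how much bigger the structure-learning space is, here's a small sketch that counts labelled DAGs on n variables using Robinson's recurrence - a standard combinatorial result I'm pulling in, not something from these notes.

```python
from functools import lru_cache
from math import comb

# Robinson's recurrence for the number of labelled DAGs on n nodes:
#   a(0) = 1
#   a(n) = sum over k=1..n of (-1)^(k+1) * C(n, k) * 2^(k*(n-k)) * a(n-k)
@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

for n in range(1, 8):
    print(n, num_dags(n))
# 1 1
# 2 3
# 3 25
# 4 543
# 5 29281
# 6 3781503
# 7 1138779265
```

Seven variables already give you over a billion candidate structures; fitting the parameters of one chosen structure is tiny work by comparison.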
Most problems in the scientific community, I suspect, involve parameter learning - figuring out exactly how a few well-known factors relate to each other. And that’s probably why he says most scientists never get the opportunity to practice the skill of creating order out of confusion. Only when you’re opening up a new field of inquiry, like evolution or quantum physics or positive psychology, do you have to think really hard about the structure of the domain. (I could be wrong about this; maybe such opportunities are abundant and we just don’t notice them.) You have to eliminate the vast majority of factors as not pertinent and narrow things down to a few possibilities.
Keeping scientific knowledge a secret, as he suggests half-seriously, just for the purposes of training, could help solve this problem by forcing apprentice scientists to stumble through the mistakes of the ages and burn those lessons into their brain, never to repeat them. They would know from firsthand experience how it feels to resolve a major confusion: how mysterious and daunting the problem seems at first, and how mundane the answer always turns out to be. They would learn the valuable skill of distinguishing a bogus structure from a legitimate one, and thus be able to shoot down nonsense like homeopathy or spirituality or behaviourism without batting an eyelid.
Corollary: Train yourself in creating order out of chaos by withholding the correct answers in different domains. See how you deal with the confusion of just having no idea how things work. (TODO: Figure out exactly how to do this in practice.)
Could PredictionBook be a good way to practice “reasoning in the absence of definite evidence”? You don’t know too much about who will win the US Presidential elections, so maybe make your best guess based on whatever little evidence you have. Improving your calibration is one thing; improving your discrimination is another. Use PB to test your accuracy not just on trivia but also on unfamiliar fields of study.
(It would be great to have an empirical list of scientific problems where you had to do structure learning vs those where you had to do parameter learning. You can refine your thesis using the data.)
It is much easier to train people to test ideas, than to have good ideas to test.
– Eliezer Yudkowsky, Faster Than Science
He says the core of the problem is “reasoning in the absence of definite evidence”. Is that the same as structure learning vs parameter learning as I have hypothesized or something different? I think the idea is that when you don’t have enough evidence, a large part of your model is still shrouded in mystery. And the test of your scientific ability is whether you go instantly wrong and posit some intuitively-pleasing “explanation” (like elan vital or phlogiston) or whether you limit yourself to parsimonious mechanisms that actually make narrow, falsifiable predictions and reduce your uncertainty quickly. So, it does seem like the structure of your hypothesis is the problem - whether you have vague explanations that appeal to humans but just don’t work out when written mathematically or you have a reductionistic, causal structure, with each part supported by evidence.
If goals dictate abstractions, then is our inductive bias decided by (or somehow correlated with) our goals? We can’t represent the world atom-by-atom. So, do we come with inductive bias for radial categories and such that help us build models that will give us valuable information (that actually matters to our decisions)?
But, isn’t information-processing independent of goals? Maybe it’s not so independent after all. In theory, yes, you need a certain amount of bits to hit the correct hypothesis and then you can use that hypothesis to achieve any goal you wish. However, the way we actually do it is by only looking at those parts of the world that can substantially affect our goal variables. We don’t try to build a fully-general model of the world. That’s not worth it. Knowing the position of one atom in my hand is not going to matter to any of my decisions.
So, it’s not just about hypotheses and information. You have to consider your goals when designing your hypotheses. That probably shrinks the general hypothesis space the most. There is information, and then there’s valuable information. You only want the latter.
Causal Thinking for Humans, not Computers
Don’t feel too squeamish about using your brain to make leaps of intuition. You’re not looking to write a computer program to do the causal thinking for you; that requires you to be precise in how you model and reason about data. You’re going to use your brain to fill in some of the gaps. Deal with it. Don’t hold out for a perfectly well-specified solution. Just understand the process of causal thinking well enough so that you can repeat it in the future.
What will you Ignore?
I think locality of causality is key here. You start with things that are close to the goal variables and move backward from there (and forward, if you want to measure their effects too).
I suspect that each field has some special knowledge about what variables they can safely ignore. If you’re an ice-cream vendor, an economist will blithely ignore the details of your romantic life or your car or your taste in music when talking about the supply and demand curve in the ice-cream domain. Why is that? Because economists care mainly about the allocation of resources (ice-cream and cash, in this case). To predict and control the allocation, they may study a lot of other things, like the supply and demand curve or its elasticity, but their main goal is to understand the allocation of resources. And they assume that certain things don’t cause the allocation of resources. So, they ignore those.
Goals
Ok. We’ve seen that given a set of abstractions - a model universe with some variables - we know how to determine what causes what.
That was the easier part. Now for the case when you just have a complex system and a goal and you have to create a set of abstractions by yourself. You have an embarrassment of variables, but if you chose to use all of them, your model would collapse under the weight. You wouldn’t be able to do any useful inference. You must use your goal to carve the system at its joints such that you can figure out the causes and effects of the stuff you care about.
A system here is nothing but a set of variables that you can observe and possibly manipulate. And a goal is a special subset of those variables. You want to find the variables in the system set that cause the variables in the goal set. You don’t really care about the rest.
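A minimal sketch of this framing in code - the class and field names are mine, and the variables are just the ice-cream economist example from earlier:

```python
from dataclasses import dataclass, field

@dataclass
class ModelUniverse:
    """A system: variables you can observe, some of which you can also manipulate."""
    observable: set[str]
    manipulable: set[str]
    goal: set[str] = field(default_factory=set)  # the special subset you care about

    def candidate_causes(self) -> set[str]:
        """Variables whose causal influence on the goal set we want to find."""
        return self.observable - self.goal

icecream = ModelUniverse(
    observable={"price", "supply", "demand", "vendor_taste_in_music", "allocation"},
    manipulable={"price"},
    goal={"allocation"},
)
print(icecream.candidate_causes())
```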
What sort of (preferably dirt-simple) example can I use to develop and test my understanding here? I need a reasonably simple system and goal. Note that if you want a causal model, you’d better plan to do some intervening with it. Else, you could have just used a Bayesian Network without any causal interpretation, and made your life a lot simpler.
Let’s take Louis Pasteur’s experiment to disprove the spontaneous generation theory.
What variables could he observe? Physical measurements, and also whatever chemical and biological concepts were known back then (like distinguishing between various common organisms, etc.).
What was his goal? Or what variables did he care about? I think he wanted to understand “life”. He could tell whether some lump of matter was alive or not (call this variable “aliveness”). What that meant was he could observe a set of correlated variables for any given object: like ability to grow, tendency to move, ability to reproduce a form like itself, etc. If you could see something with the ability to grow and reproduce itself, it could probably move some parts and respond to stimuli and whatnot.
So, you can observe aliveness. But can you manipulate it? I guess one way is to kill the organism somehow. All the other attributes change too.
(TODO: I’m not sure of the exact details and the internet is down right now.)
Pasteur wanted to know what “caused” life. In his position, how could we have modelled the world? Initially, our model will contain just the variable “life”. Of course, we don’t have any other variables, so there are no causal links. What variables do we bring in? Should we bring in mass or height or temperature? But those don’t seem too correlated: you have light organisms and you have heavy organisms; you have tall ones and minute ones, and so on.
I think we should look at a variable that is correlated with some variable already inside our model. Ability to grow is correlated with life, but this seems like cheating. Ability to grow is part of the “life” category. I can’t think of anything right now.
Other people were claiming that life could be caused simply by contact with air that contained “life force”. But first, is that even a variable? Can you observe different values for it? When will air not contain “life force”? Still, let’s say you can have a case where there is no air at all (and thus, no “life force”).
As prime evidence for their “hypothesis”, the other scientists touted this experiment: when broth was exposed to air for several hours, you could observe maggots on the surface; but when the broth was covered, you would see no maggots.
So, the food item is a variable; it could have just been water or some other liquid or solid. Then, exposing to air is a binary variable. Finally, maggots vs no maggots is another variable.
First off, if all that is needed to create life is air with life force, why did they need to use a food item (broth)? Why didn’t water suffice? Or just the floor or any other random object. This means their hypothesis is incomplete.
Moving on, the controlled experiment above shows that life (in the form of maggots) is caused, eventually if not directly, by a food item and exposure to air. We assume that when you have no food item, there is no life created at all.
What alternative hypotheses can you have? Now, these are the only three variables we have inside our system and we know food item and exposure to air don’t cause each other. So, we have exhausted the links, I think.
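A brute-force check of that claim, in my own encoding: enumerate every directed graph over the three variables, throw out the cyclic ones and those violating the two constraints (food and air don't cause each other; both cause maggots, where "cause" means a direct edge in this tiny model), and see what's left.

```python
from itertools import product

VARS = ["food", "air", "maggots"]
POSSIBLE_EDGES = [(x, y) for x in VARS for y in VARS if x != y]

def is_acyclic(edges):
    edge_set = set(edges)
    # On 3 nodes, a cycle is either a 2-cycle or one of the two directed 3-cycles.
    for x, y in edges:
        if (y, x) in edge_set:
            return False
    a, b, c = VARS
    if {(a, b), (b, c), (c, a)} <= edge_set or {(a, c), (c, b), (b, a)} <= edge_set:
        return False
    return True

count = 0
for mask in product([False, True], repeat=len(POSSIBLE_EDGES)):
    edges = [e for e, keep in zip(POSSIBLE_EDGES, mask) if keep]
    if not is_acyclic(edges):
        continue
    if ("food", "air") in edges or ("air", "food") in edges:
        continue  # constraint: food and air don't cause each other
    if ("food", "maggots") not in edges or ("air", "maggots") not in edges:
        continue  # constraint: both are causes of maggots
    count += 1
    print(edges)
print("structures left:", count)  # 1 - the links really are exhausted
```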
Note that the spontaneous generation theory doesn’t give a detailed mechanism by which food item and air exposure combine to create maggots, of all things. It’s a very vague, high-level causal function - if food item and air exposure, then maggots; otherwise, no maggots. It doesn’t say how exactly this works out at a microscopic level or even just at the granularity of individual morsels of food and a thin volume of air.
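Written out as code, the entire "mechanism" the theory offers is just this (my own rendering - the point is how little it says, not that anyone wrote it this way):

```python
def maggots_appear(food_item: bool, air_exposure: bool) -> bool:
    # The whole spontaneous-generation "causal function": no mention of what,
    # at any finer granularity, the food or the air actually does.
    return food_item and air_exposure
```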
We need to either increase the precision of our variables or get new variables. We know that food item and air exposure are causal ancestors. So, the only thing that could disprove the hypothesis that they are direct causes is to find some intermediate variables that screen off their effects.
So, unpack the variables “food item” and “air exposure”. What specifically about them is causing maggots?
Or, in another direction, what is known to cause maggots? We have probably seen from other experiments that only maggots cause maggots - they reproduce, or they move from one place to another. So, either full-grown maggots were present in the broth or air (H1), baby maggots were present in the broth or air (H2), parent maggots were present in the broth or air (H3), some new cause of maggots was somehow present in the combination of air and broth (H4), or some combination of all of these holds (H5).
First, ask these hypotheses to make predictions for the case where broth was covered. Why were there no maggots then? H1 has to say that the maggots were in the air. H2 has to say that the baby maggots couldn’t physically reach the broth. Similarly, H3 says that the parent maggots couldn’t put their babies in the broth. H4 says it was because the broth plus air magic couldn’t happen. In any case, H1-H3 have to admit that the maggots or their parents had been in the air, not the broth.
How can we differentiate these hypotheses? A fine sieve should keep out full-grown maggots but may not stop baby maggots or eggs, and shouldn’t stop the magical spontaneous generation. So this distinguishes H1 from H4, and maybe also from H2 and H3. (Won’t you be able to see full-grown or parent maggots in the air?)
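My own tabulation of what each hypothesis seems to predict for the fine-sieve experiment, just to make the differing (and non-differing) predictions explicit:

```python
sieve_predictions = {
    "H1 full-grown maggots from outside": "no maggots - the sieve keeps adults out",
    "H2 baby maggots or eggs from outside": "maggots may still appear - babies or eggs might pass through",
    "H3 parent maggots deposit babies": "unclear - depends on whether parents can still deposit onto or through the sieve",
    "H4 spontaneous generation from broth + air": "maggots appear - a sieve doesn't stop any 'life force'",
}
for hypothesis, prediction in sieve_predictions.items():
    print(f"{hypothesis}: {prediction}")
```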
Since H2 and H3 are talking about baby maggots and the final maggots were grown-up maggots, it will help to know how long it takes for baby maggots to grow. If it was just maggots falling in as H1 says, why couldn’t you see them until after a few hours?
If it’s just about maggots falling or parents dropping baby maggots, why don’t maggots form on water or some other inorganic substance? If it’s about maggots in the air seeking out food items, then how can baby maggots do this? Are they even capable of that?
To distinguish between H3 and H4, we need a case where we don’t have parent maggots in the air. Then, H4 will still predict that air and broth will produce maggots and H3 will predict that they won’t. Find out how to remove maggots from air. Maybe heat the air - but you don’t know in advance that there are maggots in the air. How do you observe maggots in the air? Look for other effects of maggots.
Can maggots fly? If they are just swept around by the wind, then pass air through a long horizontal pipe. They should fall before they reach the broth at the end, so they will never grow in the broth - or at least the number of maggots will decrease.
Also, if it’s baby maggots falling into the broth, why don’t they form elsewhere? If you enclose a certain volume of air, shouldn’t they drop down sometime and grow? Do they need the broth to grow? What if you “poison” the broth somehow?
Do the maggots in the air die eventually? They’re alive and we know from correlation that alive things usually die after some time and that dead organisms can’t reproduce. So, what if you enclosed air in a box, waited to let the maggots die, and then used that maggot-free air on broth? Will maggots spontaneously generate now?!!! Ha!
Pasteur’s answer was to use gravity to make the maggots fall down before they reached the broth. He did this using a U-shaped neck for his beaker so that air got to the broth, but not the maggots in the air.
(Also, why does the spontaneous generation take so long and no longer? What are their predictions for the time variable?)
TODO: Figure out what I needed to solve the problem and whether I could have done it with just a limited, formal knowledge of living organisms (like alive -> can reproduce, etc.). For one, once I realized I needed to remove maggots to distinguish between H3 and H4 (direct vs indirect cause), I should have asked “what causes maggots to be removed”. No, it’s more subtle than that. I should have asked “what prevents maggots from producing alive maggots in the broth” since this also allows solutions of making them infertile. Also, maggots have to be alive in the final state so you can kill them beforehand to falsify that prediction. Wow, there are so many possibilities once you start unpacking the categories.
Thoughts
Humans seem to be born with (or otherwise reliably have) the ability to perceive certain categories: like objects (things that move as a whole, I guess), aliveness (which lets you infer other characteristics like reproductive ability), other humans, etc.
Use locality of causality to help find intermediate variables.
Time is an important variable.
When you’re stuck, increase the precision of your variables (unpack your categories, maybe) or find new variables. Ask other questions like how many maggots you expect there to be.
When trying to find alternative hypotheses to X <- A and B, think about the other known causes of X.
A controlled experiment is actually two pieces of evidence: intervene one way on the treatment variable and see the result; intervene another way on the treatment variable (keeping all other variables the same) and see the result. Bayes’ theorem doesn’t treat controlled experiments specially; that’s just us humans.
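A sketch of that point with made-up numbers: the two arms of a controlled experiment are just two likelihood terms, and Bayes' theorem multiplies them in like any other evidence.

```python
# Two rival models and the probabilities they assign to the observed outcomes:
#   arm 1: do(treatment = on)  -> observed "effect present"
#   arm 2: do(treatment = off) -> observed "effect absent"
likelihoods = {
    "M1: treatment causes effect":  {"arm1_effect": 0.9, "arm2_no_effect": 0.9},
    "M2: treatment doesn't matter": {"arm1_effect": 0.3, "arm2_no_effect": 0.7},
}
prior = {"M1: treatment causes effect": 0.5, "M2: treatment doesn't matter": 0.5}

# Multiply in both pieces of evidence, then renormalise - nothing special
# about the experiment being "controlled".
unnormalised = {m: prior[m] * likelihoods[m]["arm1_effect"] * likelihoods[m]["arm2_no_effect"]
                for m in prior}
total = sum(unnormalised.values())
posterior = {m: round(p / total, 3) for m, p in unnormalised.items()}
print(posterior)  # M1 ends up around 0.794, M2 around 0.206
```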
To get a differing prediction between two causal models, set the variables’ values such that the two models predict different values for some common variable.
To do
Test whether all the causal models you’ve seen so far match this idea - do they match some specific purpose? What about the different subjects in the world, like biology or architecture? Are their abstractions in line with their goals?
I want a canonical way to formulate variables given a goal and system. Wait, what are some uncanonical ways to formulate variables? Why are they bad? Is it just a matter of standardization or are you worried about using the right categories and specifying them explicitly?
Why not just use the scientific method intuitively, especially now that we know we’re approximating things anyway? Where are my causal intuitions wrong or lacking? Perhaps we need to do some subtler detective work: ask if the two “causes” are different. What if Pearl’s notion of a “cause” is different from my own intuitive idea of a “cause”? What do I expect when I say “A causes B”?
Release the scientific method: how to reduce uncertainty, from start to finish.
Next, study how people convert goals to abstractions. Are there any other constraints (like maybe the characteristics of the system) that also determine the abstractions or is it just your goals?
Simultaneously, identify the abstractions used in different concrete examples (like willpower or depression or whatever) and see what variables look like. Do they always correspond to observation tools or can you have a compound variable? What does that look like? When and how do you create such a variable? How do you manipulate such a variable? Is that maybe chalked up to human ingenuity?
Hypothesis (transcribe this from your notes): Instrumental goals don’t exist in nature; you’re forced to create them because of your inability to make the system just achieve your terminal goals directly (for every possible scenario).
Study identifiability: it seems to be about whether you can answer an intervention query using only “conditional probabilities involving observable variables”. (PGM book)
Learn how to dissect, dispute, and diagnose fake explanations: like spontaneous generation or even “my gas stove is causing all this mould”. Pinpoint exactly what is wrong with them - say it mathematically. You must note immediately that they’re starting off with too complex a hypothesis. They probably don’t have evidence to support every part of their model. Ask for all the evidence they have, get the alternative models yourself, and then simply find the differing predictions. Ask for their evidence, not their hypothesis.
Calculate a variable’s value using its observables. You should have a function that goes from the set of observables for that category to whether a particular object is a member or not.
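A minimal sketch of such a function, using the "aliveness" observables from the Pasteur discussion above; the particular rule (two out of three features) is my own placeholder, not a real diagnostic key.

```python
def is_alive(can_grow: bool, tends_to_move: bool, can_reproduce: bool) -> bool:
    """Map a set of observables to membership in the 'alive' category."""
    # Placeholder rule: call it alive if most of the correlated features are present.
    return sum([can_grow, tends_to_move, can_reproduce]) >= 2

print(is_alive(can_grow=True, tends_to_move=True, can_reproduce=False))    # True
print(is_alive(can_grow=False, tends_to_move=False, can_reproduce=False))  # False
```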
TODO (phone notes): Look at what queries you expect to answer and use them to build your abstractions.
TODO: Continue with Real-world problems - how far can you get with a bare-bones causal model?
Skill: Does this “explanation” make you feel less confused? This is what Harry used in Chapter 104. Does everything fall in place or are there still some cloudy areas?
To do later
My revised aim: Now that I realize that studying something without goals is just futile, I need to figure out what kinds of goals I will specialize in and what sorts of abstractions they need to use. This will help me study those topics better.
Lesson
Dammit, work only using concrete examples. It’s not just recommended, it’s mandatory! See how I was flailing around above before I took one narrow, specific, concrete, not-even-very-imaginative example and started digging in earnest. Just like Robert Pirsig’s advice of zooming in on one specific brick and writing down what you see.
And, my word, it really is a different ball game once you write for more than an hour or two at a time. Ideas start popping out of nowhere past that magical mark. This supports PG’s hypothesis that it takes a long time simply to load the details of your program or essay into your head, and only then can you explore it with ease.
Look back and take in just how much your campaign was moved forward because you started working on actual empirical evidence instead of just theorizing in your armchair. Look at what the Willpower Instinct experiment taught you. Plus, all the other stuff you tried to summarize. Further, look at what you learned from taking on a simple biology textbook and trying to create a causal model of it. This is what it feels like to actually test your theory. You’re forced to see its limitations in clear daylight. You realize that it can’t even handle the simplest of simple examples. The crucial data point is that of the near-trivial 2 + 2 = 4 example. That kickstarted the landslide of ideas about value of information, the complexity of the world, and how you need goals and cannot do science in a vacuum. You realized that you only want to answer a small set of queries, not every possible question about your laptop.
Notes on Eliezer’s Magical Categories
A labelled dataset may not contain enough information to narrow down what you want. Say you want your algorithm to recognize smiling faces. The dataset you give it may be consistent with faces that we consider to be smiling but also with molecular smiley faces. You need an exponential amount of data to locate the correct concept in superexponential conceptspace. So, much of learning is inductive bias, and unless we can impart an equivalent amount of information to the algorithm, we can’t expect it to match faces like we do.
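To make the exponential-vs-superexponential point concrete: with n binary features there are 2^n possible instances and 2^(2^n) possible concepts (one per subset of the instances), and each label gives you at most one bit, so you need on the order of 2^n examples to single out one concept. A quick sketch:

```python
for n in [3, 5, 10, 20]:
    instances = 2 ** n
    concept_bits = instances  # log2 of the 2**(2**n) possible concepts
    print(f"n={n:2d}: {instances} instances, 2^{concept_bits} concepts, "
          f"~{concept_bits} labelled examples needed to pin one down")
```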
This is what inductive bias does: it chooses among the concepts left after updating on the data.
But the real problem of Friendly AI is one of communication - transmitting category boundaries, like “good”, that can’t be fully delineated in any training data you can give the AI during its childhood.
We must communicate information about our complex wishes to the AI.
Inkling
Back in that digital theory and communication course, when I first heard of information = surprise, I thought “wow! Then, we should focus on getting only pure information all the time. Just keep looking at things that are surprising; don’t bother with stuff that you already know. This is obvious.” But then, nobody else seemed to be as stoked about it, and neither the professor nor the textbook gave any hint (that I saw) that this was a huge deal. It was just some important but mundane thing used in the area of communication, not something that had anything to do with thinking or intelligence. I had some inkling that this could be revolutionary, that this could optimize our thinking a lot. But I just let it go. Big mistake.