Wrong but useful models
Similar idea: the tradeoff between precision and usefulness
The crux: ways of thinking about the world don't have to be exactly right to be highly useful; things can be wrong and still very interesting.
Sometimes it's okay to be incorrect about things, since "having exactly correct beliefs about how the world works" is not the only goal that a person pursues.
For example, using heuristic models of the world for inconsequential decisions allows you to focus your attention on things that are really important and expend cognitive effort on actions that bring you more value, even if that means that your inconsequential decisions will be wrong more often.
Some simple examples of this might be "if the clouds look all dark and lumpy and big, it's probably going to rain" or "if someone is yelling aggressively at you, they are mad at you." These will not always be true. Sometimes the clouds won't rain on you. Sometimes the person will just be trying to convey important information in a noisy environment. Generally, though, these oversimplified models of the world are helpful.
Some simple models might be embedded in highly load-bearing ideological systems that make them totally wrong in their explanations but correct in their predictions. For a quick example that I just made up, imagine a seafaring people who hold lemons to be holy; they might use this citrus in a consistent ritual when at sea, helping prevent scurvy. Their explanation of why they don't get scurvy and others do might be that "the gods are curing us through our consumption of their divine essence and driving out the demon spirits which cause disease," which might be kinda true if you stretch that reaaaaalllly far as a metaphor, but really the most accurate explanation is way more mundane than that. Either way, their prediction (they will get way less scurvy if they consume the holy fruit) is correct, but their explanation (the holy fruit is divine and as such protects the body from demons) is wrong. This is often how I think about tradition: often-good ideas embedded in systems that perpetuate them, at the cost of losing the truth of why they work, and potentially causing harm down the line because of that ignorance.
If you're trying to be more correct about the world, simple heuristics might also be helpful as stepping-stones toward more accurate models. Hmm, why do those clouds that are all dark and lumpy seem to rain more? That question can be a starting point for a budding meteorologist's investigations. Then one can build on the idea: well, darkness and lumpiness seem to predict rain — darkness implies less light gets through. If clouds are made of water, maybe that means there's more water in the cloud, scattering more light... so maybe the more water there is in a cloud, the more likely it is to rain. Your heuristic picks out a trend in experience — in empirical data. So investigating or reasoning about that trend can help you get closer to how things actually work.
Sometimes more complex models of the world can provide useful frames on experience, giving you room to maneuver conceptually and starting hypotheses to mess with, even if the models are fundamentally incorrect about how things work deep down. For example, Freud had complex psychological models. Karl Popper might tell you that they don't make meaningful predictions about human behavior — they're not really science, they have no predictive validity, and in a sense no claim to truth — and yet thinking about my motivation in terms of, say, the mediation between id, ego, and superego can point me to useful insights about myself. I might frame something I'm conflicted about as a dialogue between superego and id — to speak very roughly, "my sense of morality" and "my unmediated, fundamental desires" — which can help me recognize things about myself. Or, by thinking about the distinction between "want" (id) and "should" (superego), I might recognize that there are some things I want to want to do but don't actually care about — second-order wants. For example, I might want to want to get better at some specific kind of math, but not actually be interested in it. This is a meaningful distinction that has helped me adjust my behavior.
For an example of a more complex model that one might continue developing:
There are four things people confuse all the time, and use the same sort of language to express, despite them meaning very different things:
1) I want to do X.
2) I want X to be done, but don’t want to do it.
3) I want to be the sort of person who does or can do X.
4) I want to be seen as the sort of person who does or can do X.
- from Daystar Eld (though note that this model was probably not something he came up with by studying Freud)
Sometimes you can even use a model as a ladder to some higher insight, and then discard the ladder entirely once you've climbed the wall.
The following example is going to be the least clear of anything on this page, sorry; if you haven't read the work I'm referencing, feel free to ignore the technical details and just skim parts you don't understand. Also, if you think we are treading similar intellectual paths and you haven't read the Tractatus, I would even say skip this entire example and just go read the book. It's probably one of the few books of philosophy which one can meaningfully "spoil" by giving details of the "plot."
Ludwig Wittgenstein's Tractatus Logico-Philosophicus describes his "picture theory of language," a model of meaning in language wherein only sentences whose content describes (or rather reflects, "pictures") logical relations of objects in the world are meaningful, and everything else is in a sense meaningless.
The precise content of the Tractatus' theory of language doesn't matter that much, and in fact I don't remember much of it (lol); what did matter to me was how it served as a ladder for me to recognize, or at least make explicit in my mind from previous passing thoughts, various important ideas:
- Complex philosophical systems may be obscuring the fact that they're not actually talking about anything at all — anything that exists in a meaningful sense.
- Ethics and epistemology are ~entirely different domains.
- Logical and mystical ways of thinking are fully distinct — they aren't incompatible, they are simply entirely separate modes of thought. Puzzling over the logic of the ineffable beauty that you experience is, uh, incoherent.
- Ethics and aesthetics are the same thing.
Hopefully these ideas mean something to you, and you can see why they might be useful as higher realizations outside the context of philosophy of language. In any case, I think the picture theory of language is very wrong (or at least highly incomplete); Wittgenstein himself later changed course entirely, and by the time he wrote Philosophical Investigations he held views about language essentially antithetical to those he seemed to express in the Tractatus.
The Tractatus helped me recognize other things that just aren't coming to mind right now; it might well be the book (if you can call it a book) that has had the most significant impact on my thought. Wittgenstein in fact seems to have intended this ladder-like, use-and-discard method, though it's not clear to me exactly what realization he intended the reader to climb to.
(I won't quote it to avoid spoilers, but the ladder is explicitly a metaphor he used.)
Sometimes wrong-but-useful models are all we have. That's kind of what science¹ is doing, isn't it? It's just progress from wrong-but-useful models to less-wrong-and-more-useful models of the world.
I think about consciousness this way: there are lots of ways to think about it, and we have lots of models and theories and all of them are probably wrong but some of them might be useful for some object-level purposes like noticing things about your own perception and distinguishing qualia.
I think about living this way: there's no systematic theory of living life well; all we can do is create wrong-but-useful models of it, and learn from our experience how to live better — update towards less-wrong-and-more-useful models of the world.
Footnotes
1. (Here I mean not the modern material institutional science that has a replication crisis and struggles with poor economic incentives, but the ideal, truth-seeking science.)