Jan 03, 2025
First attempts at a philosophy of coordination
My early drafts exploring the question, "how do we better coordinate with others?" The first part is an unedited braindump-style 'essay' attempting to just get everything down. The second is an initial outline of a more in-depth work that I haven't had time to write.
3442 words
Against Moral Philosophy
TL;DR: Human progress seems to come from coordinated action towards commonly beneficial ends. This progress makes more people happier, and that is the thing that we actually want out of philosophy or something. Instead of endlessly debating the question of “what is good?” we can ask a different and more concrete question: “how do we coordinate with one another to achieve our basic goals?” By asking this question instead of “what is true?”, we make the problem concrete and actionable; we make it one we can actually solve and get better at.
I. A draft of an argument
When you do philosophy, you frequently end up at a dead end. Arguments become increasingly minute, detail-oriented – definitions, precise meanings, the limits imposed by language – or they become increasingly fundamental. In the process of questioning your beliefs, you find out that, if you think about beliefs in any formal sense, there is no basis – there is no ‘first cause’ in beliefland. You operate on axioms, many of them: “experience,” whatever that is, is a mostly-reliable way to access some external reality. Other people have experiences too, in a similar way to you. You should take actions towards being “happy” in some sense, or being “righteous”, or being “loving”, or even being “powerful”, for some fuzzy definition of those things. (You will notice that I take a lot of axioms for granted here, because I need to in order to function.)
There is no way to choose between those definitions other than by consulting our own preferences as individuals. Attempts to construct ‘objective truth’ are pointing at something that isn’t… there? I think people naturally come with the intuition that ‘truth’ is a meaningful concept, especially in the moral sphere, since we have an idea of ‘correspondence to reality’ or ‘predictiveness’ for descriptions of the world – we unthinkingly apply that abstraction to moral statements, as if “should” statements are describing some “thing” that is “out there in the world”, in the way that “the earth is round” describes the ‘shape’ of some ‘thing’ we can make observations about.
But I make these critiques not because I am a nihilist; I am pointing out these fundamental difficulties because I think there is a better way to think about the same things – better, as in more conducive to the ends we actually care about. When we ask “what is good?” I think we are asking many things at once, and at the same time nothing at all. We want to know how to live in communities with other people. We want to know what choices will make the world better, or at least, our world. We want to know how to behave in order to satisfy ourselves and others. A moral theory is supposed to provide a guide for decision-making as a conscious agent who feels free. It is supposed to help us make our lives and the lives of others better in some sense. It seems that often, moral theories in fact do the opposite – when they become ideological they end up oppressive, regressive, or exclusionary.
Instead of asking “what is right for me to do?” – and taking for granted that that question has any meaning at all – I claim that we should ask instead, “how can I coordinate better with other people?”
This is a difficult argument to make – especially if I want to make it rigorous or systematic in any sense – because of its complexity, but this is a first attempt.
If you look at history, great advances have happened when humans act together in systems. Standards of living have skyrocketed worldwide over recent centuries because we have been building systems that allow us to coordinate with one another better – it turns out, using money is a really good way to run a system. (This is not an essay about capitalism.) Civil rights advances occur through large-scale efforts coordinating thousands or even millions of people. Scientific progress occurs through long-term, large-scale endeavors by society’s smartest people. It is by working in communities with others that humans have begun to build a world that is far better than the one that our predecessors lived in. It is by working in communities that we change the world for the better.
We see in history that coordinated human effort makes life much better, in the long term, than selfish, antisocial individualism. The rule of law, for example, makes life way better – yes, it is in theory an infringement of our freedom to have any law at all, but life is much better when we coordinate to follow laws instead of ignoring them. Thus, instead of trying to reach for some abstract standard of justification for our preferences, we should take them for granted, and ask the question, “how do I coordinate with other people more effectively to satisfy both of our wants?”
Thus an abstract question, one that is slippery and, as I see it, genuinely meaningless – one whose definitive answer has evaded many great minds through history – becomes a concrete question. More importantly, it thus becomes a question we can make progress on. By understanding other people better, by having a broader perspective, by doing better science, by understanding psychology and human desire and system-building and economics and sociology and history and so many other things, we can get better at coordinating, in ways that satisfy the people involved.
I think this is possible because humans seem to generally want similar things – security, love, pride, fulfillment, things like that. Cultural differences exist and are hugely significant – and ideology can hijack a value system – but at base, it seems that most people do genuinely want similar things, and it is only the instruments of culture that cause those things to be manifested differently.
What I am aiming at is this: people thinking about their decisions with others in mind. Moving beyond debates about abstract principle and paying attention to coordination on mutual goals. Understanding themselves and others better, understanding how humans operate in communities better, and then applying that knowledge to build healthy, functional systems. The nature of a democratic society, one with free speech and free exchange, is that we can build communities, create subcultures, start neighborhoods, build houses. The better we understand the actual machinery behind human coordination, the more resistant we will be to social threats – ideology, manipulation, etc. The better we understand ourselves, the better we can make our lives.
II. Further considerations
In setting this out, I am ignoring a bunch of hard questions. I have some ideas about how to answer some of them, though there are others for which no reasonable approach is visible to me at the moment. Luckily, these thoughts have been thought by many people other than myself (lol, by Rationalists™); I don’t actually have to derive everything on my own principles. But I am putting this out here as a landmark for myself to begin building on. Some of the important questions that remain to be answered:
- General fleshing out of the ‘from first principles’ approach to these ideas; need to better explain my thoughts on the impossibility of non-axiomatic belief, and engage with other ways of thinking about these questions
- What really are preferences? I have some intuitive sense of “things I really deeply care about”; psychology seems to have reasonably good ideas about this (Maslow’s hierarchy, ideas about self-efficacy, etc.) and I think many of these are obvious on minimal introspection but I want to have better-articulated thoughts if this whole idea is based on ‘helping one another mutually satisfy preferences’
- Who gets to be an agent/who is worth coordinating with? My naive take is something like, ‘if it acts as if it has a world model and preferences, and acts to satisfy those preferences, it’s an agent or it’s the result of an agent’ (see the toy sketch after this list) – and thus any living thing, as well as humans, as well as probably sufficiently advanced language models(!), should have its preferences taken into account. But then I was like, wait, actually if we’re basing this on productive cooperation, I’m not sure how much we can actually cooperate with ants or cows or coyotes; maybe where there is little way to coordinate, the best strategy is ‘live and let live’ or something in order to minimize destruction? Should intelligence (the size/complexity of the world model an agent can entertain) come into play in how we weigh an agent’s preferences? This requires more thought. Since consciousness is non-observable, I think an agent-based model of moral personhood is the way to go, but it also might have weird consequences. I might also be able to avoid the question by asking a better one somehow.
- How do we actually go about learning from history/sociology/psychology and applying it to decision-making?
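An aside, to make the agent criterion above a bit more concrete: here is a minimal sketch of what 'acts as if it has a world model and preferences' could mean, in toy Python. Everything here (the names, the methods, the test) is hypothetical and illustrative, not a worked-out proposal.

```python
# Toy formalization of the "acts like an agent" criterion.
# All names are hypothetical; this is a sketch, not a proposal.
from typing import Any, Protocol, Sequence


class Agent(Protocol):
    def predict(self, state: Any, action: Any) -> Any:
        """World model: what the agent expects to happen if it takes an action."""
        ...

    def preference(self, outcome: Any) -> float:
        """Preferences: how much the agent values a predicted outcome."""
        ...

    def act(self, state: Any, options: Sequence[Any]) -> Any:
        """The agent's actual choice among the available options."""
        ...


def acts_like_an_agent(thing: Agent, state: Any, options: Sequence[Any]) -> bool:
    """Naive behavioral test: does the thing's choice track its own predictions
    and preferences? Note that consciousness never enters into it."""
    chosen = thing.act(state, options)
    best_value = max(thing.preference(thing.predict(state, a)) for a in options)
    return thing.preference(thing.predict(state, chosen)) >= best_value
```

The point of writing it this way is that the test is purely behavioral, which is exactly why it sweeps in animals and sufficiently capable language models along with humans – and exactly why the question of who we weigh, and how much, stays hard.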
Some other nice features of this way of thinking:
- It’s much simpler and much more useful than doing the mental gymnastics of explaining precisely how ‘moral truth’ exists; avoids metaphysical questions that tend to be dead-ends. Essentially, all we need to do is accept the basic idea that there are other people in the world or something (this should be emphasized more as a justification in future drafts)
- It fits in nicely with my intuitions that ‘democracy is good’ and ‘human freedom is good’, and is quite compatible with my sense that America (for all its great struggles) has something deeply right at its philosophical core
- It nicely justifies other things that I think are important, such as connecting with our ancestors, preserving and creating great art (art creates united societies or something; having a canon and a ‘culture’ helps societies coordinate better), having good aesthetics (okay this is a little more nebulous, but good design, good aesthetics are important for shaping human behavior in systems) – not to mention there is a concrete, immediate moral imperative to understand the world better
Generally this way of thinking about the world feels very good and natural (to me). That could be because it is ‘more correct’ in some vague sense, and it’s uniting things that my intuition understands but my reason doesn’t, or it could be because it’s playing to my prejudices/sense of salience. For what it’s worth, I think there are also many ways these ideas can be packaged. This way of packaging them is focused on philosophical/utility-based attractiveness or elegance, but you could also package it in a lens of empathy and communitarianism (?) for people to whom that framing is more attractive.
An important or perhaps unimportant caveat is that it seems to be better to act as if people have free will, but I’m not sure the idea actually makes any sense either. I’m really confused about freedom and agency. More on that some other time, I guess.
III. Outline of a more systematic essay
- The limitations of present moral philosophy
- Conceived axiomatically, there is an essential unjustifiability at the bottom of the system
- There are many axioms we must take in order to function, so this is not inherently wrong.
- The problem is that no one can in good faith argue for their moral basis over another’s, unless we have agreed on some mutual goal. Belief in true ethics is in its essence of the same kind as belief in God, just with a smaller range of epistemic consequences.
- Moral justification attempts to describe value or good itself, though – no moral system can justify itself.
- This makes choices between moral theories matters of taste
- Worse still, the focus on “the good” places the center of human moral thought in an abstract principle outside of ourselves; it is a theory without regard to reality, and by the nature of abstractions, on the margins it will lead to out-of-distribution results.
- Utilitarianism specifies for weird resource-accrual behaviors that we probably don’t want; Deontology is rigid and inflexible; Virtue ethics is essentially self-centered and shard-y?
- Probably want to justify this rigorously
- And we can be moral agents, under some systems, with extremely minimal understanding of the other people with whom we interact.
- By basing action on an abstraction we are making our moral decision-making exegetical, in a sense – interpretive. We are outsourcing the responsibility to make decisions to some external framework.
- What do we really want from a moral theory?
- I claim that it is this: we are agents – we bear models of the world, preferences/values we wish to satisfy, and the capacity to take actions to satisfy those things.
- We want a guide to decision-making in the presence of uncertainty; an optimization strategy for making the world better in some sense, whether that is for ourselves, the people we care about, or literally everyone, to some varying extent.
- We want a theory that is flexible enough to accommodate new circumstances, but one that still follows some organizing unity and has coherent order
- We want something that will not produce outcomes that are “intuitively wrong” or which seem anti-social; we don’t want our moral intuitions to be violated.
- This is problematic because we have many moral intuitions (look into Haidt research) and they can come into conflict; we have value “shards” that are incoherent; we do not have systematically/logically-specified preferences, so a self-consistent, logical system will – inevitably? – come into conflict with those intuitions
- I think these criteria are satisfied by formulating an answer to a different question. In fact, I think asking the “question of the good” at all is to be misled. The question we should be asking is, “how do we coordinate with other agents productively?”
- Humans in general seem to have a set of things that we near-universally agree are good: food/water, health/cleanliness, stability, shelter, some level of individual choice/freedom, family, meaningful work, spiritual satisfaction. (cross-cultural psychological research? Make sure this is actually substantiated)
- We do not need a guide outside of this: coordinating to universalize the fulfillment of these preferences will produce the kinds of outcomes we consider “morally good” in almost any definition of such a thing
- This does not require any special epistemic status, any mental gymnastics or metaphysical propositions.
- Good features of asking this question instead of the other one
- We don’t have to do complex epistemic justification for it; it is minimal and simple, elegant. It takes what humans are naturally given and orients it towards good.
- You can say that “this is just another arbitrary choice of value,” call this just another proliferating standard (xkcd), but doing so misses the point: this is not saying “satisfying everyone’s moral preferences insofar as that is possible is abstractly good,” and taking that axiom and going from there to a societywide ‘moral framework’; doing that would essentially be creating a variant of utilitarianism. This is not attempting to create a systematic, abstract, person-independent moral system. It is reorienting the way we think about action, from accounting ourselves to an external universal, to focusing on the concrete people that exist in community together. I have these goals, how do I achieve them in ways that help other people achieve their goals?
- I am saying that “what is good?” is either a meaningless question (asking about objective moral truth) or an individual one (asking what your preferences are). We don’t need to set out this systematic account of “what is good.” We can just get better at doing human things, and this will cover all our bases.
- The whole point is that we are shedding the paradigm of “the Good” entirely – that is not a frame we are using anymore. We are changing the way that we think about action, asking a different question than “what is good”, because answering this question will give us the outcomes we have essentially presupposed (explain this more concretely? Try critiquing and then adjusting in response)
- This is an empirically studyable question – we can get better at answering it. We can bring science and mathematics to bear on it. The dynamics of game theory, decision theory, institution building. (See the toy coordination-game sketch at the end of this outline.)
- But it is also not abstract; it places the focus on understanding others; we must understand others in order to coordinate with them.
- Emphasizes individual moral agency in community organization and leadership – effective action takes place on the superorganismic scale, and focusing on coordination keeps attention there
- Justifies other things I care about? (e.g. creating and preserving works of art, building culture, creating coherent societies)
- Will function well as a cultural paradigm (community-oriented living seems to be highly meaningful)
- Has many options for aesthetic packaging that appeal to many different people
- Difficulties of this question
- Accounting for non-anthropomorphic values – animals, nature, also AI. Who gets moral personhood?
- Possible criteria: agents (world model + preferences + freedom) or subjective experience (may actually be observable or falsifiable) or ability to suffer
- Accounting for conflict with other agents
- How do other moral theories respond to conflict and self-propagation?
- How do we respond to agents that have goals that are antithetical to our own?
- Humans seem to share basic ideas, but culture is extremely powerful; it can create superorganisms that are impossible to coordinate with
- How do we relate to violence? The frame is inherently against conflict; violence can be justified only when there is no way to coordinate and there is an active threat
- This is I think the most likely way that this theory goes wrong: people misinterpreting what preferences mean (thinking it emphasizes superficial values as opposed to deep preferences) or turning coordination mandatory in some sense – losing the libertarian half of the inherent libertarian communitarianism and keeping only the communitarian half.
- Attitude towards proselytization – obviously you want other agents to be better at coordinating in order to coordinate with you better, but this does not require them to shed their moral values (unless their moral values prevent them from working together with others or something)
- What questions does this actually need to answer? Christian moral philosophy, deontology, utilitarianism, etc. don’t actually give answers to everyday questions like how you should respond to your HOA or whatever.
- Should we extend it to game theory? Credible commitments, threats, etc. See Moral Reality Check for some leads
- There is well-developed math and such around this. (see text in Yudkowsky Planecrash review)
- Perhaps the devil is in the details: does this theory actually lead to weird outcomes?
- Dath ilan did it first lol. Don’t want to just take Yudkowsky’s ideas, but he got there first I guess (and I think the ideas are good)
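To gesture at the 'empirically studyable' bullet above: even the simplest version of the coordination question already has well-understood structure. Below is a minimal stag-hunt sketch in Python; the payoff numbers and the update rule are illustrative assumptions, not claims about any real system.

```python
# Minimal stag hunt: coordinating (hunting stag) beats going it alone (hunting
# hare), but only if enough of the population coordinates with you.
# Payoffs and the update rule are illustrative, not empirical.

STAG, HARE = "stag", "hare"

PAYOFF = {
    (STAG, STAG): 4, (STAG, HARE): 0,
    (HARE, STAG): 3, (HARE, HARE): 3,
}


def expected_payoff(my_move: str, p_stag: float) -> float:
    """Expected payoff of a move against a random partner who hunts stag
    with probability p_stag."""
    return p_stag * PAYOFF[(my_move, STAG)] + (1 - p_stag) * PAYOFF[(my_move, HARE)]


def simulate(p_stag: float, rounds: int = 50, rate: float = 0.1) -> float:
    """Crude best-response dynamics: the share of stag-hunters drifts toward
    whichever move currently pays better."""
    for _ in range(rounds):
        gain = expected_payoff(STAG, p_stag) - expected_payoff(HARE, p_stag)
        p_stag = min(1.0, max(0.0, p_stag + rate * gain))
    return p_stag


if __name__ == "__main__":
    # Below the tipping point, coordination collapses; above it, it takes over.
    for start in (0.5, 0.8):
        print(f"initial stag-hunters: {start:.1f} -> final: {simulate(start):.2f}")
```

With these numbers the tipping point sits at 75% stag-hunters: start below it and everyone ends up hunting hare alone, start above it and full coordination locks in. That is the kind of concrete, improvable question "how do we coordinate?" turns into – which institutions, norms, and commitments move a community past the threshold.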