My Intellectual Project

Describing what I'm up to: studying crises, ideas, and AI.

Sep 18, 2025 • 1668 words • #life-updates

On the vanity of artists. — I believe that artists often do not know what they can do best because they are too vain and have set their minds on something prouder than these small plants seem to be that are new, strange, and beautiful and really capable of growing to perfection on their soil. That which in the last instance is good in their own garden and vineyard is not fully appreciated by them, and their love and insight are not of the same order. Here is a musician who, more than any other musician, is master at finding the tones from the realm of suffering, dejected, tormented souls and at giving speech even to the mute animals. Nobody equals him at the colours of late autumn, at the indescribably moving happiness of a last, very last, very briefest enjoyment; he knows a tone for those secret, uncanny midnights of the soul, where cause and effect seem to have gone awry and something can come to be 'from nothing' at any moment; more happily than anyone else, he draws from the very bottom of human happiness and so to speak from its drained cup, where the most bitter and repulsive drops have merged, for better or for worse, with the sweetest ones; he knows how the soul wearily drags itself along when it can no longer leap and fly, nor even walk; he has the shy glance of concealed pain, of understanding without solace, of taking farewell without confession; yes, as the Orpheus of all secret misery he is greater than anyone, and he has incorporated into art some things that seemed inexpressible and even unworthy of art, and which could only be scared away and not be grasped by words in particular - some very small and microscopic features of the soul: yes, he is master at the very small. But he doesn't want to be! His character likes great walls and bold frescoes much better! It escapes him that his spirit has a different taste and disposition and likes best of all to sit quietly in the corners of collapsed houses - there, hidden, hidden from himself, he paints his real masterpieces, which are all very short, often only a bar long - only there does he become wholly good, great, and perfect; perhaps only there. - But he doesn't know it! He is too vain to know it. (Aphorism 87, bolding my own.)

In the past year my interests have begun to cohere. This is a blog post explaining what I'm up to, intellectually speaking, particularly since it's not obvious on its face how AI and philosophy/intellectual history fit together.

The exigence is this: I think the world is going to change a lot in the next decade. Probably the most important source of this change is going to be the further development of AI. Even the AI we have right now is powerful and philosophically interesting; if AI progress stalled today we'd be left with something fascinating and economically valuable (although somewhat limited in scope to tasks that can be done digitally). But I don't think AI progress is going to stop anytime soon, and I think that this progress is going to have profound societal, economic, and philosophical implications.

At my core, my deepest drive is an impossible-to-shed desire to read, to think over and re-evaluate ideas, to search for firm ground in the churning of the world. This fulfills some deep longing within me, and it can take many forms — historical, philosophical, mathematical, technological, etc. I hope not to be too vain to paint, as Nietzsche calls it, my 'real masterpiece.' So seeing as I'm drawn to the kinds of questions that studying AI allows me to ask, and that AI provides more grounding for those questions than we have had at any time before in history — questions involving intelligence, agency, language and meaning, preferences and moral worth — I have come to a project with many arms and shapes but with the same essence, centered on AI and the change that it has brought and will bring about.

In a sentence: I study AI-driven technological change in its technical, historical, and philosophical aspects. I want to contribute to the human project through my technical research, through my scholarship (providing meaningfully useful understanding of the dynamics of ideas and power in times of crisis), and through competent use of that knowledge by me and others down the line.

As perhaps suggested by the tripartite phrasing of the pitch above, there are three core pillars to my efforts:

  1. I want to understand "what AI is", how it works, what kind of philosophical status it has as an object or entity, and how we can keep it in check; this involves significant technical understanding. I produce technical research that aims both to inform answers to these questions and to provide actual utility for the world, for science, and for building better, more aligned models.
  2. Ideas matter a lot in times of crisis. I want to understand how ideas in philosophy have historically shaped responses to crises and structured how people cope with technological change. This is the "intellectual history" part of my project; I want to put myself on firm historical grounding, insofar as that is possible in an unprecedented situation. I want to bring this historical study to bear on the present: which ideas are right now shaping, consciously or not, the ways that we talk and think about AI? How do ideas from the academy like cybernetics or accelerationism influence decision-making in AI companies and legislation? Whose water are we swimming in? This extends into other disciplines — international relations, for example — though of course I must keep the scope limited enough to be actually tractable.

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. (John Maynard Keynes, 1936)

Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable. (Milton Friedman, 1982)

Thanks to Prof. Jennifer Burns for these quotes; they have stuck with me.

  3. I seek out the right vocabulary for talking about humans and the human condition in the age of intelligent machines, to figure out which kinds of philosophy — if it is indeed philosophy at all that we need, as opposed to mathematics or economics or social science or something else — can provide the most powerful and attractive interfaces for our position as humans in changing relationships with our own power, nature, and cognitive labor. This involves extensive reading (which I have only barely made a dent in so far), including classical and early modern political theory, theological treatises on human nature, various strands of 20th century continental philosophy and social science, philosophy of mind, cybernetics, transhumanism and posthumanism, philosophy of mathematics, and the philosophy of language, to name a few. (I'm also interested in the political theory and philosophy of East Asia, though this is not a developed interest by any means.) This search is informed by my technical understanding and by my historical study; I'm interested in fitting formal and qualitative descriptions into one another. I want to create life-affirming, vitalizing new narratives for human life in and after the scaling era (if we survive it, of course).

Interesting philosophy is rarely an examination of the pros and cons of a thesis. Usually it is, implicitly or explicitly, a contest between an entrenched vocabulary which has become a nuisance and a half-formed new vocabulary which vaguely promises great things. (Richard Rorty, Contingency, Irony, and Solidarity, p.8)

What we require for an entity to be ‘dispensable’ is for it to be eliminable and that the theory resulting from the entity’s elimination be an attractive theory. (Perhaps, even stronger, we require that the resulting theory be more attractive than the original.) We will need to spell out what counts as an attractive theory but for this we can appeal to the standard desiderata for good scientific theories: empirical success; unificatory power; simplicity; explanatory power; fertility and so on. Of course there will be debate over what desiderata are appropriate and over their relative weightings, but such issues need to be addressed and resolved independently of issues of indispensability. (Stanford Encyclopedia of Philosophy, Indispensability Arguments in the Philosophy of Mathematics)

(These two quotes point to the same idea: your ontology influences the way you predict and take actions in the world, and you can choose a different, better ontology. This is a big part of what I'm trying to do in the third pillar: find the right terms, carrying the right ontology, for this age.)

Now, note that I cannot fit everything in my life into one project. There are simply other things I'm interested in that aren't about AI or technological change, things I love for themselves, like music, backpacking, triathlons, literature, Minecraft, and my friends. Those that do not fit here tend to end up on the Aesthetics page.


P.S. I'm looking for funding for any or all of these aspects of my research, to enable me to pursue this instead of focusing on work that is more immediately lucrative but less substantial in the long term; the point of taking intellectual risks is that they don't immediately bear fruit, and in the meantime I need to pay for my college tuition and living expenses and work towards financial security. If you are interested in some part of this project and want to support it, or know someone I should reach out to, please contact me.