30 Comments
Neural Foundry:

Your journey really captures the value of what you call eclectic thinking. The way you integrated Hayek's knowledge problem with continued egalitarian commitments shows intellectual honesty that's rare. Most people either fully embrace market fundamentalism or reject economic constraints entirely. Recognizing that centralized planning fails while also accounting for brute luck and the arbitrariness of birth circumstances feels like the right synthesis.

Rafael Ruiz:

Thanks! I still feel a bit uneasy, because not many people share my view. It takes a long time and a lot of reading to develop such a view, and most people don't work in political philosophy as their job!

Manuel del Rio:

This was a very interesting post! I more or less had your political coordinates localized - I mean, EAs mostly fall into what... was it Tyler Cowen?... someone once called 'the reasonable and good left'. The right equivalent is yet to be found...

Your piece also made me think for a while about my own evolution, which is quite different from yours but, per the Intermediate Value Theorem, there must have been a moment when our views met at f(c) = L. I started in my teens as a really dogmatic Marxist-Leninist, and that continued to be my gig until at least my very late 20s or early 30s. One book that really forced me to update my beliefs was Alec Nove's The Economics of Feasible Socialism, although my change was more of a little trickle-down effect over many years. I also remember a book that really surprised me in my early 20s and that actually gave me intellectual tools for rejecting the Marxist vision of class, and which landed in my hands by a misunderstanding: Frank Parkin's Marxism and Class Theory: A Bourgeois Critique.

Right now I'd say I've reached a stage of very robust centrism, with intellectual inclinations towards more liberal and pro-capitalist views but a more pragmatic and perhaps self-serving alignment with a big chunk of our European welfare state policies. I really liked The Road to Serfdom, and am eager to read more Hayek (I've got The Constitution of Liberty next to my bed). And I learned a ton this year from reading Mankiw. I have reached a state in which I am actually hostile to equality as a terminal goal and an end in itself. I've also always been very anti-woke (even when I was a Marxist!), which is an area where we don't overlap at all. I might be joining Richard Ngo's 21st Century Civilization curriculum and discussion groups these days, if I can get them to fit my timetable.

Rafael Ruiz:

RE: Marxism and our journeys.

Very interesting! You must have read Joseph Heath's article on the death of Western Marxism? It shows some of the struggles that Marxists had trying to update their theory. Jon Elster is the analytic Marxist I've read the most, and I will never get tired of recommending his "Introduction to Karl Marx" (which is pretty advanced, not really an introduction...) or "Making Sense of Marx", among his other books. I completely skipped over that phase in my writing, but I read a lot of that in undergrad. It was pretty brilliant, but it also contributed to me moving away from leftist dogmatism. Not a lot of people (that I know of) have picked up the mantle from that analytic Marxist generation and tried to carry the "let's update Marxism" torch into the 21st century. Make of that what you will.

I do think that there is a reasonable right, covered by people like Hayek, Popper, Sowell, and Edmund Burke. The core idea is that we should be slow and precautionary about enacting social change. Another wing is made up of people well-informed about economics who are much more liberal and less egalitarian. But I'm more skeptical about other people on the right, particularly outside of academia and out on the street.

RE: Equality.

Perhaps against the woke, I don't think pure equality should be a goal in itself, because it falls prey to what Parfit calls "the levelling down objection" in his article "Equality and Priority". You've probably heard of it, but I'll do a quick rehash. The core idea is that if you have a rich guy and a poor guy, one way to reach equality is by making the rich guy poor. In fact, this is often easier: just burn the money or resources rather than creating more wealth. Problem solved, equality achieved!
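To make the structure of the objection explicit, here's a toy numerical version (my own numbers, not Parfit's):

```latex
% Toy levelling-down case (illustrative numbers, not Parfit's own).
% Distribution A: rich guy at welfare 10, poor guy at welfare 2.
% Distribution B: both at welfare 2, after burning the rich guy's wealth.
W_A = 10 + 2 = 12, \qquad W_B = 2 + 2 = 4.
% B is perfectly equal, yet it is worse for one person and better for
% no one. Pure egalitarianism still ranks B above A on equality alone.
```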

Egalitarians are kinda forced to say that equality is a pro tanto good, but might not be the only good, and that the loss here outweighs the gain. But that's a much weaker position! I seem to remember that Gerald Cohen (the analytic Marxist) had a bit of an existential crisis when reading about this problem.

Parfit's solution is to basically give up on the *relational* aspect of equality. What we should do, instead, is what he calls "prioritarianism". One way of understanding prioritarianism is to start from utilitarianism and then "bend" the utility function, so that people who are worse off carry more moral weight when they are benefitted. (The technical details are spelled out by Adler in Chapters 2 to 4 of "Measuring Social Welfare", where he compares all these views. I'm fortunate that I get the chance to teach this stuff and discuss it here at the LSE!)

The difference is that this compares *absolute* levels of welfare, so that raising people's quality of life is always good, lowering it is always bad, and we care more about the worse off.
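For readers who like it spelled out, here's a minimal formal sketch of that "bending" (my own toy illustration; Adler's treatment in "Measuring Social Welfare" is far more careful):

```latex
% Prioritarian social welfare: apply a strictly increasing,
% strictly concave transform g to each person's welfare w_i.
W = \sum_i g(w_i), \qquad g' > 0, \quad g'' < 0.
% Toy example with g(w) = \sqrt{w}: a one-unit benefit to someone at
% w = 1 adds \sqrt{2} - \sqrt{1} \approx 0.41 to W, while the same
% benefit to someone at w = 100 adds \sqrt{101} - \sqrt{100} \approx 0.05.
% Raising welfare is always good, but the worse off count for more,
% and there is no gain from levelling anyone down.
```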

But it can mean that some of "the woke" are wrong to pursue pure equality, particularly if the idea is degrowth, deindustrialization, or some pro-indigenous and anti-modern society ideas. Having seen how huge proportions of people in Latin America live, I think actively pursuing that goal would be a huge blunder of civilizational proportions. This is where I think "the woke" and "right wing populists" need to learn economics, read some Steven Pinker and Hans Rosling, browse Our World in Data, Gapminder, etc., otherwise we're fucked.

Manuel del Rio:

Really insightful comment, as usual, but then I've been spoiled and expect no less of you.

I haven't read Joseph Heath's article, but a priori, I feel any attempt to update Marxism is misguided anyway. Its economics are all wrong, its theory of history is all wrong, and its political proposals have been either inefficient and naive or horribly efficient but for dystopian ends. Its sociology is wrong. Maybe the philosophy, as in the 1844 Manuscripts? But the discourse on alienation is an indictment more properly of modernity and complexity, not of capitalism, and better done by others (perhaps Durkheim?).

I agree with the list of reasonable-right voices you mention. What I meant to say was rather that, just as we have a group that encapsulates what, mimicking Tyler, I'd call 'the good left' (i.e., EA), I don't quite see a group encapsulating 'the good right'. Rats don't quite fit that. Maybe Ngo?

As long as I feel anti-realism is the basic ground truth of ethics, I can't really feel I can speak about the good, but rather about my preferences (which, overall, are quite what you'd expect from a nerdy, WEIRD person). To the degree I grasp my utility function, I'd describe truth-seeking and a rejection of deceptions (including of self) as my strongest, terminal value, possibly followed by 'life, freedom and the pursuit of happiness'. Like, I've come to see most talk of inequality as 'evil' as envy and zero-sum mentality, a product of malfunctioning evolutionary traits. I assume as a base rate that humans, whatever our similarities, are sufficiently different that such differences have very big effects on outcomes in complex societies, and that as long as they don't ossify into some hereditary caste or excessive barriers to some degree of equality of opportunity, they are okay. Much as I dislike Rawls's Veil, I do think a society I'd like to live in (emphasis on the arbitrary subjectivity of my likes and dislikes) is one that should go a long way towards creating a big enough safety net that the quality of life of the less well-off is always being raised. But everything boils down to trade-offs. In Europe we go heavy on the latter at the cost of really high taxation and stunted or non-existent growth, which I don't feel is sustainable in the long run.

Rafael Ruiz:

Ah that clarifies your position a bit better, yeah. Sorry I had misunderstood the point about Ngo being a reasonable right-winger.

RE: Marxism.

The Joseph Heath article is on his blog. It's pretty enjoyable to read and got a lot of online traction in the philosophy community some months ago, even if some of the historical details are out of order. It explains why analytic philosophy pretty much abandoned Marxism: https://josephheath.substack.com/p/john-rawls-and-the-death-of-western

I do agree that Marxist theory is seriously flawed. I think Jon Elster made a valiant effort trying to update Marxism, but had to abandon half its tenets anyway. The last chapter of his book on Marx outlines it quite well: https://www.cambridge.org/core/books/abs/an-introduction-to-karl-marx/what-is-living-and-what-is-dead-in-the-philosophy-of-marx/1DED32594AD69F73AB31DC4284080A22

Here's my summary of Elster's chapter, kindly formatted by ChatGPT:

What's dead in Marxism.

(1) “Scientific socialism.” There are no laws of history with “iron necessity”.

(2) Dialectical materialism. It adds nothing serious to philosophical materialism. The “laws of dialectics” are either trivial or vague.

(3) Strong teleology and functionalism. History conceived as a process with an intrinsic goal (“humanity” or “capital” as subjects). Functional explanations of the form “this exists because it benefits capital” without any mechanism or underlying laws.

(4) Marxist economic theory (almost all of it). Labor theory of value: poorly defined concepts, with no real theoretical use. Theory of the falling rate of profit: logical errors and empirically false. Other crisis theories are formulated so vaguely that they cannot even be assessed.

(5) The theory of productive forces and relations of production. The idea that property relations rise and fall according to their effect on the development of the productive forces lacks microfoundations. It is more plausible that institutions track the maximization of surplus, not of innovation. Moreover, it is inconsistent with Marx’s own historical descriptions.

(6) Parts of the theories of alienation, exploitation, class, politics, and ideology. Tainted by wishful thinking, unsupported functional explanations, and arbitrariness. They are not completely “dead”, but they are seriously damaged.

What is still alive in Marx (according to Elster)

(1) A version of the dialectical method. Analysis of “social contradictions” as paradoxes of collective action and unintended effects (fallacy of composition, social prisoner’s dilemmas).

(2) The theory of alienation and conception of the good life. But as an ideal of individual self-realization, not of “Humanity” as a collective subject. Aristotelian view of developing one’s “species powers” as opposed to mere passive consumption (although it requires qualifications and limitations).

(3) The theory of exploitation and notion of distributive justice. Exploitation as working more than is necessary for one’s own consumption due to physical/economic coercion or necessity. Underlying principle: “to each according to their contribution”, with corrections for special needs. Exploitation is not a final moral concept, but it is a good heuristic indicator of injustices.

(4) Theory of technical change. Fine-grained analysis of how technology is used to discipline or fragment the workforce. The organization of work and workers’ resistance affect costs, wages, and incentives to innovate.

(5) Theory of class consciousness, class struggle, and politics. Good insights into when a group moves from being a “class in itself” to a “class for itself”. Attention to obstacles: spatial isolation, turnover, cultural heterogeneity. Valuable ideas about class coalitions and the relative autonomy of the state, although it underestimates non-class conflicts (nation, race, religion, etc.).

(6) Theory of ideology (to be revised). As it stands, it is functionalist and somewhat “magical”. But Elster thinks it can be salvaged if given microfoundations, drawing on cognitive psychology, biases, and the formation of beliefs and preferences.

RE: Your latter point on WEIRD values and anti-realism.

I feel myself going more in an anti-realist direction too, although not as far as you. One of the people that pushed me in that direction (aside from you!) was Tyler John. I used to think we would reach a more universal moral convergence under very idealized conditions: for example, full information, reasonable values without obsession over a particular value due to something like mental illness... basically a long list of what Rawls calls relying on "considered judgements" instead of just "intuitions". Like the view espoused here by Eric Schwitzgebel: https://eschwitz.substack.com/p/a-metaethics-of-alien-convergence

But lately I’m more skeptical. Even with full information and rational reflection, different beings might land on different equilibria. The quick reasons are:

(1) different positions might reach coherent reflective equilibrium without necessarily converging on a single point, and

(2) very different beings, such as rational aliens or artificial general intelligences, might converge on different values, because their biological makeup is very different, so their weighing of values (e.g. valuing their offspring vs moral impartiality) will be very different. With AI this would go even further, since it won't even have a carbon-based biological nature and won't have evolved by natural selection. This different weighing and behavior due to different biological makeup is already the case with animals like eusocial insects, which are *much* more willing to sacrifice themselves for the collective due to Hamilton's rule: they're all diploid daughters of a single queen and a haploid drone father (who has one set of chromosomes rather than two), so full-sister bees share on average 75% of their genes with each other, as opposed to 50% in human siblings.
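The arithmetic behind that 75% figure is the standard textbook derivation (nothing specific to this thread):

```latex
% Hamilton's rule: an altruistic trait is favored when rB > C, where
% r = genetic relatedness, B = benefit to recipient, C = cost to actor.
rB > C.
% Relatedness between full-sister bees under haplodiploidy
% (haploid drone father, diploid queen mother):
r = \underbrace{\tfrac{1}{2} \cdot 1}_{\text{paternal genes, identical in all sisters}}
  + \underbrace{\tfrac{1}{2} \cdot \tfrac{1}{2}}_{\text{maternal genes}}
  = \tfrac{3}{4},
% versus r = 1/2 for ordinary diploid full siblings, so the threshold
% for self-sacrifice is substantially lower for eusocial sisters.
```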

So this leads me to a more pluralistic picture, I guess, not total anti-realism, but no guarantee of a single rational endpoint for ethics either.

Phoenix:

I almost entirely agree with you on the issues, yet don't identify as left. Maybe we just have different perceptions of what the 'left' represents?

Rafael Ruiz:

Might be! Like how much we emphasize issues such as wealth redistribution (e.g. Rawlsians and many leftists on economic issues) vs cultural issues (e.g. cultural appropriation) vs other considerations (e.g. precautionary principles about changing the status quo too much, like Hayek and Popper).

I would probably be seen as very leftist on wealth redistribution, for example. But substantially less on culture war things, which I think can often get out of hand online and become "mobbing".

Darby Saxbe:

"Nuancepilled" is really good.

Joel da Silva:

Nice article.

Two points. First, your claim:

“Ideal moral theory is the idea that if we find the perfect moral principles, people will adhere to them.”

is false. Ideal theorists aren’t committed to (and don’t generally hold) the view that if we find the correct moral principles everyone will comply with them.

Also, Valentini has a much more nuanced view about ideal theory's value than you let on; she's a moderate supporter of it, not a Huemer- or Mills-style critic.

Rafael Ruiz:

There are different interpretations of what ideal theory means. For Rawls, "ideal theory" means "perfect compliance theory". For other authors, it doesn't.

Point taken on Valentini; I was simplifying and pulling from memory.

Joel da Silva:

It’s debatable that ideal theory = strict compliance theory for Rawls. But rather than get into why that is, I’ll just point out that the ‘strict compliance theory’ reading of ideal theory is pretty much never interpreted in the way you said it was: as a prediction about what will happen if we find the correct moral principles (specifically, the prediction that everyone will comply with them).

It's a claim about which principles are correct - specifically, the claim that they're the ones that would produce the best kind of society/world IF everyone were to comply with them.

Strict compliance is not something that ideal theorists predict will magically happen if we discover the right principles, it’s a background assumption they claim we need to make in order to identify which principles are right in the first place.

Rafael Ruiz:

Yes, that's correct. I'll make some edits to fix that.

LV:

This is a great reading list. Unlike you, I didn’t have the benefit of studying philosophy at a young age, but over the course of my life, I took a similar trajectory of becoming more liberal.

When I was about 18 years old, I began studying economics. The theoretical efficiency of competitive markets convinced me of the general advisability of market libertarianism. (My view was untainted by knowledge of market failures and complications, for some reason.) At the same time, probably as a reaction against my religious upbringing, I also became very much a social libertarian who believed that anything that didn't directly harm another person should be legal.

I had a weak view of morality as a general concept. I thought that because human beings were literally animals, there was no morality, nor any objective good or evil. My view of morality was strictly teleological or consequentialist. I believed that if you wanted to have a prosperous and secure society, then you had to have x or y, in the same sense that if you wanted a peanut butter and jelly sandwich, you had to have peanut butter, jelly, two slices of bread, and a knife to spread the condiments. I didn’t think there was anything good or evil about it. I thought that shoulds could only be derived from clear goals that everyone agrees on.

I recognize that this is the kind of belief system of a reasonably intelligent teenager who hasn't thought through a lot of contradictions. (It also seems to resemble some of what I hear from young folks on Substack.) If I had started studying philosophy at an earlier age and read a little more, I would have recognized early on that many of my beliefs were riddled with holes like Swiss cheese.

Rafael Ruiz:

I think we had a similar journey on how our views evolved over time!

I spent a really long time (my undergraduate and master's thesis) working on these issues of metaethics, moral epistemology, etc. Of course, I don't have the answers, but at least my views on these things have become a bit less naive.

I am still troubled by problems of ethical anti-realism and relativism, but I think we can at least attempt to reach some degree of intersubjective consensus on some issues, given some shared facts about human nature: e.g. not killing for fun, some basic stuff about basic equality and liberty, some basic stuff about avoiding suffering and disease... and then building from there.

Also, improving our knowledge about non-moral facts can lead us to flourishing even if there is disagreement on morality. For example, many moral views would approve of the world becoming better educated, improving quality of life, improving the moral virtues, etc. We should take advantage of that!

I published on my views a while ago (in Spanish). If you're interested, you should be able to translate it to English with ChatGPT or Google Translate. The paper actually starts with that analogy of whether ethical views are like "I prefer chocolate ice cream to vanilla"! https://www.academia.edu/45065609/La_refutaci%C3%B3n_de_la_%C3%A9tica_a_trav%C3%A9s_de_la_biolog%C3%ADa

Substrel:

Thanks for sharing your journey. I think you should look into David Graeber and others for adding a lot more nuance to the "most of human existence before industrialization has been poverty and misery" trope. And then I think it's sad that you took the technocratic flavour at the end - Hayek and systems theory could have emboldened the potential for what happens if we collectively and dynamically make true direct democratic decisions (like in Switzerland) and don't only vote for parties or succumb to technocrats.

Given your academic trajectory up to PhD I think there is a natural tendency to be inclined to technocracy (after all, these studies of yours should elevate you intellectually over the mass, no…/s) so maybe check against that and give genuine egalitarian swarm intelligence a chance, not only prices. (Best pointer in terms of authors for this is perhaps Audrey Tang atm)

Rafael Ruiz:

Thanks, this is helpful!

I've read a bit of the Graeber book (I even went to the book launch!), but the overall sentiment in the book's reception was that it was badly argued and that its claims go against the mainstream consensus. (There are plenty of Substack reviews about it.)

It is true that I am going more in a technocratic direction. Though there's some stuff where I concede democratic methods are better than leaning on experts (e.g. the Condorcet Jury Theorem). But right now I'm in the middle of writing a chapter on moral expertise (e.g. relying on the consensus judgement of moral philosophers rather than your own judgements to make moral decisions), so I'm on a bit of a technocratic kick. I also see plenty of content on social media from people who are *really* ignorant (e.g. I recently saw a video of people who don't believe in dinosaurs but believe in dragons, because dragons are in the Bible and dinosaurs aren't…), and I know some people like this in real life, which I think sadly has the effect of polarizing me towards technocracy, even when I try to avoid it.
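(For what it's worth, the Condorcet Jury Theorem mentioned above is easy to check numerically. Here's a minimal sketch, with made-up competence values purely for illustration:

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a majority of n independent voters,
    each individually correct with probability p, gets it right."""
    assert n % 2 == 1, "use an odd n to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# If each voter is only slightly better than chance (p = 0.6),
# group accuracy still climbs toward certainty as the group grows:
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.6), 4))
# -> 0.6, ~0.75, ~0.98, ~1.0
# With p below 0.5 the same sum collapses toward 0 instead,
# which is why the theorem cuts both ways for democracy vs experts.
```

The flip side is the part that matters for the technocracy worry: the theorem only favors majorities when average individual competence is above chance.)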

It's also true to an extent that doing a PhD can make you more epistocratic/technocratic. Once you've been reading for years, it makes you realize that people can have very naive but confident opinions about your area of expertise. :(

Nevertheless, I'll try to finish Graeber's book and engage with the authors you mentioned when I get the time. I think it's good to have my worldview challenged!

Steve Smith:

Completely WEIRD post!

Imagine making policy for this vision.

Rafael Ruiz:

I don't think my worldview is going to become mainstream anytime soon, although I do think it's the best path forwards. Effective Altruists and transhumanists are at the forefront of these discussions, as well as some philosophers and people in tech.

I do talk about WEIRD values in my PhD thesis. My main line of argument would be that there is an asymmetry between WEIRD values and non-WEIRD values, similar to, although weaker than, the asymmetry between the better epistemic standing of WEIRD science and that of non-WEIRD, traditional science or pre-scientific worldviews.

Here's an excerpt from the thesis, regarding the worry that the theory of "the expanding moral circle" is an unjustified expression of liberal western ideology. Other authors go more in-depth on some aspects than I do, and it's an avenue for future work, so I will also provide a bibliography:

"This raises a serious relativist objection. If defenders of the expanding moral circle simply assume that factors such as skin color, gender, or sexual orientation are morally irrelevant, then their account seems circular. They appear to presuppose, without independent justification, the very liberal egalitarian standards they later invoke to argue for moral progress. The theory, then, may end up merely reflecting widely shared contemporary values among the western liberal academics writing about it, rather than providing an independent foundation. In that picture, the theory risks just being a report of the moral views of contemporary WEIRD (Western, Educated, Industrialized, Rich, and Democratic) analytic moral philosophers, rather than offering an independent justification (cf. Henrich 2010; 2020; Inglehart and Welzel 2005; 2008; 2018; Welzel 2013).

The way out of such a dilemma is to either (1) use standards of evaluation that are maximally non-controversial, arguing for the abolition of slavery and similarly uncontroversial progressive events while remaining agnostic about any other cases of progress, or (2) provide a theory of error that defuses the peer disagreement, perhaps by pointing out that others are not actually our moral epistemic peers. I think we should aim to do some of both, since providing a theory of moral error would let us be more ambitious with our theory of progress than what we could achieve otherwise. That means we should show that moral dissenters are not approximate epistemic peers, given problems of mistaken empirical beliefs, systematic bias, upbringing in bad environments which leads to unreliable moral intuitions, or failure of higher-order consistency, among other potential problems.

The worry with providing such a theory of error is a worry about “modern western smugness”. Jesse Prinz illustrates this worry for cases where we are looking at past instances of moral progress, and thinks we might be too self-congratulatory. To focus on past moral changes after our moral revolutions took place is to use the perspective of our current modern values to assess past values as “defective, corrupt, or otherwise worse than our own”, and to endorse them “for the trivial reason that we embrace our present values and no longer embrace our past values” (Prinz 2007, p. 289; Sauer 2023, p. 89).

Furthermore, a defender of relativism concerned about smugness could borrow from Jonathan Haidt’s Moral Foundations Theory, which says that many communities grant weight to moral foundations like Loyalty, Authority, and Sanctity, and not just to the enlightenment values of Care, Equality, and Fairness (Haidt 2012; Graham et al. 2013). If so, perhaps traits tied to purity, status, or in-group identity really are morally relevant within those moral ecologies. Haidt’s conclusion is to say that western liberals should attempt to re-acquire these moral intuitions in order to be able to understand conservatives and find centrist political ground (Haidt 2012).

But Moral Foundations Theory is a descriptive psychological theory, it maps which intuitions different groups report and how strongly they feel them. It does not show that those intuitions track sound moral reasons. In fact, when we stress-test some of the canonical cases driving those intuitions (disgust reactions, deference to authority, ingroup favoritism) they often crumble as morally unreliable. So the mere fact that some cultures treat sex roles, caste, or ritual purity as salient does not yet give us a reason to treat them as normatively sound differences. That is, they are debunking explanations rather than vindicatory ones (as shown, ironically, by Haidt 1993; 2001; see Sauer 2015 for examination of this contradiction).

(...)

So we may want to say that modern values are historically reliable and vindicated, while traditional values are debunked. Putting forward a full case for the process of modernity as a self-reflexive process of elevated epistemic status towards both moral and non-moral truths would be too ambitious and complex. It might require a large, detailed analysis of changes in our circumstances in terms of material, intellectual, and communication goods that promoted a social learning process, yielding an epistemic superiority of modern societies over traditional ones, which is far more than what I can do here. Still, there is a line of theorists working towards a vindicatory genealogy of modernity in this direction (Smyth 2024; Blunden 2025a; forthcoming; Welzel 2013; Habermas 1984; 1990; Williams 2005, Ch. 7. For critical pushback see Allen 2016; Zhou 2024).

Though here, to avoid confusion, it is worth keeping in mind the distinction drawn by Joseph Heath (2004) between modernization (the functional differentiation and institutionalization of science, law, markets, bureaucracy, and public spheres that improves collective problem-solving) and westernization (the uptake or imitation of specifically Western cultural beliefs and behaviors). Even if some people in modern societies are epistemically superior to those in traditional ones, because modern science yields better empirical beliefs and methods and also provides better environments for moral learning, that would only support modernization as a process of social learning, not westernization as cultural mimicry. In other words, the claim is not the superiority of any Western form of life, which is compatible with a theory of multiple modernities, local path-dependencies, and criticisms of western imperialism. Nothing here licenses a conflation of epistemic gains with a moral mandate to export Western culture. At most, it just might provide a pro tanto reason favoring building the (scientific, free press, democratic, etc.) institutions that support this process of social learning and contestation of social norms."

The bibliography, in the order mentioned, is the following. I would probably start with Smyth (2024), just because it's a short paper rather than a full book.

Henrich, Joseph, Steven J. Heine, and Ara Norenzayan. 2010. “The Weirdest People in the World?” Behavioral and Brain Sciences 33 (2–3): 61–83.

Henrich, Joseph. 2020. The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous. New York: Farrar, Straus and Giroux.

Inglehart, Ronald F., and Christian Welzel. 2005. Modernization, Cultural Change, and Democracy: The Human Development Sequence. Cambridge: Cambridge University Press.

Welzel, Christian, and Ronald Inglehart. 2008. “Democratization as Human Empowerment.” Journal of Democracy 19 (1): 126–140.

Inglehart, Ronald F. 2018. Cultural Evolution: People’s Motivations Are Changing, and Reshaping the World. Cambridge: Cambridge University Press.

Welzel, Christian. 2013. Freedom Rising: Human Empowerment and the Quest for Emancipation. Cambridge: Cambridge University Press.

Prinz, Jesse J. 2007. The Emotional Construction of Morals. Oxford: Oxford University Press.

Sauer, Hanno. 2023. Moral Teleology: A Theory of Progress. London: Routledge.

Haidt, Jonathan. 2012. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon.

Graham, Jesse, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P. Wojcik, and Peter H. Ditto. 2013. “Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism.” Advances in Experimental Social Psychology 47: 55–130.

Haidt, Jonathan, Silvia Helena Koller, and Maria G. Dias. 1993. “Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?” Journal of Personality and Social Psychology 65 (4): 613–628.

Haidt, Jonathan. 2001. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108 (4): 814–834.

Sauer, Hanno. 2015. “Can’t We All Disagree More Constructively? Moral Foundations, Moral Reasoning, and Political Disagreement.” Neuroethics 8 (2): 153–169.

Smyth, Nicholas. 2024. “A Vindicatory Genealogy of Emancipative Values.” Inquiry: An Interdisciplinary Journal of Philosophy.

Blunden, Chris T. 2025. “Moral Prosperity: Basic, Instrumental, and Vindicated Moral Progress.” PhD diss., Utrecht University.

Blunden, Chris T. Forthcoming. “A Vindication of the Value of ‘Choice’.” Inquiry: An Interdisciplinary Journal of Philosophy.

Habermas, Jürgen. 1984. The Theory of Communicative Action, Volume 1 and 2. Boston: Beacon Press.

Habermas, Jürgen. 1990. Moral Consciousness and Communicative Action. Cambridge, MA: MIT Press.

Williams, Bernard. 2005. In the Beginning Was the Deed: Realism and Moralism in Political Argument. Edited by Geoffrey Hawthorn. Princeton, NJ: Princeton University Press.

Allen, Amy. 2016. The End of Progress: Decolonizing the Normative Foundations of Critical Theory. New York: Columbia University Press.

Zhou, Jinglin. 2024. “Moral Progress and Grand Narrative Genealogy.” Inquiry 67 (7): 2197–2236.

Heath, Joseph. 2004. “Liberalization, Modernization, Westernization.” Philosophy and Social Criticism 30 (5–6): 665–690.

SZ:

I really appreciated someone identifying as “left” who still appreciates the insights about human nature, metaethical anti-realism, the limits of knowledge, and the power of markets that most leftists do not seem to understand. As a libertarian/classical liberal working in academia, I see my left-wing colleagues as all being incredibly uncritical about their egalitarian/progressive/social justice intuitions.

It does seem to come down to intuitions, with egalitarianism being central for this author. I, for example, have just never been swayed by egalitarianism. I don’t see it as anyone’s duty to try to make the world a better place, to create utopia. I have more conservative intuitions and embrace what Sowell calls “the tragic vision” of life. (It's also why I find effective altruism and transhumanism to be just more utopianism!) I suppose as a result I was deeply persuaded by Nietzsche’s critique of altruism when I was an undergrad philosophy student.

Still, moral luck is the major challenge for someone with my classical liberal/libertarian position. Ultimately, one who values individualism (and does not believe there is any objective morality aside from individual self-interest) must say “such is life”, because while people may not deserve much of what they have, no one else does either! So we are back to Nozick's critique of distributive justice and Hayek's critique of “social justice” as being ultimately most persuasive.

skaladom:

> I really appreciated someone identifying as “left” who still appreciates the insights about human nature, metaethical anti-realism, the limits of knowledge, and the power of markets that most leftists do not seem to understand.

I guess there are different ways of getting to "left"... I never cared for the Left's classic ideological basis, i.e., Marxism and all its derivatives, and I find the insights you list above much more interesting and real. Yet I've usually voted left, because unchecked capitalism produces wealth, inequality, and bad externalities, all in considerable amounts, and the last two need to be counterbalanced somehow.

Justin Mindgun:

I would consider myself part of the "new right" mainly because I became a hereditarian. I would still consider myself to be an egalitarian, but I accept the limits that genetics places on individuals and groups. I really feel that what is holding back left-wing thought right now is the radical belief that 100% of the differences between groups are environmental and 0% genetic (or, to put it more accurately, that groups don't exist).

I hear so many stories of people becoming right-wing due to genetics but no similar stories of genetics leading people to become more left-wing. Why is that?

Rafael Ruiz:

I think there are plenty of leftists who admit that there are >0% genetic differences. And I seem to remember Peter Singer writing a book called "A Darwinian Left", although I haven't had the time to read it, to be honest. I think many people would probably argue that genetic differences are pretty widely distributed, so they aren't specifically tied to groups. Without being an expert, I'm skeptical that genetic differences explain most facts about the differential development of countries, rather than those differences being mostly due to a mix of culture, institutions, technological breakthroughs, geography, resources, etc.

I think people like Koyama and Rubin in "How the World Became Rich" give a good summary of the different theories. Acemoglu and North have good stuff on institutions (compare Mexico to the Southern US, or North Korea to South Korea...), Joel Mokyr (the most recent Nobel Prize winner in Economics) has good stuff about ideas and technology, Ian Morris and Jared Diamond talk about the impact of geography and resources on old societies, which yields historical divergence, Ian Morris again and Vaclav Smil talk about energy capture over the centuries and how it sets societies up towards hierarchy or egalitarianism, and Joseph Henrich has the massive "WEIRDest People in the World" book on modernization and cultural evolution, and how there's a complex process between growing market economies and greater moral impartiality, etc.

The story is complex and I don't know how to weigh which factors are most important, but I find a combination of such factors more powerful than genetics in determining the long run of history. Though I admit that genetic factors aren't zero, while remaining a leftist.

Marios Richards:

> To be honest, I was also probably a bit annoyed with socialists

I have a lot of sympathy with these ‘political journey’ posts, although I personally did not grow up particularly political (beyond the context of growing up lower class/on a council estate).

Nonetheless, I still went through a ~5 year period of A Tree Doesn’t Fall In The Woods Unless Someone On The Liberal Left Has Acted Improperly.

The worst part is that I was kind of aware of it and the thinness of the excuses.

I don’t think there’s anything deep to it beyond “obsessing about the flaws of the liberal-left is fun and easy, merely not-actively-filtering-out the flaws of the authoritarian-right is miserable and hard”.

But there comes a time when you have to kick away the crutch of "ah, but you see, I'm merely reacting to-" and just develop a perspective that doesn't always begin with an excuse.

Erek Tinker:

I had to stop at refusing to pay for Substack.

That kind of tells the whole story of "more left wing".

Micah DenBraber:

So, great article. Here from Dan Williams’ restack as someone who’s figuring out where I stand on the left. Moved further left recently via the Doomscroll crowd and I’d call myself a tech-skeptic. Also helped found The Hague EA freshman year and now passionately skeptical of utilitarianism and the movement (though I gleaned great relationships and heuristics I still draw on—always love chatting with EAs). Always curious and love discussion.

I resonate a ton with the mention of luck egalitarianism and against moralizing wealth, because I’ve lived it. But I disagree that there are no arguments against transhumanism in principle — I think that gives technology an empirical and normative pass it doesn’t deserve.

Not saying all enhancement is inherently bad, but I don't think we can separate principle from practice here. We're always embedded when making models, and I don't think we can rationally calculate the *most* good we can do because of the problems of nuance and immeasurability you touched on. Who's to say extending our lives won't make us bitterly miserable? Doesn't life only mean something because of death?

More practically: who decides what counts as good? Innovation already trends toward extraction and manipulation — data / experience as the new capital, attention economies to manipulate behavior to extract data, parasocial behavior and failing associational bonds bc of siloed algorithmic information ecosystems etc etc. Without major change to the extractive incentives underlying VC/finance capitalism, data brokerage, digital advertising, and the Californian Ideology, etc., I don’t see how the system churns out bioenhancements that ultimately benefit *all of us* rather than ingratiating *most of us* at a biological level with technics embodying an extractive logic that stymies human agency.

Technology isn’t neutral. The assumption that maximizing “innovation” will benefit human wellbeing feels very Andreessen techno-optimist to me — and I’m sure he believes funding AI startups for cheating tools and social media troll bots benefits humanity too. The question is always: who decides what good means, and for whom? And why would transhumanism prove any different?

What are your thoughts on the Techno-Optimist Manifesto?

Rafael Ruiz:

I really need to write a full academic paper or a substack post on transhumanism at some point! Because I have written several times that I think transhumanism in principle doesn't face any good objections, yet I haven't said why I believe that.

I think it helps to separate two kinds of worries in the enhancement debate. First, there are practice worries about imperfect technology, side-effects, inequality, exploitation, and foreseeable social pathologies. Second, there are in-principle worries that would remain even under ideal conditions, for example claims about the sanctity of “human nature,” dignity, giftedness, or the wrongness of altering ourselves. A lot of the academic exchange is basically: suppose enhancement were safe and fairly accessible. Would there still be a decisive moral objection? If an objection survives such idealization, it’s a deeper objection to transhumanism; if not, it’s more of a policy question. (That framing is pretty standard in the literature.)

On the in-principle side, I’m not convinced by the bioconservative line associated with people like Kass or Sandel. Their objections trade on a picture of the human body or human nature as having a special inviolability, and on moral intuitions about “playing God.” I think those intuitions are pretty shaky and can be explained by familiar biases like status quo bias and essentialism. Here's a reference of what I'm thinking of: https://ndpr.nd.edu/reviews/the-ethics-of-human-enhancement-understanding-the-debate/

And here's an excerpt of some of the stuff I say on this, in a draft from my PhD:

"I think their view [Kass and Sandel] faces difficulties. Here I will present two objections that I believe are decisive. One is that their view appeals to the status quo as if it were morally justified, when it is not. The second is that their view of moral autonomy is romantic and mistaken.

First, their "do not tamper" objection assumes that our current unenhanced moral psychology is morally privileged. It treats our evolved dispositions as the default that must be respected. I see no good normative reason for that assumption. Our current moral psychology is the product of natural selection plus cultural drift, not the product of moral justification. Evolution built us for small-scale, kin-biased, often violent societies. It did not build us to be fair, impartial, future-regarding agents in a global, interdependent world. Here is a simple way to see the point. Imagine a veil of ignorance in a Rawlsian sense. You must choose, without knowing your position, which moral psychology humanity will have. You know some options lead to high levels of sexism, racism, and parochial cruelty, and other options lead to lower levels. You do not know if you will be in the advantaged group or in the targeted group of society. Would you rationally pick the current package of human moral biases as the baseline you are willing to lock in, indefinitely?

I heavily doubt it. Under conditions of impartial choice, we would almost certainly select improved moral dispositions. We would want less aggression, less vindictiveness, more cosmopolitan concern, more ability to take the standpoint of vulnerable others, etc. The fact that evolution handed us something worse and more arbitrary is not, by itself, a reason to keep it. We currently live under the tyranny of biology and historically contingent institutions. If we lived under a powerful veil of ignorance and original position that freed us from biases, I don’t think we would choose the current human form, with its particular psychology and biases, as the normatively justified default. If we were to choose under ideal moral conditions, we would not endorse, say, giving zero moral weight to people across the globe, fully dismissing far-away others even beyond a minimum of sufficiency.

If that is correct, then if we forgo available measures to reduce prejudice or aggression, we are accepting to leave our biases and prejudices intact, either forever or for far longer than they would otherwise be, along with all the injustice and violence they cause in our current world. The result is that avoidable discrimination, cruelty, or even large-scale harms (like those born of tribalism and indifference to global issues) might continue for centuries. Sticking with an unenhanced moral capacity, one designed for life in small groups rather than a globalized world, can be grossly inadequate, even disastrous. Given our flawed world, we could use further help overcoming selfish tendencies that education, social pressures, laws, or moral reasoning struggle to overcome, even when combined. Making the choice of foregoing bioenhancement or AI assistants guarantees that avoidable injustice, cruelty, and large-scale harms (from sexism, racism, xenophobia, neglect of global poverty, etc.) will persist far longer than they would otherwise. So I think their stance should be characterized as a form of status quo bias and just world bias, an unjustified normative reverence for the way things happen to be (cf. Bostrom and Ord 2006). It treats the unenhanced state as the default good, and any change as a dangerous deviation, without offering a principled reason why. Normatively speaking, I don’t see why this should be the case.

The second objection appeals to autonomy. The idea is that moral agency only counts as genuinely valid if it proceeds from our "own" unassisted capacities, unaided by biomedical tweaks or AI support. External scaffolds are thought to corrupt this authenticity, making our choices somehow inauthentic or shallow.

This view of autonomy is, in my view, romantic and misleading. Human moral agency is already scaffolded, everywhere, all the time. None of us starts from a neutral, self-chosen moral standpoint. We are shaped by biology we did not choose (including aversion to pain, a sense of empathy, and a parochial and nepotistic bias to favor our family and friends). From childhood, we are socialized by parents, peers, schools, religions, media, laws, myths, rituals, and threats of punishment. We are praised and shamed into holding particular moral views. We internalize norms, often without argument. We are pressured to conform and punished if we do not. So our initial moral values and worldview don’t spring from a vacuum, or from a place that we should take as autonomous and initially morally justified. Offloading certain moral tasks onto external structures is just a continuation of this age-old interplay between person and context. The fact that we might use technology instead of social institutions doesn’t render it illicit.

If anything, carefully chosen moral offloading could improve our autonomy by correcting these biases. Just as we wear glasses to see more clearly, we could use tools to overcome moral failings that we recognize, such as narrowness of concern, cognitive biases, or weakness of the will. These aids could broaden our moral perspective and strengthen our resolve to do good. Most transhumanists also argue that whether we should take enhancements should be optional, so far from destroying autonomy, such technologies seem to usually increase it.

None of this is to say that moral offloading is a panacea or without any risks. Any new moral technology or institution might have trade-offs and unintended consequences that must be analyzed. The key is that we should weigh those trade-offs honestly in every specific case. The point is that refusing new scaffolds does not leave us in some pristine state of nature; it simply leaves in place the older scaffolding of our biology and childhood, which can be morally and cognitively inferior to what we could attempt to design. A person who rejects moral offloading isn’t an “independent” moral thinker standing above all influence; rather, they are leaning on the unexamined influences of their biology and culture, with all the moral arbitrariness that entails. The future of morality might be like Neurath’s boat, replacing the planks of human nature and our normative theories in a piecemeal fashion, where morality becomes detached from current human nature. There might be good precautionary principles for why this should be done with care, but their arguments don’t prove why this shouldn’t be done at all."

There *could* be better arguments for why we shouldn't bioenhance ourselves, but I think the arguments from transhumanism's interlocutors have so far been pretty weak.

Rafael Ruiz:

(Continued)

Now let me reply, although a bit quickly, to your other points:

RE: “Who’s to say extending our lives won’t make us bitterly miserable?”

Most defenders of radical life extension endorse something like indefinite voluntary extension. That is, not trapping people in existence. If someone's life becomes bad by their own lights, they can stop the medical intervention, and a humane society should allow assisted dying under appropriate safeguards. That makes the objection one about policy and welfare oversight (probably requiring some mental and physical wellness check, similar to the ones done for euthanasia today), not about life extension being intrinsically wrong.

RE: “Doesn’t life only mean something because of death?”

I think this is completely wrong. I don't think death is a necessary condition for meaning. The thought seems to rely on sliding from "death can *intensify* urgency or the prioritization of life projects" to "without death nothing can matter." But extension is not the same as removal of value: even in an open-ended life, people can still have relationships, projects, moral aims, and aesthetic and intellectual pursuits. Those look like the standard sources of meaning, and they don't obviously depend on a fixed endpoint. At most, finitude *contributes* to the sense of urgency.

Of course, more could be said here about particular pathologies we could develop when having longer lives. Yet we don't currently argue that we should euthanize people when they get old, even though the number of years that people are currently living is morally arbitrary. We're purely at the whims of biology.

I honestly think this line of argument is adaptive preferences (sour grapes) from people who have no other choice but to die of illness or aging. It feels like an insect trying to argue "why do humans need more than one year of life? One year feels like plenty to me!" (Similarly, when there were no good weight-loss drugs, many people argued for fat positivity, and then when the drugs became available, many of them may have jumped on such weight-loss drugs and abandoned the ideology.)

RE: "Who decides what counts as good?"

The idea is that the people themselves can make the choice. That is, the choice to enter into transhumanism tends to be put forward as opt-in.

I do think that just opt-in by itself isn’t enough, because indirect coercion is possible. We should design institutions so that enhancement remains genuinely voluntary and doesn’t become a positional arms race or a social requirement. But this is a governance question.

RE: The variety of bad technologies that you mention at the end of your post.

I don’t see why we should treat “technology” as a package deal. We can evaluate interventions one by one, under moral scrutiny and risk management. Blanket statements about technology seem like they will always be problematic. There will be biotechnologies that are bad and some that are good. There will be enhancements that are reckless or unjust, and others that are straightforwardly beneficial.

RE: My opinion on the techno-optimist manifesto.

As should have become somewhat clear, none of this commits me to naive techno-optimism, which I think is Andreessen's position. He has had me blocked on Twitter for years, and I think he's simply... not very smart. I have another half-written draft called "Effective Accelerationism is stupid and evil".

I think you can be pro-enhancement while also being strongly pro-risk-mitigation, in roughly the way Yudkowsky and other cautious transhumanists argue. So I am a techno-optimist, yet against Andreessen's Techno-Optimist Manifesto. I made another Substack post about why I think there's something like a 10% chance that AI makes humanity extinct within the century: https://themoralcircle.substack.com/p/my-pdoom-or-death-by-transformers

I also have another post, arguing that whether you're pro-tech or anti-tech should be considered a central aspect of your political orientation: https://themoralcircle.substack.com/p/pro-tech-vs-anti-tech-the-third-axis (Sorry for so much self-promotion!)

Cinna the Poet:

This needs some proofreading. ChatGPT can help you with that too.

Rafael Ruiz:

I'll ask it. But what stood out in particular as needing improvement?

(EDIT: Just corrected like 20 typos! Whoops...)