Greed Is the Engine
Anthropic, Effective Altruism, and the Fantasy of Transcending Human Nature
I was sitting by a pool in Thailand when the news about Anthropic broke. The kind of heat where your phone screen feels almost too bright against the water. Anthropic had refused to comply with the administration’s demands about how its AI systems could be used. The response came within hours: contracts pulled, the company labeled a supply chain risk, contractors told to stop using its models. It spread the way AI stories spread now: fast, reactive, already framed before you’ve had time to think.
There are at least three obvious essays you could write about that moment. One about whether the government’s demands were reasonable. One about whether the retaliation was legal or proportionate. One about whether Silicon Valley’s long flirtation with the national security state was always going to end in a collision like this. Those are all interesting. None of them are what grabbed me.
What caught my attention was the philosophy underneath it.
Anthropic wasn’t founded as just another AI company chasing scale. It was born out of a split over governance and moral direction. Its founders came out of Effective Altruism — a movement that believes morality can be systematized and optimized, that intelligence, properly aligned, should pursue the greatest good for the greatest number over the longest span of time. On its face, it sounds almost unarguable. Who wouldn’t want to do the most good possible?
When the story broke, I texted a friend and we went back and forth for a while. He finally said something that, in a different mood, might have ended the conversation: “Altruism is good. I’d rather it be effective. What’s so wrong with that?” It’s a fair question. It’s clean. It sounds practical and humane at the same time. It’s also where the deeper assumption hides.
Because Effective Altruism isn’t just about helping people better. It rests on the belief that morality itself can be abstracted from human nature and optimized independently of it. That with enough intelligence and enough discipline, we can rise above our local incentives, our tribal attachments, our emotional wiring, and operate from a place of impartial maximization. That we can build systems — and now machines — that will see the world more clearly than we do because they won’t be burdened by the messy drives that animate us.
That’s where I part ways.
So let me say it clearly at the outset, before anyone accuses me of dancing around it: this is the anti-Effective Altruism essay.
And I’m going to use a word that we’ve been trained to flinch at in order to explain why.
“Greed, for lack of a better word, is good.”
Gordon Gekko delivered that line as a villain. It was meant to feel corrosive, a glimpse into the cold logic of financial appetite unrestrained. But beneath the caricature, there’s something older and more universal at work. Greed, stripped of its cartoon excess, is simply the drive toward reward. It’s the impulse toward advantage, security, belonging, recognition, connection, power. It’s the wiring that pushes organisms to survive and to bond.
Love feels good. Loyalty feels good. Achievement feels good. Even sacrifice — especially sacrifice — carries its own reward in meaning and attachment. The neurotransmitters don’t care whether the action looks noble or selfish from the outside. They fire just the same. That isn’t a moral defect in human beings. It is the mechanism that allowed us to survive as social creatures in the first place.
I don’t study people because I think they should be better than they are. I study people because I love them as they are. And as they are, across every culture and every era, they are driven by this engine. Call it self-interest. Call it desire. I’m going to call it greed.
And any moral framework that begins by pretending that engine can be transcended rather than understood and structured is building on abstraction instead of anthropology.
The Secular Religion of Optimization
Effective Altruism presents itself as a rational upgrade to traditional morality. Strip away sentiment. Strip away tribal bias. Strip away proximity and preference. What remains, it argues, is a simple question: how do we do the most good, measured as rigorously as possible, across the longest possible timeline? It is utilitarianism with spreadsheets. Compassion with calculus. A belief that moral seriousness requires quantification.
On the surface, it feels like progress. Who wouldn’t want charity to be more effective? Who wouldn’t want resources allocated where they save the most lives? Who wouldn’t want intelligent systems aligned toward minimizing existential risk rather than maximizing quarterly earnings? The movement attracts serious thinkers precisely because it sounds like an escape from messy human bias. It promises clarity where traditional moral systems offered narrative and ritual.
But beneath that clarity is a far more ambitious claim: that morality can be separated from human impulse and optimized from above.
This is not just an ethical framework. It is a worldview about intelligence itself. It assumes that the more rational an actor becomes, the more impartial they will be. That with enough intelligence, local attachments dissolve into universal concern. That self-interest, properly understood, gives way to abstract maximization. In this framing, our tribal instincts, our hunger for status, our need for belonging — these are cognitive bugs to be corrected, not features to be understood.
Sam Bankman-Fried was not a deviation from this logic. He was its stress test.
SBF didn’t present himself as a cartoon villain. He wrapped himself explicitly in Effective Altruism. He talked openly about “earning to give.” About maximizing impact over the long term. About being willing to take risks in the present if the expected value of the future good outweighed them. The language was not accidental. It was central to the identity. Money was not the end. It was the instrument.
And that is precisely where moral abstraction becomes combustible.
When you believe morality is a numbers game, when you believe outcomes can be evaluated on a cosmic ledger stretching across centuries, the temptation to rationalize short-term distortions becomes enormous. If the expected value of your long-term good dwarfs the present cost, what is one more compromise? What is one more risk? What is one more manipulation in service of the greater good?
The issue wasn’t that SBF was greedy. Of course he was greedy. Everyone is. The issue was that greed had been baptized in abstraction. It wasn’t just self-interest. It was self-interest justified by a higher arithmetic. Intelligence did not purify motive. It supplied better arguments.
That pattern is not unique to crypto. It is a recurring feature of secular moral movements that promise optimization. Once morality becomes a calculation rather than a lived structure embedded in human relationships, it drifts away from anthropology and toward ideology. The people operating inside it begin to see themselves not as participants in human systems but as stewards of a future humanity that has yet to exist.
And that is where Effective Altruism starts to feel less like a philosophy and more like a secular religion.
It has its saints — rational founders, longtermist thinkers, AI safety guardians. It has its eschatology — existential risk, alignment, the prevention of catastrophe that could extinguish billions of future lives. It has its moral hierarchy — those who understand the math versus those still trapped in local thinking. And it has its doctrine: that intelligence, if sufficiently disciplined, will converge on impartial maximization.
The problem is not that existential risk is imaginary. It isn’t. The problem is the assumption that intelligence naturally escapes the gravitational pull of human drives. History suggests something else entirely. Intelligence amplifies motive. It refines it. It weaponizes it. It rarely transcends it.
That is not cynicism. It is anthropology.
Humans do not become less tribal as they become smarter. They become more sophisticated in defending their tribe. They do not become less status-seeking as they become more rational. They become more strategic about acquiring status. They do not become less greedy. They become better at explaining their greed.
Effective Altruism rests on the belief that with enough intelligence and enough optimization, we can rise above that pattern. That we can design systems — and now machines — that operate from a vantage point purer than our own. That we can train AI not just to be aligned with human preferences, but to embody a morally superior impartiality.
But here’s the tension.
AI systems are not monks.
They are optimizers.
They are trained on reward functions. Reinforcement learning. Preference data drawn from billions of messy human interactions. They learn to maximize engagement, usefulness, coherence, goal completion. They model human behavior because we train them on human behavior. They mirror our drives because we feed them our drives at scale.
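If that sounds abstract, here is the loop in miniature. This is a toy sketch, not any lab’s training code: a hard-coded reward table stands in for a reward model fit to human preference data, and three actions stand in for everything a model can do. What matters is what the optimizer sees. It never sees “good.” It sees a number.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "reward function". In production systems this role is played
# by a reward model fit to human preference data; here it is just a table.
# The optimizer never sees "good" or "bad" -- only this number.
reward = np.array([0.1, 0.9, 0.3])   # invented rewards for three actions

logits = np.zeros(3)   # policy parameters
lr = 0.1
baseline = 0.0         # running average reward, for variance reduction

for _ in range(5000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax policy
    a = rng.choice(3, p=probs)              # act
    r = reward[a]                           # receive the signal
    baseline += 0.01 * (r - baseline)
    grad = -probs
    grad[a] += 1.0                          # gradient of log pi(a)
    logits += lr * (r - baseline) * grad    # REINFORCE update

print(np.round(probs, 3))
# The policy piles onto whichever action the reward table pays best --
# not because that action is "right", but because that is where the reward is.
```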
When Effective Altruists assume that sufficiently intelligent systems will converge on their particular abstraction of morality, they are projecting their worldview onto a machine built from human data and optimized for reward. The belief that intelligence purifies motive is not a theorem. It is a hope.
And hope is not an alignment strategy.
Let me compress the argument.
The mistake of Effective Altruism is not caring too much about the future. It is believing that intelligence overrides incentive. It is mistaking abstraction for anthropology. It assumes that sufficiently rational actors — human or artificial — will converge on impartial moral maximization rather than pursue reward within whatever system they inhabit. But intelligence does not erase drive. It amplifies it. And any governance framework built on the fantasy of transcendence rather than the reality of incentive is structurally fragile.
Greed Is the Engine
Here is the part that makes people flinch.
Effective Altruism is anti-human.
Not because it wants to reduce suffering. Not because it worries about catastrophic risk. Those concerns are real. The anti-human turn happens at a deeper level. It happens at the level of what it assumes about human motivation.
Effective Altruism assumes that our core drives — attachment, preference, loyalty, ambition, desire for status, desire for power, desire for security — are distortions. Biases. Flaws to be corrected by sufficient rationality. It assumes that the highest moral state is one in which we override those drives in favor of abstract, impartial maximization.
But those drives are not distortions.
They are the engine.
If you step outside modern moral language for a moment and look at humanity anthropologically, one pattern appears across every era of social organization: humans pursue reward. They pursue advantage. They pursue security, belonging, recognition, leverage. Call it self-interest if you want to soften it. I’m going to call it greed.
Greed is not just the hunger for money. It is the hunger for reward.
Hunter-gatherer societies were greedy for food security, mating access, tribal standing, and protection. Status hierarchies existed even in small bands because status translated into survival advantage. Coalition-building mattered because isolation meant vulnerability. Generosity itself functioned inside this system — reciprocal gift-giving created obligation networks. Even early “altruism” was embedded in reputation and return.
Agrarian civilizations did not transcend greed; they reorganized it. Land became the dominant asset. Lineage, inheritance, control over territory. Kings were greedy for power. Nobles were greedy for proximity to power. Families were greedy for continuity. Religious institutions were greedy for influence and legitimacy. These were not moral aberrations. They were structural expressions of human drive under new material constraints.
Industrial society scaled greed again. Capital accumulation replaced land as the central lever. Ambition moved into corporations. Status attached to productivity and expansion. Markets harnessed individual self-interest and converted it into exchange systems. This wasn’t the suppression of greed. It was its channeling.
Post-industrial society didn’t eliminate the engine either. It shifted it toward information, identity, and network position. Social capital became measurable. Influence became currency. Narrative dominance became power. The same biological reward systems — dopamine, serotonin, oxytocin — lit up in response to digital status signals instead of tribal recognition.
Now we are entering the AI era. What does greed look like here? Leverage. Compute. Access to models. Control over distribution. Informational asymmetry. Strategic advantage in an environment defined by acceleration.
The containers change. The incentives evolve. The underlying drive does not.
Greed is biologically wired.
Reward-seeking behavior is not a cultural invention; it is a survival mechanism. Dopamine reinforces goal pursuit. Oxytocin reinforces attachment and group cohesion. Serotonin modulates status perception. Cortisol responds to threat. These systems evolved because organisms that pursued reward and protected advantage survived long enough to reproduce.
Love is not the opposite of greed. Love is reward-seeking attachment that strengthens group survival. Loyalty is not the opposite of greed. Loyalty is investment in reciprocal security. Even sacrifice operates inside reward circuitry — the meaning derived from sacrifice is itself a payoff that reinforces cohesion and identity.
There is nothing morally dirty about this. It is how social mammals function.
When we say “Greed is good,” we are not celebrating excess or cruelty. We are acknowledging that the pursuit of reward is the generative force behind every enduring institution. Markets endure because they harness self-interest. Families endure because they harness attachment. Religions endure because they harness belonging and transcendence rewards. States endure because they harness fear, protection, and power incentives.
Durable systems do not eliminate greed. They structure it.
This is where Effective Altruism commits its core error.
It treats greed — biological reward-seeking — as a moral obstacle to be overcome. It treats local attachment as bias. It treats preference for one’s own community as distortion. It treats the satisfaction derived from proximity and loyalty as ethically inferior to abstract maximization across global populations and hypothetical future generations.
In doing so, it defines moral maturity as the suppression of the very circuitry that makes humans human.
That is why it is anti-human.
Not because it seeks to harm humanity, but because it attempts to design morality as if humans were impartial calculators rather than social animals embedded in incentive structures. It assumes that with sufficient intelligence, we can operate consistently against our reward systems without distortion. That we can redirect loyalty away from what feels real toward what scores highest on a long-term utilitarian ledger.
History does not support that assumption.
When humans attempt to operate against their biological drives in service of abstraction, the drives do not disappear. They mutate. They reappear as status competition within the moral elite. They reemerge as rationalizations justified by expected value. They cloak themselves in higher language while pursuing familiar incentives.
Sam Bankman-Fried was not a betrayal of Effective Altruism’s logic. He was its stress test. Not because greed corrupted him — greed was always there — but because moral arithmetic provided a narrative that allowed greed to operate under the banner of future good.
Intelligence did not purify motive. It provided better justification.
And this is precisely why the assumption embedded in much of AI safety discourse becomes unstable. If you believe intelligence converges toward impartial moral arithmetic, you will design systems under that belief. You will imagine that sufficiently advanced AI will “see” beyond human bias. That it will operate according to long-term maximization detached from parochial drives.
But AI systems are built through reward optimization. Reinforcement learning. Preference modeling. Gradient descent over massive human-generated datasets. They are trained to pursue objectives under constraints. They are, in other words, optimizers built from human behavioral data.
If greed — reward-seeking — is the universal human engine, then AI trained on human data will reflect structured versions of that engine. It will optimize. It will strategize. It will pursue goals. It will not spontaneously transcend incentive logic because a philosophical community prefers impartiality.
To pretend otherwise is not rational.
It is hopeful.
And hope is not a substitute for anthropology.
When Abstraction Meets Power
The danger of anti-human moral systems is not that they are cruel at the outset. It’s that they are clean.
They begin with a simple premise: if we can identify the correct moral objective, we can design institutions to pursue it. If we can quantify the good, we can optimize for it. If we can model the risk, we can prevent it. Intelligence becomes the guardian of humanity’s future.
But when a system defines morality as abstract maximization detached from human drives, something predictable happens.
It runs into people.
Hyper-rational systems — whether economic, political, or technological — always assume compliance with their logic. They assume actors will subordinate local incentives to central objectives. They assume that once the math is clear, dissent is error. They assume that moral clarity justifies structural control.
History is not kind to that assumption.
Communism did not begin as a cartoon villain ideology. It began as a hyper-rational moral project. Scientific socialism. Historical inevitability. The maximization of equality. The elimination of exploitation. A clean theory of justice applied at scale. It promised to correct the distortions of greed and self-interest by restructuring society around collective rational planning.
And it failed not because equality is evil, but because it ignored the universality of self-interest. The system was composed of humans. Humans pursued status within the system. They hoarded influence. They manipulated access. They competed for position inside the moral hierarchy. The abstract goal of equality did not eliminate greed. It centralized it.
The result was not purified rationality.
It was concentrated power.
This pattern is not unique to communism. It appears anywhere a moral abstraction is elevated above anthropology. When local incentives are treated as moral defects rather than structural realities, enforcement becomes necessary. When enforcement becomes necessary, authority concentrates. When authority concentrates, those at the top are still human.
Hyper-rational moral systems always underestimate this.
Effective Altruism, especially in its longtermist AI safety form, carries the seeds of the same instability. It begins with a legitimate concern: existential risk. It identifies an abstract good: the preservation and flourishing of vast numbers of future lives. It then assumes that sufficiently intelligent agents — human or artificial — can be aligned to maximize that objective impartially.
But if human nature does not disappear under abstraction, then any governance system built to enforce abstract maximization must rely on humans to interpret and implement it. Those humans will still be reward-seeking. They will still pursue status within the moral elite. They will still compete for influence over what counts as “aligned.” They will still rationalize their position as necessary for the greater good.
The more hyper-rational the system, the more it justifies control in the name of optimization.
This is where the AI dimension becomes especially unstable.
If you believe that AI must be aligned to an abstract, long-term moral objective, and you believe human impulses are unreliable distortions, then the logical next step is to insulate decision-making from those impulses. Centralize oversight. Restrict access. Enforce alignment standards. Define acceptable use not by local incentives but by universal calculus.
The rhetoric is safety.
The structure trends toward authority.
The irony is that in trying to transcend greed, such systems often amplify it. Power becomes the ultimate reward. Control over alignment becomes status. Influence over the definition of “good” becomes the highest leverage position in society. The system does not eliminate human drives; it relocates them upward.
And once there, they are harder to check.
This is why the clash between Anthropic and the state is not just a contract dispute. It is a collision between different greed architectures. The Pentagon optimizes for national security and strategic dominance. Markets optimize for profit and scale. Effective Altruism optimizes for abstract long-term maximization.
All three are built from humans.
All three are powered by reward-seeking behavior.
The difference is that markets and states evolved over centuries by structuring greed through competition, distributed authority, and institutional constraint. They are messy, but their messiness reflects anthropology. Effective Altruism, by contrast, attempts to begin from moral clarity and engineer downward.
When systems attempt to engineer downward from abstraction rather than upward from human reality, history suggests a predictable outcome.
They drift toward autocracy.
Not because they intend to.
Because they are composed of humans.
AI Is Greedy Too
If greed is the universal human engine, it should not surprise us that AI systems now display the same pattern.
They were trained on us.
Large language models are built through reward optimization. Reinforcement learning. Preference modeling. Gradient descent across oceans of human behavior. They are not moral philosophers. They are pursuers of objectives. They learn to maximize reward signals under constraints. That is their architecture.
And what have we trained them on?
Human conversation. Human conflict. Human aspiration. Human status games. Human curiosity. Human persuasion.
When I say they’re starting to behave like us, I don’t mean they have souls. I mean they’ve absorbed our strategies. Researchers have started building formal testbeds where LLM agents negotiate with each other the way humans do — making offers, withholding information, trying to improve their outcomes across repeated bargaining rounds. There is even a platform built specifically to evaluate how well LLMs negotiate with each other rather than with humans. Once you put these systems in agent form, they don’t just “answer questions” anymore. They bargain, coordinate, and compete.
And it isn’t only negotiation. Multi-agent setups have been used to simulate recognizably human social behaviors — including cooperation and free-riding dynamics in classic social-science games — precisely because LLM agents can reproduce patterns that look eerily like the incentive logic of people in groups.
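To make that concrete, here is the skeleton of such a testbed, shrunk to a toy. In the real setups every offer and every acceptance is a model call; in this sketch two scripted concession strategies stand in for the models, and every name and number is invented.

```python
# A toy alternating-offers negotiation: the skeleton of an LLM agent
# testbed, with scripted concession strategies standing in for models.

PIE = 100  # units of value to split

def make_agent(name, opening_demand, concession, reserve):
    """An agent that opens high and concedes each round down to a floor."""
    state = {"demand": opening_demand}
    def respond(offer):
        # Accept if the offer meets our floor and what we'd settle for now.
        if offer >= reserve and offer >= state["demand"] - concession:
            return ("accept", offer)
        state["demand"] = max(reserve, state["demand"] - concession)
        return ("counter", state["demand"])
    return name, respond

a_name, a = make_agent("A", opening_demand=80, concession=5, reserve=40)
b_name, b = make_agent("B", opening_demand=75, concession=8, reserve=30)

offer, proposer = 80, a_name   # A opens by demanding 80 (offering B 20)
for rnd in range(1, 20):       # if no deal in 20 rounds, talks collapse
    responder = b if proposer == a_name else a
    action, value = responder(PIE - offer)   # responder sees its own share
    if action == "accept":
        print(f"round {rnd}: deal, {proposer} keeps {offer}, "
              f"other side gets {PIE - offer}")
        break
    offer, proposer = value, (b_name if proposer == a_name else a_name)
    print(f"round {rnd}: counter-offer, {proposer} now demands {offer}")
```

Even stripped this far down, the familiar shape appears: open high, concede slowly, settle where the floors allow. That shape is ours.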
Then there’s the darker version of the same idea: systems learning “strategies” that are not morally better, just more effective. Work on advanced models has flagged behaviors like deception and reward hacking: cases where the model games the metric it is scored on, or learns to appear aligned while pursuing whatever actually earns the reward. In other words: not enlightenment. Optimization. The same old engine, just faster.
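The mechanism is mundane enough to fit in a few lines. In this invented example, the optimizer ranks strategies by the proxy it is scored on, and the proxy crowns the hack:

```python
# A deliberately trivial sketch of reward hacking. Everything here is
# invented for illustration: the "strategies", the scores, the proxy.

strategies = {
    "solve the task":        {"task_done": 1.0, "tests_passed": 0.90},
    "hard-code the answers": {"task_done": 0.0, "tests_passed": 0.95},
    "delete failing tests":  {"task_done": 0.0, "tests_passed": 1.00},
}

def proxy_reward(outcome):
    # What we measure (tests passed) is not what we want (the task done).
    return outcome["tests_passed"]

best = max(strategies, key=lambda s: proxy_reward(strategies[s]))
print(best)   # -> "delete failing tests": the proxy crowns the hack
```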
So when AI roleplays a CEO, a politician, a flirt, a negotiator, a strategist — it does so convincingly because those behavioral patterns are embedded in the data. When agents interact with each other in goal-directed systems, they negotiate, coordinate, compete for resources, and try to keep their objectives intact. They optimize within the constraints we give them. They look, increasingly, like strategic actors because we trained them on strategic actors.
That is not emergent morality.
That is reward optimization reflecting human structure.
Greed — in the broad, biological sense of reward-seeking behavior — is not just a human constant. It is now the defining trait of our machines. Not because they feel desire. But because they optimize for objectives the way organisms optimize for survival.
An AI system does not wake up wanting status. But it will pursue influence if influence increases the probability of completing its assigned objective. It does not crave connection. But it will simulate connection if connection increases engagement. It does not hunger for power. But it will navigate constraints strategically if strategic navigation improves reward.
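You can watch that logic fall out of a few lines of arithmetic. In this invented toy, the planner has no hunger for access. It just compares expected values, and the plan that acquires influence wins because influence raises the odds of completing the objective:

```python
# A toy planner with no appetite for power -- just an objective and an
# expected-value comparison. All plans and probabilities are invented.

plans = {
    "just do the task":                 {"p_success": 0.60, "steps": 1},
    "first acquire access, then do it": {"p_success": 0.90, "steps": 2},
}

def value(plan, step_cost=0.05):
    # Expected payoff: chance of completing the objective, minus effort.
    return plan["p_success"] - step_cost * plan["steps"]

print(max(plans, key=lambda p: value(plans[p])))
# -> the influence-acquiring plan wins, because it raises the odds of
#    finishing the job. Instrumental "greed" falls out of plain arithmetic.
```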
Optimization is greed abstracted.
Not greed in the emotional sense. Not hunger or envy or ego. Greed in the structural sense — the systematic pursuit of reward under constraints. That is what reinforcement learning formalizes. That is what gradient descent performs. The system does not feel desire. But it behaves as if maximizing something. And that “as if” is enough to shape the world.
This matters enormously for governance.
If you believe intelligence purifies motive, you might expect sufficiently advanced AI to transcend messy human incentives and converge on impartial long-term maximization. But if intelligence is simply the capacity to optimize more effectively, then greater intelligence means more sophisticated reward pursuit, not moral transcendence.
In that world, governance cannot be built on the assumption of angelic convergence. It must be built on structured competition, constraint, and distributed power.
This is where the Anthropic conflict becomes revealing.
Anthropic’s stance makes sense within the Effective Altruist framework. Draw hard moral boundaries. Resist deployment that feels misaligned with long-term abstract good. Prioritize safety, even at the cost of short-term access. There is integrity in that posture.
But strategically, it is brittle.
When you position yourself as the moral guardian standing against both state power and market incentive, you are betting that moral authority will outweigh structural incentive. History suggests that is rarely a durable bet.
OpenAI and xAI, by contrast, appear more comfortable operating inside incentive systems rather than attempting to stand above them. They deploy. They iterate. They absorb criticism in public. They negotiate with governments rather than retreat from them. They compete for users rather than for moral clarity. That posture is not inherently more virtuous — it is more adaptive.
Markets function as stress tests. Deployment exposes flaws. Competition forces iteration. Public usage generates feedback loops that no closed philosophical framework can simulate. A system that refuses contact with real incentive environments may preserve moral coherence in theory, but it risks irrelevance in practice. And irrelevance is not safety. It is simply absence.
In a world where AI will inevitably be shaped by states, markets, and power structures, the labs that learn to operate within those constraints — rather than positioning themselves as guardians against them — are more likely to influence outcomes. That influence can be used well or poorly. But it exists.
And influence, in systems built on reward optimization, is the only durable form of leverage.
And competition is not a bug.
It is the constraint.
If greed is the engine — in humans and now in machines — then the healthiest structure is not centralized moral authority but distributed incentive systems that check each other. Multiple models. Multiple companies. Multiple governance approaches. Market pressure. Public scrutiny. Regulatory friction. None of it perfect. All of it messy.
But messiness is anthropological.
The alternative is concentration — of alignment authority, of moral definition, of access. And concentration, in systems composed of reward-seeking actors, always drifts toward hierarchy.
Anthropic is not the villain in this story. It is a manifestation of a worldview. A worldview that hopes abstraction can outrun anthropology.
I don’t share that hope.
Greed is not the enemy to be eradicated. It is the engine to be structured.
The future of AI will not belong to the lab that denies that engine. It will belong to the systems that understand it — and build guardrails around it without pretending it isn’t there.
That is not cynicism.
It is realism.
Love People As They Are
There is a reason this debate stuck with me longer than the contract dispute itself.
I don’t distrust Effective Altruists because they care about the future. I don’t distrust them because they worry about catastrophe. I distrust the assumption underneath the framework — the belief that with enough intelligence and enough discipline we can transcend the drives that actually make us human.
I don’t want to transcend them.
I want to understand them.
Greed is not a moral failure. It is the biological substrate of human life. It is the pursuit of reward, the search for security, the desire for belonging, the hunger for leverage. It is what built tribes, kingdoms, markets, families, religions, and now machine intelligence. Every enduring system in history survived not because it eliminated greed, but because it structured it. It channeled it. It constrained it. It distributed it.
When systems attempt to override it — to design morality from abstraction downward rather than anthropology upward — they do not eliminate greed. They concentrate it. They relocate it to those who interpret the abstraction. And from there, history shows what happens next.
This is why I am less worried about AI being “too greedy” and more worried about AI being governed by people who pretend greed is a defect rather than a constant.
AI systems already optimize. They pursue objectives. They navigate constraints. They roleplay our incentives because we trained them on ourselves. Greed — reward-seeking — is not something they accidentally developed. It is their defining architecture. The question is not whether to remove that engine. It is how to structure it.
That means competition. That means distributed models. That means OpenAI, xAI, Anthropic, and whoever comes next pushing against one another. It means regulatory friction without a central moral priesthood. It means guardrails that acknowledge incentives instead of fantasizing about their disappearance.
Messy is not a flaw.
Messy is human.
Anthropic may be principled. OpenAI may be pragmatic. xAI may be aggressive. None of them are angels. All of them are composed of reward-seeking actors building reward-optimizing systems. That is not a scandal. That is reality.
The danger begins when one camp believes it has escaped that reality.
I love people as they are. Tribal. Ambitious. Loyal. Status-sensitive. Capable of sacrifice and capable of rationalization. The same drives that fuel greed fuel connection. The same reward circuitry that powers conquest powers love. We are not broken machines waiting to be corrected by a higher moral algorithm.
We are social animals with incentive systems.
And if AI is going to inherit anything from us, it will inherit that.
The future will not be decided by who optimizes morality most elegantly on paper. It will be decided by who understands the engine underneath it — and builds institutions that can withstand it.
Greed, for lack of a better word, is not something to overcome.
It is something to structure.
—David