Cover art: a minimalist balance scale formed by two opposing speech bubbles, suggesting liberty versus authority.

On Liberty, Utilitarianism, and Other Essays

by John Stuart Mill, simplified

Originally published: 1859
Modernized: 2025

I — Introductory

This essay isn’t about the famous free-will debate—the so-called Liberty of the Will versus what used to be labeled Philosophical Necessity. It’s about something more down-to-earth and more politically explosive: civil (or social) liberty—what it is, and where the legitimate power of society over an individual should end.

People rarely state this question plainly, and even more rarely argue it in general terms. But it quietly shapes the biggest public fights of any era, because it’s always in the room even when nobody names it. And it’s hard not to think it will soon be recognized as the central political problem of the future. In one sense, none of this is new: humanity has wrestled with it for as long as we have records. What’s new are the conditions. As societies become more complex and “civilized,” the old answers stop fitting, and the topic demands a deeper, more fundamental treatment.

Liberty vs. Authority: the original fight

If you look at the early histories most people learn first—Greece, Rome, England—the dominant storyline is a struggle between Liberty and Authority. But in those older worlds, “liberty” didn’t mean what many people mean today. It meant protection against the government’s rulers—a shield against political tyranny.

Most of the time, rulers were assumed to be naturally opposed to the people they governed. Except in a few of the more democratic Greek city-states, the rulers were imagined as a separate force with their own interests: a single monarch, or a tribe, or a hereditary caste. They got power through inheritance or conquest; they didn’t hold it at the people’s pleasure; and the people often neither dared nor even wished to dispute their supremacy. The danger wasn’t that government existed—it was that government, by its nature, was a loaded weapon.

The logic went like this: if you don’t want the weak to be picked apart by countless small predators, you need a bigger predator to keep the smaller ones in check. But the “big predator” is still a predator. The chief vulture is just as likely to tear into the flock as the lesser ones. So people had to live in a constant defensive posture, watching for the beak and claws.

That’s why “patriots” (in the old sense) made their goal simple: set limits on what rulers are allowed to do. That limitation is what they called liberty.

They pursued it in two main ways:

  • Rights and immunities: Get rulers to formally recognize certain protected zones—political liberties or rights—that it would be a breach of duty to violate. If a ruler did violate them, people treated resistance, even rebellion, as justified.
  • Constitutional restraints: Build checks into the system so that major acts of power require the consent of the community, or of some body meant to represent the community’s interests.

Across most of Europe, rulers were sooner or later forced to accept at least some version of the first approach. The second was harder. And where it didn’t exist, creating it became the main project of those who loved liberty; where it existed partially, the project became making it stronger.

Still, as long as people were satisfied fighting one enemy with another—accepting a master so long as they had some real protection against his worst impulses—their ambitions didn’t reach much farther than “limit the ruler.”

A new idea: rulers as delegates

Then something shifted. Over time, people stopped assuming it was a law of nature that governors must be an independent power, inherently opposed to the governed. It began to seem far better for public officials to be delegates—temporary renters of authority, not permanent owners of it—removable whenever the public chose.

On this view, only elections and real accountability could guarantee that government power wouldn’t be used against the people. Gradually, this push for elective and temporary rulers became the leading goal of popular movements wherever they existed. It also displaced, to a large extent, the older obsession with merely limiting the ruler’s power.

As the battle to make rulers chosen by the ruled advanced, some people started to say: maybe we’ve been worrying about limits too much. Limits make sense when rulers are habitually hostile to the people. But if rulers truly represent the nation—if they’re identified with the public—then what’s there to fear? The nation doesn’t need protection from its own will. If rulers are genuinely responsible to the people and can be quickly removed, then the people can “trust” them with broad power, because that power is really just the nation’s power, concentrated into a convenient instrument.

This mood—more a feeling than a carefully defended theory—ran through much of the last generation of European liberalism, and on the Continent it still often dominates. Many Continental political thinkers treat any strict limits on government power as unnecessary, except when they believe a government has no right to exist in the first place. Something similar might have become common in England as well, if the circumstances that encouraged it had lasted.

The reality check: “self-government” isn’t that simple

But success, in politics and philosophy alike, has a way of exposing weaknesses that failure can hide. The claim “the people don’t need limits on their power over themselves” can feel self-evident when popular government is only an ideal—something you read about in history or imagine as a future dream.

Even the violent detours of the French Revolution didn’t automatically disprove it, because the worst horrors could be attributed to a usurping minority, or to the temporary convulsion of a society erupting against monarchical and aristocratic despotism, rather than to the ordinary workings of representative institutions.

Over time, though, democracy stopped being theoretical. A democratic republic came to cover a large part of the globe and became one of the strongest forces among nations. Elective, responsible government became a real, living fact—something the world could observe and criticize.

And that’s when people began to notice a problem hidden inside the comforting language. Phrases like “self-government” and “the people ruling themselves” don’t quite describe what actually happens.

  • The “people” who exercise power are not always the same individuals as the “people” over whom that power is exercised.
  • The “self” in “self-government” usually doesn’t mean each person governing themself; it means each person governed by everyone else.

In practice, “the will of the people” often means the will of the most numerous group, or the most active group, or simply the group that succeeds in being treated as the majority. That majority can absolutely want to oppress a minority. So the same basic precautions we take against abusive rulers are also needed against abusive majorities.

That’s why limiting government power over individuals doesn’t become less important just because the government is accountable to “the community.” In reality, it’s accountable to the strongest party within the community. This way of seeing things appeals both to careful thinkers and to those powerful groups in Europe who believe democracy threatens their interests. It has therefore spread easily. And in modern political discussion, the tyranny of the majority is now widely listed among the dangers society must guard against.

Social tyranny: the quieter, deeper threat

At first—and still, in common talk—people fear majority tyranny mainly because they imagine it working through official acts: laws, courts, police, penalties. But more reflective observers saw something worse. When society itself becomes the tyrant—society as a collective force pressing down on the individuals who compose it—its tools aren’t limited to government offices.

Society can carry out its own orders without passing a law. It can enforce its “mandates” through approval and rejection, reputation and shame, hiring and firing, inclusion and exile. And when society issues the wrong mandates—or issues mandates at all in areas where it has no business interfering—it can produce a kind of social tyranny that is often more frightening than political oppression.

Why? Because it usually doesn’t need extreme punishments to be effective. It leaves fewer escape routes. It reaches into the details of everyday life. And it can do something politics often can’t: it can pressure people not just to obey, but to become a certain kind of person—flattening individuality, stunting self-development, and trying to force every character into the same approved mold. It doesn’t just constrain conduct; it can, in effect, enslave the soul.

So protection against abusive officials is not enough. People also need protection against:

  • the tyranny of prevailing opinion and feeling
  • the social impulse to impose “normal” ideas and practices as rules on dissenters
  • the tendency to treat disagreement not as a fact of human diversity, but as something to be corrected, punished, or erased

There is, therefore, a real boundary to how far collective opinion may legitimately interfere with individual independence. Finding that boundary—and holding it against constant pressure—is just as necessary for a healthy human society as protection against political despotism.

The hardest part: where to draw the line

Most people won’t deny this principle when it’s stated in general terms. The real battle is practical: where exactly is the boundary? How do we make a fair adjustment between individual freedom and social control?

On this question, almost everything remains unresolved.

After all, much of what makes life valuable depends on restraining other people’s actions. We need rules. Some rules must be enforced by law. Many more are enforced by social opinion in areas where law would be inappropriate. The central question is: which rules, and on what basis? Outside a few obvious cases, humanity has made surprisingly little progress in answering it.

Different ages and different countries have answered differently—sometimes so differently that what one society treats as obvious, another sees as bizarre. Yet each society typically assumes its own rules are self-evident and self-justifying. That near-universal confidence is one of the strongest effects of custom. Custom doesn’t merely feel natural; it routinely masquerades as nature itself.

Custom also prevents people from feeling any need to justify the moral rules they impose on one another. In many societies it isn’t even considered necessary to give reasons—either to others or to oneself. People are encouraged to believe, sometimes even by those who want to sound philosophical, that their feelings on these matters are better than reasons, and make reasons unnecessary.

The working rule underneath a lot of moral judgment is simple: each person feels that everyone ought to act the way they—and the people they sympathize with—would like them to act. Almost nobody admits, even privately, that their standard is personal preference. But if an opinion about conduct isn’t supported by reasons, it’s hard to call it anything else. And if the “reasons” offered boil down to “many other people feel the same way,” then it’s still preference—just a crowd-sized preference.

To the ordinary person, though, that crowd-backed preference feels not only adequate but decisive. It becomes their main justification for many of their beliefs about morality, taste, and propriety—except where those beliefs are explicitly written into their religious creed. Even there, it often guides how they interpret the creed.

So moral judgments end up shaped by the full messy mix of forces that shape human desires about how others should behave:

  • sometimes reason
  • often prejudice or superstition
  • sometimes social affection
  • sometimes openly antisocial emotions—envy, jealousy, arrogance, contempt
  • most commonly, self-interest, both legitimate and illegitimate, along with personal fears and desires

Wherever one class dominates, a large share of a country’s morality grows out of that class’s interests and its sense of superiority. Think of the moral codes governing relationships like:

  • Spartans and Helots
  • slaveholders and enslaved people
  • princes and subjects
  • nobles and commoners
  • men and women

In many cases, these “moralities” were built to serve the interests and pride of the dominant group. And then, in a feedback loop, the resulting sentiments shape how members of the dominant group treat one another as well.

On the other hand, when a formerly dominant class loses power—or when its dominance becomes unpopular—the prevailing moral tone often takes on the flavor of an impatient resentment of superiority itself.

Another major source of enforced rules has been humanity’s servility toward what people believed were the preferences and dislikes of their earthly rulers, or of their gods. This servility is fundamentally selfish, but it isn’t usually hypocrisy. It produces real emotions—real hatred and disgust—and those emotions have fueled atrocities like burning “magicians” and “heretics.”

Of course, the broad and obvious interests of society do influence moral sentiment, and strongly. But often not because people reason their way to “this helps society.” More often the influence comes indirectly, through sympathies and antipathies that grew out of social arrangements. And feelings that have little to do with society’s real interests have been just as powerful in creating moral codes.

Why reformers so rarely defend freedom in principle

Put all this together, and you get a blunt conclusion: the rules society enforces—through law or opinion—have mostly been determined by the likes and dislikes of society, or of some powerful part of it.

Even people who were ahead of their time in thought and feeling usually didn’t challenge that fact as a matter of principle. They might clash with society over specific issues, but they tended to focus on asking, “What should society approve or condemn?” rather than asking, “Should society’s approval and condemnation rule individuals at all?”

In other words, they often tried to change what the crowd hated, rather than joining forces with all “heretics” to defend freedom as such. They fought for tolerance in their own case, not necessarily for a general right to dissent.

The main exception has been religious belief—and it’s an instructive exception. It shows, among other things, how fallible the so-called moral sense can be. A sincerely devout bigot’s theological hatred is one of the clearest examples of what people call “moral feeling.”

The first people who broke away from what called itself the Universal Church were usually no more willing to tolerate religious differences than the Church had been. But once the conflict cooled without any side winning completely, each church or sect had to settle for holding the ground it already possessed. Minorities, realizing they were unlikely to become majorities, were forced to ask for the right to differ from those they couldn’t convert.

So it is mainly on this battlefield—religion—that the rights of the individual against society have been asserted in broad, principled terms, and society’s claim to enforce conformity on dissenters has been openly challenged.

And even there, practice lags far behind principle. Religious liberty has been fully realized in very few places, and usually only where something else tipped the balance: religious indifference—a dislike of having one’s peace disturbed by theological warfare. Among religious people, even in the most tolerant countries, the duty of toleration is often accepted with unspoken reservations.

One person can handle disagreement about how a church is run, but draws the line at disagreement about doctrine. Another is “tolerant,” as long as you’re not Catholic or Unitarian. Another will accept anyone who believes in revealed religion. A few stretch their goodwill a bit farther—up to belief in a God and an afterlife—and no farther. The pattern is telling: wherever the majority’s convictions are still vivid and deeply felt, the majority still expects obedience almost as strongly as ever.

England is a good example of how this works in practice. Because of our particular political history, the pressure of public opinion often bears down harder than it does elsewhere in Europe, even while the pressure of law is, in many ways, lighter. English people tend to bristle at obvious, direct meddling by Parliament or the executive in private life. But that reflex doesn’t come mainly from a principled respect for individual independence. It comes from an older habit of seeing “the government” as representing interests opposed to “the public.”

For a long time, most people in England didn’t experience government power as their power, or government opinions as their opinions. Once they do—once the majority feels the state is simply its instrument—individual liberty will be just as vulnerable to invasion by government as it already is to invasion by social pressure.

Still, there remains a real readiness to push back against laws that try to control people in areas where the law hasn’t traditionally controlled them. That instinct is healthy overall. The trouble is that it’s often applied without much care. People object loudly to legal interference without asking the crucial question: Is this actually a legitimate area for law to regulate? So the same anti-interference feeling can be brilliantly protective in one case and badly misfired in the next.

Why? Because there’s no widely accepted principle that people use to test whether government interference is appropriate. Most of us decide case by case, guided by preference.

  • Some people, whenever they spot a good thing that could be done or a harm that could be fixed, immediately want government to take it on.
  • Others would rather put up with a great deal of social misery than hand one more corner of human life over to official control.

And in any particular debate, people tend to pick a side based on their general temperament, or how much they care about the specific issue, or whether they trust the government to handle it the way they’d like. Much more rarely do they argue from a consistent view about what government is for. The result is predictable: with no shared standard, each side is about as likely as the other to be wrong. Government is invoked where it shouldn’t be, and denounced where it should be, with roughly equal frequency.

This essay is built to supply a standard. I want to defend one simple principle that should govern, absolutely, how society deals with individuals through compulsion and control—whether the tool is the blunt force of law and penalties or the quieter but still powerful pressure of public opinion.

Here is the principle:

  • The only reason people—individually or collectively—are justified in interfering with someone’s freedom of action is self-protection.
  • The only legitimate purpose for exercising power over a member of a civilized community, against their will, is to prevent harm to others.

Your own good—physical or moral—isn’t enough. You can’t rightfully be forced to do something (or not do something) because it would be better for you, make you happier, or because other people think it would be wiser or more virtuous. Those may be excellent reasons to argue with you, warn you, persuade you, even plead with you. They are not reasons to coerce you—or to punish you if you refuse.

To justify coercion, the conduct being restrained has to be likely to cause harm to someone else. In other words, the only part of anyone’s behavior for which society can properly hold them accountable is the part that concerns other people. When an action concerns only the person themselves, their independence is—by right—complete. Over your own body and mind, you are sovereign.

Important limits: this doctrine is meant for human beings who have reached maturity. We are not talking about children, or people below whatever age the law sets for full adulthood. Those who still need care from others must be protected not only from outside injury but sometimes from their own actions.

For the same reason, we can also set aside societies in what you might call humanity’s “minority,” where the obstacles to spontaneous progress are so severe that there’s little real choice about how to overcome them. In those conditions, a ruler genuinely committed to improvement may be justified in using whatever means will reach an otherwise unreachable end. Despotism can be legitimate when dealing with “barbarians,” if—and only if—the aim is their improvement and the methods actually achieve it. Liberty, as a principle, doesn’t apply until people are capable of being improved by free and equal discussion. Before that, the best you can hope for is implicit obedience to an exceptional ruler—an Akbar or a Charlemagne—if you’re lucky enough to get one.

But once people have reached the point—long ago, in the nations we’re concerned with here—where they can be guided toward their own improvement by conviction and persuasion, then compulsion aimed at their own good no longer has any excuse. From that point on, coercion is defensible only for one reason: the security of others.

Why I’m not arguing from “abstract rights.” I’m deliberately not trying to win this case by appealing to some notion of rights that floats above usefulness. For moral questions, the final court of appeal is utility—but utility understood in the broadest way, grounded in the enduring interests of human beings as progressive creatures.

Those interests, I argue, justify external control over individual spontaneity only when someone’s actions affect other people’s interests.

If someone commits an act that harms others, there is, on its face, a case for punishment—by law when law can safely apply, or, when it can’t, by widespread moral disapproval. But there are also cases where society may rightfully compel positive action for the benefit of others—for example:

  • giving evidence in a court of justice
  • bearing a fair share of the burdens of common defense, or other necessary collective work
  • performing certain basic acts of personal beneficence when duty is obvious, like saving a life or stepping in to protect the defenseless from abuse

A person can harm others not only by what they do, but also by what they fail to do. In both cases, it’s fair to hold them accountable for the injury. Still, forcing people to prevent harm through inaction demands much more caution than punishing harm through action. Making people answerable for directly doing harm is the rule; making them answerable for not preventing harm is the exception. But the exception exists, and there are serious cases where it’s clearly justified.

In everything that concerns a person’s outward relations with others, they are—by right—answerable to those affected, and, when necessary, to society acting as the protector of those interests. Even then, there can be good reasons not to enforce responsibility in a given case. Those reasons must come from the practical facts of the situation, such as:

  • leaving the person to their own judgment will, on balance, lead them to act better than any available form of social control would
  • the attempt to control them would create harms worse than the harms it would prevent

When reasons like these block enforcement, something still has to fill the gap. The agent’s own conscience should take the empty seat of judgment and act as the guardian of the interests that lack external protection—judging itself more strictly precisely because it can’t be judged by others.

The core “private sphere.” There is a large region of human life where society, as distinct from the individual, has at most an indirect interest. It includes everything in a person’s conduct that affects only themselves—or, if it affects others, does so only with those others’ free, voluntary, and informed consent and participation.

When I say “affects only themselves,” I mean directly and in the first instance. Of course, what affects a person can ripple outward and affect others through them. I’ll address that complication later. But even granting that, this is the proper territory of human liberty. It includes three main freedoms:

  1. The inward domain of consciousness: the fullest freedom of conscience—liberty of thought and feeling, and absolute freedom of opinion and sentiment on every subject: practical or theoretical, scientific, moral, or theological.
  2. Liberty of tastes and pursuits: the freedom to shape your life to fit your own character; to do what you choose and accept the consequences—without interference from others, so long as you don’t harm them, even if they think your choices foolish, perverse, or wrong.
  3. Freedom of association: the liberty, within the same limits, to join with others for any purpose that doesn’t involve harm—assuming the participants are adults and neither coerced nor deceived.

No society is truly free—no matter what its constitution says—if these liberties are not broadly respected. And no society is completely free unless they exist in an absolute and unqualified form.

The only freedom that deserves the name is this: pursuing your own good in your own way, as long as you don’t try to deprive others of theirs or block their efforts to obtain it. Each person is the proper guardian of their own health—bodily, mental, and spiritual. Humanity gains more by letting people live as seems good to themselves than by forcing everyone to live as seems good to everyone else.

Why this needs saying anyway. None of this is new, and to some readers it may sound like a platitude. But it collides head-on with the dominant trend of opinion and practice. Society has spent as much energy—according to its own lights—trying to force people into its idea of personal excellence as it has trying to force them into its idea of social excellence.

Ancient city-states believed they had the right to regulate every part of private conduct, and many ancient philosophers supported that view, because they assumed the state had a deep stake in the complete bodily and mental training of every citizen. In small republics surrounded by powerful enemies—always at risk of foreign conquest or internal upheaval—that mindset may have made a kind of sense. They lived under constant peril, where even a brief lapse in discipline could be fatal, and they couldn’t afford to wait for the slow, long-term benefits of freedom.

In the modern world, several changes made such sweeping legal control harder. Larger political communities made close supervision less practical. And, above all, the separation of spiritual from temporal authority—placing the direction of conscience in different hands from the direction of worldly affairs—blocked law from reaching so deeply into private life.

But if legal interference decreased, moral repression often grew stronger. The machinery of social pressure has been used even more fiercely against deviations in matters that primarily concern the self than it has against deviations in purely social matters. Religion, one of the most powerful forces shaping moral feeling, has usually been driven either by the ambition of a hierarchy seeking to control every department of conduct, or by the stern spirit of Puritanism.

And even reformers who have most loudly rejected older religions have not always been any less eager to claim a right to dominate others spiritually. Think of M. Comte: his system aims to establish—more by moral pressure than by legal force—a despotism of society over the individual that goes beyond anything imagined by the strictest disciplinarians among the ancient philosophers.

Beyond the quirks of individual thinkers, there is also a broad, growing tendency to stretch society’s power too far over the individual—through opinion, and even through legislation. And since the general movement of modern change is to strengthen society and weaken the individual, this encroachment isn’t the kind of problem that naturally fades away. It’s the kind that becomes more dangerous over time. Human beings, whether acting as rulers or as fellow citizens, are strongly tempted to treat their own opinions and preferences as the rule everyone else should follow. That impulse draws strength from some of the best and some of the worst parts of our nature, and it is restrained mainly by one thing: lack of power. If power is increasing rather than shrinking, then unless we build a strong moral barrier against this mischief, we should expect it to grow.

A focused starting point: thought and expression. To make the argument easier to follow, I won’t begin with the entire thesis all at once. Instead, I’ll start with one branch of it—the branch where the principle I’ve stated is already partly recognized by common opinion.

That branch is liberty of thought—and you can’t separate it from the closely related liberty of speaking and writing. Most countries that claim religious toleration and free institutions accept these freedoms to some extent as part of their political morality. But the deeper reasons—both philosophical and practical—behind them are less familiar than you might expect, even to many people who shape public opinion. And once you truly understand those reasons, you’ll see they reach far beyond this one topic. That’s why a careful look at freedom of thought will serve as the best doorway into everything that follows.

If nothing I’m about to say is new to some readers, I hope they’ll forgive me. For three centuries this subject has been argued again and again—and I’m going to argue it once more.

II — Of The Liberty Of Thought And Discussion

By now, we can probably agree on one thing: nobody should have to defend freedom of the press as a basic safeguard against corrupt or tyrannical government. The idea that a legislature or a ruler—especially one whose interests don’t match the public’s—should get to decide which opinions people may hear, which arguments may circulate, and which doctrines are “allowed” has been argued against so thoroughly (and so successfully) that I don’t need to re-litigate it here.

Even so, it’s worth noticing something: in England, the law governing the press is still as servile in spirit as it was under the Tudors. In practice, though, it’s rarely enforced against political discussion—except during sudden panics, when fear of unrest makes ministers and judges behave badly. More broadly, in constitutional countries, governments usually try to control speech only in a specific situation: when suppressing an opinion lets them act as the instrument of the public’s intolerance.

So let’s make the most favorable assumption for censorship. Suppose the government is perfectly aligned with the people. Suppose it never uses force unless it sincerely believes it’s enforcing the public’s will.

Even then, the public still has no right to do this—either directly or through the government acting in its name. The power to silence people is illegitimate in itself. A “good” government has no more moral right to it than a “bad” one. In fact, it can be even more dangerous when it rides on the wave of majority feeling, because it comes wrapped in the comforting illusion that “everyone agrees.”

Here’s the core principle: if all humanity except one person held the same opinion, humanity would be no more justified in silencing that lone dissenter than that dissenter would be justified—if he had the power—in silencing everyone else.

Why? Because an opinion isn’t like a piece of private property whose only value is to its owner. If suppressing an opinion merely harmed one individual’s personal happiness, we might argue about scale—hurting a few versus hurting many. But the real damage of silencing expression is much larger. It’s theft from the entire human race:

  • from the living,
  • from future generations,
  • and especially from the people who disagree with the dominant view.

If the suppressed opinion is true, we lose the chance to replace error with truth. If it’s false, we lose something almost as valuable: the sharper, clearer understanding of truth that comes from seeing it collide with a strong opponent. Truth that never has to fight gets lazy—and so do the minds that hold it.

To see why this matters, we have to consider two separate possibilities:

  1. the silenced opinion might be true;
  2. even if it’s false, silencing it is still a serious harm.

1) The censored opinion might be true—and we’re not gods

Start with the obvious: an opinion that authorities try to suppress might actually be right.

Of course the people doing the suppressing will say it’s wrong. But they aren’t infallible. They have no special authority to settle the question for everyone, forever, and to deny everyone else the materials for judging it. Refusing even to hear an opinion because you’re “sure” it’s false is just another way of saying: “My confidence counts as certainty.”

And that’s what censorship always is at bottom: an assumption of infallibility.

The depressing part is that while people readily admit in theory that humans are fallible, they rarely behave as if that admission has consequences. Almost everyone knows, abstractly, “I can be wrong.” But very few build any safeguards into their thinking. They don’t seriously entertain the possibility that the opinion they feel most confident about might be exactly the kind of mistake they confess is possible.

People with absolute power—or anyone who is used to being deferred to—tend to feel that kind of certainty across nearly everything. People in more fortunate positions, who sometimes get contradicted and occasionally corrected, are usually less overconfident—but they still tend to treat as unquestionable the beliefs that are shared by their whole social environment, or by the people whose judgment they habitually trust.

And “the world,” for most of us, doesn’t mean humanity. It means:

  • our political tribe,
  • our religious community,
  • our social class,
  • our professional circle,
  • our friend group.

A person counts as unusually broad-minded if “the world” means something as large as their country or their era.

What makes this reliance on “the world” especially irrational is that most people are perfectly aware that other eras, countries, sects, churches, classes, and parties have believed—and still believe—the opposite of what their own circle treats as obvious. Yet they quietly hand responsibility for “being right” to their own local world and never let the arbitrariness of that choice disturb them.

It’s pure accident. The same forces that make someone a particular kind of believer in London would likely have made them a Buddhist or a Confucian in Beijing. Nothing about the person’s confidence proves correctness. It often proves geography.

And history screams the lesson: eras aren’t infallible any more than individuals are. Every age has held convictions that later ages judged not just false, but ridiculous. And it’s just as certain that some beliefs that feel “settled” today will be rejected by the future, as it is that we now reject many once-common beliefs.

2) “But we have to act”—the common reply, and the real difference

At this point, a predictable objection appears:

“Look, banning harmful falsehood isn’t claiming infallibility. Governments make judgments all the time. People are given judgment so they can use it. If we refused to act because we might be wrong, we’d do nothing—no duties fulfilled, no interests protected. That objection would apply to every action, and if it applies to everything, it can’t single out censorship.

“Besides: it’s the duty of individuals and governments to form the best opinions they can, carefully and conscientiously. And when they’re sure, it’s not virtue to hesitate—it’s cowardice. Why should society allow dangerous doctrines to spread just because earlier societies persecuted some truths? Governments make mistakes about taxes and wars too; do we therefore abolish taxes and refuse to fight any wars? We can’t have absolute certainty, but we can have enough confidence to live. We act on what we believe. And it’s no greater an assumption of certainty to stop bad people from corrupting society with harmful lies than it is to act on our own judgment in anything else we do.”

Here’s the answer: that argument blurs a crucial distinction.

There’s a huge difference between:

  • acting on a belief while allowing it to be challenged, and
  • acting on a belief by preventing it from being challenged.

You’re entitled to treat a belief as true for practical purposes only under one condition: you keep the door wide open for contradiction and disproof. Full freedom to argue against your view isn’t a luxury; it’s the very thing that makes your confidence rational. Without that freedom, a human being—limited, biased, and mortal—has no defensible basis for thinking they’re right.

Why people get things right at all: errors are fixable, but only in public

Look at the history of ideas, or even the everyday way people live. Why aren’t human opinions and human behavior even worse than they are?

It’s not because human understanding is automatically powerful. On most questions that aren’t self-evident, ninety-nine people out of a hundred can’t really judge well. And even the hundredth—the capable one—is only relatively capable. In every past generation, many of the most brilliant people believed things we now know are wrong, and did or approved things nobody today would defend.

So why, overall, do we see any real tilt toward reasonable beliefs and reasonable conduct? If the balance were truly terrible, human affairs would be—and would always have been—nearly hopeless.

The answer is that the human mind has one saving strength: our mistakes can be corrected.

But not by experience alone. Experience needs interpretation. Discussion tells us what experience means. Bad opinions and bad practices gradually give way to facts and arguments—but facts and arguments don’t work by magic. They have to be presented, explained, and pressed. Most facts don’t “speak for themselves” without someone drawing out their implications.

That means the value of human judgment depends on keeping the tools of correction always within reach. You can trust judgment only when it can be challenged.

Think about a person whose judgment you genuinely respect. How did they get that way?

  • They stayed open to criticism of both their beliefs and their conduct.
  • They listened to what could be said against them.
  • They absorbed what was fair.
  • They learned to identify what was mistaken and explain to themselves (and sometimes to others) why it was mistaken.
  • They understood that the only way to get near the whole truth on any subject is to hear what people of many different perspectives can say, and to study how the topic looks from different kinds of minds.

No one becomes wise any other way. It’s not just good advice; it’s built into the limits of human intelligence.

And this habit—constantly testing your own views against others—doesn’t make you paralyzed or indecisive. It does the opposite. It’s the only stable foundation for acting with justified confidence. If you’ve actively sought the strongest objections, faced the hardest questions, and refused to block out any light from any direction, you have a better claim to reliability than someone—or some crowd—who has never put their beliefs through that process.

Even saints get cross-examined

If the wisest people need open challenge in order to trust their own judgment, it’s not too much to ask the same from the public—a mix of a few wise and many foolish individuals.

Even the Roman Catholic Church, one of the most intolerant institutions in matters of doctrine, makes room for a “devil’s advocate” when canonizing a saint. The idea is simple: even the holiest person shouldn’t be honored until the strongest case against them has been heard and weighed.

The same logic applies to science. If even Newtonian physics weren’t allowed to be questioned, we couldn’t have the level of confidence in it that we do. The beliefs most worthy of our trust have one real safeguard: a standing invitation to the entire world to prove them wrong.

If nobody takes up the challenge, or if the challenge is tried and fails, we still aren’t perfectly certain. But we’ve done the best the human mind can do in its current state. We haven’t blocked truth’s route to us. And if the arena stays open, we can hope that if there’s a better truth available, it will be discovered when minds are ready to grasp it. Until then, we can rely on having reached as close to truth as our era allows.

That’s as much certainty as a fallible creature can have—and the only way to get it.

“Free discussion, but not that free” is nonsense

It’s odd how often people accept the logic of free discussion but complain about “pushing it to extremes,” as if the extreme cases are optional. They miss the point: if the reasons for free discussion aren’t strong enough to cover the hardest case, then they aren’t strong enough to cover any case.

It’s also odd how people imagine they aren’t claiming infallibility when they say: “Yes, debate anything that might be doubtful—but this particular doctrine must not be questioned, because it’s certain.”

Certain according to whom?

To call a proposition “certain” while there exists even one person who would deny it—if allowed—but who is not allowed, is to declare that you and your allies are the judges of certainty, and judges who refuse to hear the defense. That isn’t certainty. It’s power posing as certainty.

“It’s useful for society” doesn’t rescue censorship—it just moves the goalposts

In modern times—an age often described as lacking faith but terrified of skepticism—people frequently defend protected opinions less by insisting they’re true and more by insisting they’re necessary. The argument goes like this:

“People don’t just believe these doctrines because they’re correct; society needs them. Some beliefs are so beneficial—maybe even indispensable—that the government has a duty to uphold them, just like it protects other major social interests. In cases this urgent, we don’t need infallibility. We need reasonable assurance, backed by the general opinion of mankind. And anyway, only bad people would want to weaken such wholesome beliefs. So it’s fine to restrain bad actors and block practices that only they would want.”

This way of thinking tries to make censorship about usefulness, not truth, so it can avoid taking responsibility for claiming infallibility.

But it doesn’t actually avoid that responsibility. It just relocates it.

Because the usefulness of an opinion is itself a matter of opinion—debatable, uncertain, and in need of discussion just as much as the opinion being defended. You still need an infallible judge to declare an idea “harmful” in a way that justifies silencing it—unless the condemned view is given full freedom to defend itself.

And it won’t work to say, “Fine, the heretic can argue his view is harmless or useful, but he can’t argue it’s true.” Truth is part of usefulness. If we want to know whether a proposition should be believed, how could we possibly forbid discussion of whether it’s true?

In fact, in the minds of good people—not bad ones—no belief that is contrary to truth can be genuinely useful. How are you going to stop them from making that argument when they’re accused of wrongdoing for rejecting a doctrine that authorities call “useful” but that they believe is false?

Meanwhile, defenders of orthodox opinions never hesitate to use the truth-claim when it suits them. They don’t treat “utility” as separable from “truth.” On the contrary, it’s precisely because they think their doctrine is true that they insist belief in it is indispensable.

So you end up with a rigged conversation: one side may use “this is the truth” as the decisive argument, while the other side is forbidden to challenge that claim. Under those conditions, you can’t have a fair debate about usefulness. And in practice, when law or public pressure forbids people to dispute the truth of an opinion, it’s just as intolerant of denying that opinion’s usefulness.

Most people who argue for silencing an opinion won’t quite say, flat-out, that free discussion is unnecessary. Instead, they soften it. They’ll claim it’s “not absolutely essential,” or that suppressing it isn’t really a moral wrong—it’s just a regrettable but reasonable precaution.

To see what’s actually at stake, it helps to stop talking in vague abstractions and pick a hard case—the kind least favorable to the argument for free speech. So let’s choose the opinions that many people think are the most important to protect from doubt: belief in God and an afterlife, or widely accepted moral doctrines.

The moment you do that, a critic of free expression gets a big rhetorical advantage. They can say (and plenty of decent people will think it, even if they don’t say it): Wait—are these really the ideas you think aren’t certain enough to deserve legal protection? Are you saying that being confident in God is “claiming infallibility”?

But that’s not what I mean by “claiming infallibility.” Being deeply sure of something—even something sacred—isn’t the problem. The problem is deciding the question for other people while refusing to let them hear the best case on the other side. That move—I will settle this for you, and you don’t get to listen to dissent—is what deserves condemnation. And it deserves condemnation even when it’s done in service of what you personally hold most dear.

Here’s the principle, stated plainly:

  • You can believe an opinion is false.
  • You can believe it’s dangerous.
  • You can even believe it’s morally corrupt.

But if you use that belief—whether it’s only your private judgment or it’s backed by your whole society—to prevent that opinion from being heard in its own defense, you are acting as though you and your side cannot be wrong. You are claiming a kind of certainty that no human being is entitled to.

And notice something grim: the temptation to silence speech is strongest precisely when we label the speech “immoral” or “impious.” Those are the moments when suppression feels most righteous—and those are the moments when it does the most damage. History’s most shocking moral blunders are often committed in exactly this mood: one generation sincerely convinced it is protecting decency, while posterity looks back in disbelief.

This is also where you find some of the most infamous episodes in which law was used to crush the best people and the best ideas. The people were destroyed, sometimes completely. The ideas, sometimes, survived—and then, in a kind of bitter irony, later generations used those very ideas to justify persecuting the next set of dissenters, now accused of misreading or rejecting the “accepted” interpretation.

People can’t be reminded too often that there was once a man named Socrates, and his society—its laws and its public opinion—came into direct conflict with him. He lived in a place and time overflowing with extraordinary individuals, yet those who knew both the era and the man best handed him down as its most virtuous citizen. And we remember him as something even larger: a template for later moral teachers, a starting point for both Plato’s soaring ideals and Aristotle’s careful practicality—two fountains feeding much of later ethical philosophy.

And what did Athens do to that man? It executed him, after a legal conviction, for impiety and immorality.

  • Impiety, because he rejected the gods endorsed by the state; his accuser even argued he believed in no gods at all.
  • Immorality, because his teaching supposedly “corrupted the youth.”

There’s every reason to think the court believed those charges sincerely. In other words: they weren’t necessarily villains. They may have been, by their own lights, conscientious citizens doing their duty. And still they condemned to death as a criminal the man who, of all people then living, probably deserved humanity’s gratitude the most.

If we jump to the only other legal atrocity that can stand beside Socrates without feeling smaller, we arrive at an execution on Calvary, a little more than eighteen centuries ago. The man whose life made such an impression on those around him that the next eighteen hundred years worshipped him as God on earth—was publicly killed as what? A blasphemer.

Human beings didn’t merely misunderstand their benefactor. They took him for the opposite of what he was and treated him as a monster of irreligion. And now, ironically, the people who did it are remembered as the impious ones—precisely because of how they treated him.

Our modern horror at these events—especially the later one—often makes us unfair to the people who carried them out. Because the uncomfortable truth is: they don’t look like obvious monsters. They look like ordinary “good citizens” of their time—people with a full, even above-average, supply of the religious, moral, and patriotic feelings their society praised. Exactly the kind of people who, in any era (including ours), are likely to live respected lives.

Take the high priest who tore his robes at the words that, by his culture’s standards, counted as the worst possible offense. He was probably just as sincere in his outrage as modern respectable religious people are in theirs. And if many of us had lived then—born into the same community, absorbing the same assumptions—we would likely have acted the same way.

And if an orthodox Christian ever feels tempted to think the people who killed early Christian martyrs must have been worse than Christians today, it’s worth remembering one inconvenient fact: one of those persecutors was Saint Paul.

Let’s add one more example—arguably the most striking, if the weight of the mistake is measured by the wisdom and decency of the person who made it. If anyone in history had reason to think, I’m among the most enlightened people alive, it was Emperor Marcus Aurelius.

He ruled as absolute monarch over the civilized world of his day. And yet he lived with unusually strict justice—and, more surprisingly for someone raised in hard Stoic discipline, an exceptionally tender heart. The few faults attributed to him were mostly faults of softness and indulgence, not cruelty. His writings remain one of the highest ethical achievements of the ancient world, and in spirit they differ only slightly—if at all—from the most recognizable moral teachings of Jesus.

By character, Marcus Aurelius was “Christian” in almost everything except doctrine. He was more aligned with the Christian ideal of conduct than many rulers who later called themselves Christian.

And still, he persecuted Christianity.

Why? Not because he was vicious, but because—given what he believed—he thought he was doing his job.

He saw society around him as fragile and in bad shape. He believed it was held together, and kept from collapsing further, by shared reverence for the traditional gods and institutions. As the person responsible for the whole structure, he saw it as his duty to prevent it from falling apart. He couldn’t imagine what could replace those social “ties” if they were cut.

But Christianity, at least on its surface and in its early posture toward the old order, looked like a movement determined to dissolve those ties. So unless Marcus thought it was his duty to adopt Christianity himself, it seemed to follow—painfully, but logically—that it was his duty to suppress it.

And because Christianity’s theology did not appear to him to be true or divinely grounded—because the story of a crucified God did not seem credible to him—he could not foresee that this “unbelievable” foundation would, in time, become a powerful renewing force in the world. So the gentlest and most admirable of philosopher-rulers, acting under a solemn sense of responsibility, authorized persecution.

That is one of history’s most tragic facts.

It’s hard not to wonder how different the world’s Christianity might have been if the faith had become the empire’s religion under Marcus Aurelius rather than under Constantine. But it would be unfair to him—and untrue—to pretend he lacked any of the arguments later used to justify punishing anti-Christian teaching. Every one of them was available to him for punishing the spread of Christianity.

No Christian today is more convinced that atheism is false and socially corrosive than Marcus Aurelius was convinced of the same thing about Christianity. And if anyone was likely, by intelligence and moral seriousness, to “get it,” it should have been him.

So if you support punishing people for spreading certain opinions, ask yourself what you’re implying. Unless you honestly believe you are wiser and better than Marcus Aurelius—more informed, more intellectually elevated above your era, more devoted to truth—then you should hesitate before you join the crowd in claiming a shared certainty that history has already shown can lead good people into terrible acts.

When defenders of religious freedom point out that punishment can’t be justified without also justifying Marcus Aurelius, opponents sometimes retreat to a different line. Under pressure, they’ll accept the implication and say, with Dr. Johnson, that the persecutors of Christianity were right: persecution is a kind of trial by fire that truth should endure—and will endure—because legal penalties can’t ultimately defeat truth, even if they can usefully stamp out harmful error.

That argument is odd enough that it deserves a careful look.

If someone says, “Persecuting truth is acceptable because persecution can’t really hurt truth,” they can’t be accused of deliberately blocking new discoveries. But it’s hard to praise the “generosity” of their view toward the human beings who bring those discoveries to us.

Think about what it means to contribute a genuinely new truth: to show the world something it didn’t know but deeply needs; to prove it has been wrong about a major matter of earthly or spiritual importance. That’s among the greatest services a person can render. And in certain moments—like those of the early Christians or the Reformers—people who think like Dr. Johnson would even call that service the most precious gift imaginable.

And yet, on this theory, the proper “reward” for such benefactors is martyrdom—being treated as criminals, even the worst kind. That wouldn’t be a tragic mistake for humanity to regret. It would be the normal, justified order of things.

In fact, this view resembles an old legal custom from the Locrians: the person proposing a new law had to stand before the assembly with a noose around his neck, to be tightened immediately if the crowd rejected his proposal after hearing his reasons. If you defend a system that treats benefactors like that, you probably don’t value the benefit very much. And in practice, this attitude tends to belong to people who think new truths were nice once—but we’ve had enough of them now.

Besides, the popular saying that “truth always wins against persecution” is one of those comforting myths people repeat until it sounds like wisdom. Actual history contradicts it.

History is full of truths crushed by force. Even when they aren’t destroyed forever, they can be driven underground for centuries.

Look only at religion. Long before Luther, “reformations” erupted again and again—and were stamped out again and again. Arnold of Brescia was crushed. Fra Dolcino was crushed. Savonarola was crushed. The Albigensians were crushed. The Waldensians were crushed. The Lollards were crushed. The Hussites were crushed. And even after Luther, wherever persecution stayed relentless, it usually worked. Protestantism was uprooted in Spain, Italy, Flanders, and the Austrian Empire—and would likely have been uprooted in England too if Queen Mary had lived longer, or if Queen Elizabeth had died sooner.

Persecution fails only when the supposed heretics are too strong to be effectively persecuted.

No reasonable person should doubt that Christianity itself could have been erased from the Roman Empire. It survived and eventually dominated partly because persecution was intermittent—short bursts separated by long stretches in which Christians could teach and organize. It is sentimental nonsense to imagine that truth, simply because it is truth, has some magical advantage over error when facing prisons and execution stakes. People are often just as passionate for falsehood as for truth, and strong enough legal penalties—official or even social—can usually stop the spread of either.

Truth’s real advantage is different. When an idea is true, it can be extinguished once, twice, many times—but over the long run, someone will usually rediscover it. And eventually, one of those rediscoveries will land in a moment when conditions are favorable enough that the idea survives long enough to grow strong—strong enough to resist future attempts to silence it.

At this point someone will object: Come on—we don’t execute people for new opinions anymore. We aren’t like those who killed the prophets. If anything, we build monuments to them.

Fair enough: we generally don’t kill “heretics” now. And the level of punishment most modern societies would tolerate—however ugly—often isn’t severe enough to wipe an opinion out completely.

But we shouldn’t flatter ourselves that legal persecution has vanished. Laws still punish opinions, or at least their expression. And enforcement isn’t so unthinkable, even now, that it could never return in full strength.

Consider England in 1857. At the summer assizes in Cornwall, a man described as having an otherwise blameless life was sentenced to twenty-one months in prison for uttering offensive words about Christianity and for writing them on a gate.

Around the same time in London, at the Old Bailey:

  • Two people were rejected as jurors on separate occasions because they honestly said they had no religious belief; one was openly insulted by a judge and by counsel.
  • A third person, a foreigner, was denied justice against a thief for the same reason.

How could that happen? Because of a legal doctrine claiming that no one may give evidence in court unless they profess belief in God (any god would do) and in an afterlife. In practice, that treats unbelievers as legal outcasts—people outside the law’s protection.

The implications are chilling. If the only witnesses to a robbery or assault are unbelievers, the offender can act with impunity. Even if others were present, an offender can still escape if the decisive proof depends on the testimony of someone deemed “unfit” to swear an oath.

The assumption behind this rule is that an oath is meaningless coming from someone who doesn’t believe in an afterlife. Anyone who accepts that assumption displays real ignorance of history, because many unbelievers across the ages have been people of conspicuous integrity and honor. And anyone with even modest awareness of human reality knows how many people widely admired for virtue and achievement are, at least among friends, nonbelievers.

Worse, the rule defeats itself. It claims atheists must be liars—yet it accepts testimony from any atheist willing to lie about belief, and rejects only those honest enough to admit their unbelief rather than affirm a falsehood. A rule that is this self-exposed—so obviously absurd for its stated purpose—can survive only as a badge of hostility, a leftover piece of persecution. And it has an especially twisted feature: the "qualification" for suffering it is being clearly proven not to deserve it.

In the end, that rule—and the mindset behind it—is almost as insulting to believers as it is to unbelievers.

If you claim that anyone who doesn’t believe in an afterlife must be a liar, you’re basically saying the only thing keeping believers honest is fear—fear of hell, specifically. Let’s not insult the people who push rules like that by assuming that this picture of fear-driven honesty is drawn from their own idea of “Christian virtue.”

Still, these attitudes matter. Even when the old machinery of persecution has mostly been dismantled, little scraps of it linger. And sometimes they’re not so much evidence of a burning desire to persecute as they are a very English habit: taking a weird pride in loudly defending a bad principle long after you’ve lost the stomach to enforce it. Unfortunately, we can’t count on that reluctance lasting forever. Public opinion isn’t a calm lake; it’s a surface that keeps getting churned up, sometimes by genuine reforms, and sometimes by people trying to revive the worst habits of the past.

In practice, what gets celebrated as a “religious revival” often becomes, in narrower minds, a revival of bigotry. And in a society where intolerance is always sitting there in the emotional background—especially in the respectable middle of the culture—it doesn’t take much to turn that simmering judgment into active punishment of people who are seen as legitimate targets.

Here’s the key point: a country isn’t mentally free just because the law has stopped doing its worst. What really determines mental freedom is:

  • what people think about dissenters, and
  • how they feel entitled to treat them.

For a long time now, the main damage done by legal penalties hasn’t been the penalty itself. It’s that the law props up social stigma. And stigma can be brutally effective. In fact, it’s often more effective than the courts. That’s why, in England, it’s rarer to openly say what society condemns than it is, in many other countries, to say what the law might punish.

If you’re not financially independent, public opinion can hit you like a statute book. For most people, being shunned, blacklisted, or quietly excluded from opportunities can be as devastating as a jail cell. You might as well be imprisoned if you’re cut off from the ability to earn a living.

If your income and security don’t depend on anyone’s approval, then sure—you can survive the consequences of speaking freely. The worst you face is being talked about, judged, and treated coldly. That shouldn’t require saint-level bravery. And there’s no need to beg for sympathy on behalf of people who are insulated from real material harm.

But even if we’ve softened the punishment we impose on people who think differently, we may still be doing just as much damage—to ourselves.

Look at history. Socrates was executed, and yet Socratic philosophy rose and spread like sunrise across the intellectual world. Early Christians were thrown to lions, and yet Christianity grew into a massive institution that overshadowed older traditions and crowded them out.

Social intolerance works differently. It doesn’t usually kill anyone. It rarely wipes out an idea completely. What it does is more insidious: it pressures people to hide what they believe, or to keep their beliefs private and make no real effort to share them.

The result is a strange, stagnant landscape. In a culture like this, “heretical” opinions don’t visibly win or lose over decades. They don’t flare up, clash, and force society to think. They just smolder in small circles of studious people, never lighting up the larger world—whether with true insight or even with seductive error. Everything stays outwardly calm. The dominant views remain publicly unchallenged, and dissenters aren’t completely forbidden from thinking—as long as they keep their thinking contained.

It’s an easy way to buy “peace” in the intellectual world. No trials. No prisons. No public mess. Just a quiet expectation that you won’t rock the boat.

But that calm comes at a cost: the moral courage of the human mind.

When many of the sharpest and most curious thinkers decide it’s safer to keep the real foundations of their beliefs locked inside, and then publicly argue from premises they no longer accept, you don’t get a culture that produces open, fearless, coherent minds. You get two kinds of people:

  • Conformists, who repeat the standard lines because it’s easier, and
  • Careerists for truth, who might privately chase what’s real but publicly tailor their arguments to what an audience will tolerate.

And if someone refuses to become either of those, there’s a third escape route: they shrink their intellectual world. They focus only on topics that can be discussed without touching first principles—small, practical issues where you can talk policy and technique without asking what’s right, what’s true, or what kind of society we’re building.

But that’s backwards. Those small practical problems would mostly fix themselves if human minds were stronger, bolder, and more expansive. And they won’t be fixed well until minds are stronger and bolder. Yet the very thing that strengthens minds—free, daring speculation about the biggest questions—gets abandoned.

If you don’t think this enforced quiet harms anyone, start with the most obvious consequence: it prevents real discussion of unpopular views. Without open debate, you never get a full stress-test. Some mistaken ideas may be blocked from spreading—but they don’t disappear. They just linger.

And the biggest harm doesn’t fall on the “heretics.” It falls on everyone else.

The truly devastating effect of a culture that punishes inquiry is that it trains ordinary people to fear their own thinking. It stunts mental development. It teaches reason to flinch.

How much does the world lose because smart people with timid personalities won’t follow a bold line of thought to the end—because they’re terrified it might end somewhere that could be branded “irreligious” or “immoral”? We can’t even measure it.

Sometimes you meet someone like this: deeply conscientious, intellectually subtle, capable of real insight—yet spending a lifetime playing mental chess with themselves. They can’t silence their reasoning, but they also can’t accept where it points. So they pour all their ingenuity into trying to make conscience and logic fit neatly inside orthodoxy. And often they never fully succeed.

No one becomes a great thinker without accepting one basic duty: follow your mind wherever it leads.

And here’s a truth that makes many people uncomfortable: truth often gains more from the honest errors of someone who studies seriously and thinks for themselves than it gains from the correct beliefs of people who hold them only because they’ve never allowed themselves to question anything. That doesn’t mean error is good. It means that genuine thinking—done with preparation and integrity—moves understanding forward, while unexamined correctness leaves the mind inert.

But the point of free thought isn’t just to produce a few geniuses. It matters even more for the rest of us. Freedom of thought is what allows average people to reach the full mental stature they’re capable of.

You might still get a rare great thinker under conditions of intellectual slavery. But you will never get an intellectually alive population.

Whenever a society has briefly approached that kind of widespread mental energy, it’s because the fear of “forbidden speculation” temporarily lifted. When there’s an unspoken rule that first principles aren’t up for debate—when the biggest human questions are treated as settled and closed—you should not expect a culture of high mental activity.

History makes this obvious. No period has produced widespread intellectual vitality when controversy avoided the topics large enough to ignite real passion. When the biggest questions are off-limits, people aren’t stirred from the foundations. Even those with ordinary abilities never get the jolt that raises them into the dignity of genuine thinking.

Europe has seen this kind of intellectual ignition at least three times:

  • in the period right after the Reformation,
  • in the speculative movement of the late eighteenth century (especially on the Continent, among more educated circles), and
  • in the brief but intense intellectual fermentation of Germany during the era associated with Goethe and Fichte.

These periods differed wildly in what they believed. But they shared one crucial feature: the yoke of authority broke. An old mental despotism was thrown off, and no new one had yet taken its place.

Those surges made Europe what it is. Nearly every major improvement—whether in thought or institutions—can be traced back to one of these waves. And for some time now, those impulses have looked nearly spent. We shouldn’t expect a new start until we once again insist on our mental freedom.

Now let’s move to the second part of the argument. Put aside, for the moment, the idea that accepted opinions might be false. Assume they’re true. Even then, we still need to ask a crucial question: What happens to a true belief when it isn’t freely and openly challenged?

Because even if someone hates admitting it, they should be able to see this: no matter how true an opinion is, if it’s not fully, frequently, and fearlessly debated, people won’t hold it as a living truth. They’ll hold it as a dead dogma.

There’s a type of person—thankfully less common than it used to be—who thinks it’s enough for someone to agree confidently with “the truth,” even if they have no idea why it’s true and couldn’t defend it against even shallow objections. Once these people manage to get their creed taught by authority, they naturally conclude that questioning it can’t possibly do any good, and might do real harm.

When that mindset dominates, it becomes almost impossible for society to reject the prevailing opinion in a wise, careful way. People may still reject it—but they’ll do it rashly and ignorantly. That’s because you can’t actually seal off discussion forever. When doubts slip in, beliefs that were never grounded in genuine understanding tend to collapse under the first argument that merely sounds plausible.

But suppose that doesn’t happen. Suppose the true belief stays. Even then, if it stays as a mere prejudice—a belief held independently of reasons and insulated from argument—that isn’t how a rational person should hold truth. That isn’t knowledge. Truth held that way is basically just another superstition, clinging by accident to the right words.

If we believe human intellect and judgment should be cultivated—and Protestants at least generally do—then what better training ground could there be than the questions that matter so much we insist everyone have an opinion about them?

If “cultivating understanding” means anything, it means learning the grounds of what you believe. On subjects where it’s crucial to believe rightly, people should be able to defend their views against at least the standard objections.

Someone might reply: “Fine. Teach people the reasons for their beliefs. That doesn’t mean we need controversy. People learn geometry without hearing anyone deny the theorems, and they still understand the proofs.”

True. And that works in mathematics because mathematics is unusual: there’s effectively nothing coherent to say on the other side. The evidence is one-sided. There are no serious objections that need answering.

But on any subject where disagreement is possible, truth usually depends on weighing competing reasons. Even in science, the same facts can often support different explanations—think of a geocentric model versus a heliocentric one, or older chemical theories like phlogiston versus oxygen. To understand why one view is right, you have to understand why the rival view isn’t—and how we know that. Until you can do that, you don’t really understand your own belief.

And when we move into far more complex territory—morality, religion, politics, social life, everyday practical judgment—most of the work of defending any contested view is spent dissolving the appearances that make an alternative view look convincing.

One of the greatest speakers of the ancient world, second only to one, said he always studied his opponent’s case as hard as—if not harder than—his own. That habit isn’t just a courtroom trick. Anyone who cares about reaching truth in difficult subjects needs the same discipline.

Because if you know only your own side, you don’t actually know much—not even of your own side. Your reasons may be strong. Nobody may have refuted them. But if you can’t refute the opposing reasons—or if you don’t even know what they are—you have no rational basis for choosing between the views. The intellectually honest position would be to suspend judgment. If you don’t, then you’re not following reason; you’re following authority or personal inclination.

And it isn’t enough to hear your opponents described by your own teachers, in your teachers’ wording, paired with your teachers’ “refutations.” That doesn’t do justice to the opposing case, and it doesn’t bring it into real contact with your own mind.

To truly understand, you need to hear the opposing arguments from people who actually believe them—people who defend them sincerely and forcefully, using the best version of the case. You need to feel the full pressure of the strongest objections. Otherwise, you’ll never genuinely grasp the part of the truth that answers those objections.

This is why so many “educated” people aren’t educated in the deepest sense. Most can argue smoothly for what they believe, but their belief could be wrong for all they really know. They’ve never placed themselves, mentally, where a thoughtful dissenter stands. They haven’t asked what someone on the other side would say at their best. So they don’t truly understand even the doctrine they claim to hold.

They miss the connective tissue—the parts that explain and justify the rest:

  • how two facts that seem to conflict can be reconciled,
  • why one argument should outweigh another when both feel strong,
  • what considerations actually “turn the scale” for a fully informed mind.

That part of truth is rarely known except by people who have listened carefully and fairly to both sides, and tried to see each case in its strongest light.

So essential is this discipline, in fact, that if no serious opponents of important truths exist, we should invent them—and supply them with the strongest arguments a skilled devil’s advocate can muster.

To weaken all this, a critic of free discussion might say: “Sure, philosophers and theologians should know the deepest arguments on both sides. But ordinary people don’t need that. They don’t need to be able to unmask every clever fallacy. It’s enough that somebody out there can answer the objections, so nothing that could mislead the untrained goes unanswered. Simple people can learn the obvious reasons, trust authority for the rest, and take comfort in knowing that experts have handled the harder difficulties.”

Even if you grant this view everything its supporters could reasonably ask for—even if you assume people don’t need a deep understanding of a truth in order to believe it—the case for free discussion still doesn’t weaken. Because even this softer standard admits something crucial: people deserve rational assurance that serious objections have been answered.

But how can objections be answered if no one is allowed to say them out loud? And how can anyone know an answer is genuinely solid if critics aren’t allowed to show where it fails?

At minimum, the people tasked with resolving hard questions—philosophers, theologians, public intellectuals—have to confront the toughest versions of those objections. And they can’t do that unless the objections are stated openly, clearly, and as strongly as they can honestly be made.

How institutions try to dodge this problem

The Catholic Church, historically, developed a workaround. It split the world into two groups:

  • People who may accept doctrine by conviction (after engaging with arguments).
  • People who must accept it on trust.

No one is supposed to pick and choose what to believe, but the clergy—at least those considered reliable—are permitted to learn the opposing arguments so they can refute them, and are even praised for doing so. That can include reading heretical books. Ordinary believers generally cannot, except with special permission that’s difficult to obtain.

So the system treats knowledge of the “enemy’s case” as useful for teachers, while deliberately withholding it from everyone else. The result is an elite with more intellectual training, though not more intellectual freedom, than the general public. It’s a clever way to produce the kind of mental advantage the institution needs. Because while culture without freedom won’t produce a broad, generous mind, it can produce a sharp courtroom-style advocate—someone skilled at defending a position within fixed rules.

Protestant countries, at least in theory, can’t rely on that trick. Protestantism says each person is responsible for their own religious choice; you can’t offload that responsibility onto clergy. And in the modern world, there’s another practical obstacle: you can’t realistically keep books read by educated people out of the hands of everyone else. If society expects its teachers to know what they need to know, then writing and publishing must be free—without restraints.

The deeper harm: people don’t just forget the reasons—they forget the meaning

Suppose the damage done by suppressing discussion (when the accepted opinion is actually true) were limited to this: people forget why they believe what they believe. You might say, “That’s an intellectual problem, but not a moral one. The belief still shapes character in the right direction.”

But that’s not what happens. In practice, when there’s no discussion, people often lose not only the grounds of an opinion, but the meaning of the opinion itself.

The words remain, but they stop calling up real ideas—or they call up only a thin slice of what those words once carried. Instead of a vivid understanding and a living belief, you get:

  • memorized phrases repeated by habit,
  • and maybe, at best, the outer shell of the idea—the “husk”—with the subtle core gone.

This pattern fills an enormous chapter of human history, and it deserves serious attention.

Why beliefs go stale: the life cycle of doctrines

You can see this in almost every moral doctrine or religious creed.

When a doctrine is new—when it’s being created and argued into existence—it’s full of energy and meaning for the people who originate it and their early followers. The meaning stays intense, and often becomes even clearer, while the doctrine is still fighting for dominance against rivals.

Eventually, one of two things happens:

  • It wins and becomes the general opinion, or
  • it stops advancing but holds the ground it has gained.

Either way, the fight fades. Debate slows, then dies.

At that point, the doctrine becomes less something people choose and more something people inherit. Conversion becomes rare, and the doctrine stops occupying people’s minds as an urgent, living question. Believers no longer stay alert to defend their view against critics or to persuade outsiders. They settle into quiet acceptance. They avoid hearing arguments against their creed, and they stop pressing arguments on dissenters.

From that moment, the doctrine’s living power usually starts to decline.

That’s why teachers of every creed so often complain about how hard it is to keep believers feeling the truth they claim to accept—hard to make it sink into emotion and actually govern behavior. They don’t complain about that while the creed is still fighting for survival. In the struggle phase, even weak defenders know what they’re fighting for, what distinguishes their view from others, and why it matters. And during that phase you can always find people who have taken the core principles seriously—thinking them through in multiple forms, testing their implications, and letting them reshape character the way a true belief should.

But once belief becomes hereditary—received passively rather than actively—the mind stops exercising itself on the questions the creed raises. Then the drift begins:

  • people retain the formulas,
  • they give them a sleepy, dutiful assent,
  • and they act as though “believing” relieves them of the need to experience the belief, examine it, or make it real.

Gradually the creed stops connecting with a person’s inner life. And you get a common modern sight: a creed that sits outside the mind, like a crust that hardens over it. It doesn’t let new, living convictions enter, yet it does little for the mind or heart—except to stand guard and keep the interior strangely empty.

A vivid example: how most Christians treat Christian ethics

You can see how powerful doctrines can survive as dead beliefs—never truly felt or understood—by looking at how most believers relate to Christianity.

By “Christianity,” I mean what all churches and sects count as Christianity: the moral teachings and precepts in the New Testament. Nearly all professing Christians treat these as sacred and accept them as laws. And yet it’s barely an exaggeration to say that not one Christian in a thousand actually guides or tests their daily conduct by those laws.

What most people truly consult is something else: the customs of their nation, their social class, or their religious community.

So they carry two standards at once:

  1. A set of moral rules they believe came from infallible wisdom.
  2. A set of everyday judgments and habits that partially align with some of those rules, conflict outright with others, and on the whole strike a compromise between religious teaching and worldly incentives.

They praise the first standard. They obey the second.

Consider some of the teachings Christians say they believe:

  • the blessed are the poor, the humble, and the mistreated,
  • it’s easier for a camel to pass through the eye of a needle than for a rich person to enter heaven,
  • don’t judge, so you won’t be judged,
  • don’t swear oaths at all,
  • love your neighbor as yourself,
  • if someone takes your cloak, give your coat too,
  • don’t obsess over tomorrow,
  • if you want perfection, sell what you have and give to the poor.

Most Christians aren’t lying when they say they believe these. They do believe them in the way people believe what they’ve always heard praised and never heard seriously examined. But living belief—the kind that actually controls behavior—extends only as far as it is socially normal to act on it.

In full strength, these teachings are often used like ammunition: something you throw at opponents. And they’re also put forward, when convenient, as noble reasons for actions people already wanted to do. But if someone pointed out that these maxims demand countless things ordinary believers never even consider doing, the result wouldn’t be repentance or reflection. The person would mostly get labeled as one of those annoying types who “act holier than everyone else.”

So for most believers the doctrines don’t grip the mind. They aren’t a driving force. People respect the sound of the words, but the respect doesn’t travel from the words to the realities the words are supposed to name. When it comes time to act, they don’t ask, “What does this teaching require?” They ask, “How far do respectable people go with this?”—looking around for Mr. A and B to set the limit on how much obedience is expected.

Why early Christianity was different—and why that matters

It’s safe to say it wasn’t like this with the early Christians. If it had been, Christianity would never have expanded from a small, despised Jewish sect into the religion of the Roman Empire.

When outsiders said, “Look how these Christians love one another” (a comment you rarely hear now), it reflected something real: they felt the meaning of their creed far more vividly than later generations did. And that probably helps explain why Christianity today spreads so slowly beyond its historic strongholds, and why—after eighteen centuries—it remains largely concentrated among Europeans and their descendants.

Even among intensely religious people today—those who take doctrine seriously and attach more meaning to it than most—it often happens that the most active, motivating part of their belief is shaped less by the words of Christ and more by later figures like Calvin or Knox—people whose temperament and priorities feel closer to their own. Christ’s sayings, meanwhile, sit quietly in the mind, producing little effect beyond the pleasantness of hearing gentle, admirable phrases.

There are plenty of reasons why the distinctive doctrines that mark off a sect stay more alive than doctrines shared across sects. Teachers also work harder to keep those distinctive points vivid. But one reason is simple: the distinctive doctrines are challenged more often and have to be defended against open opponents. Once there’s no enemy in the field, teachers and learners alike tend to fall asleep at their post.

This isn’t just religion: it happens to “life wisdom,” too

The same pattern shows up in all traditional teachings—not only in morals and religion, but in everyday prudence and “life knowledge.”

Every language is packed with general observations about how life works and how to live well:

  • everyone knows them,
  • everyone repeats them,
  • everyone nods along as if they’re obvious truths.

And yet most people don’t truly learn what those sayings mean until experience—often painful experience—forces the meaning into focus. How often, after a shock or disappointment, does someone suddenly remember a proverb they’ve heard forever and think, “Now I get it”? If they had felt its meaning earlier the way they feel it now, they might have avoided the disaster.

To be fair, not all of this is caused by the lack of discussion. Some truths can’t be fully understood until life makes them personal. But even for those truths, people would understand far more—and what they did understand would sink much deeper—if they were used to hearing the matter argued both ways by people who actually grasp it.

One of humanity’s most dangerous habits is this: we stop thinking about an idea once it stops feeling doubtful. That habit accounts for half our mistakes. Someone once called this “the deep sleep of a settled opinion.”

Do we need disagreement for truth to stay alive?

At this point, someone might object:

Do we really need ongoing disagreement for real knowledge? Does some portion of humanity have to keep being wrong so that anyone can feel the truth? Does belief become lifeless the moment it becomes widespread? Do we only understand a proposition if some doubt remains? And once everyone accepts a truth, does it somehow die inside us?

No. That’s not my claim.

As humanity progresses, more and more doctrines will become settled—no longer seriously disputed. In fact, you could almost measure human well-being by how many important truths have become uncontested, and by how weighty those truths are. The gradual ending of major controversy, one topic after another, is a normal part of how opinion consolidates. When the opinions are true, that consolidation is healthy; when the opinions are false, it’s dangerous and toxic.

Still, even though the narrowing of disagreement is in one sense inevitable and in another sense necessary, we don’t have to pretend every consequence is good. Losing a major tool for keeping truth intelligently and vividly grasped—the tool provided by having to explain it to critics or defend it against challengers—is a real cost. It’s not enough to outweigh the benefits of universal acceptance, but it’s not a minor drawback either.

When that advantage disappears, I’d like to see educators and leaders deliberately create a substitute—some method that keeps the hard questions and apparent difficulties present in a learner’s mind, as if a sharp dissenting opponent were right there pressing the challenge and demanding an answer.

We used to have tools for this—and we’ve let them go

Instead of building such substitutes, we’ve mostly abandoned the tools we once had.

Take Socratic questioning, shown at its best in Plato’s dialogues. It was designed precisely for this job. It worked as a kind of disciplined negative discussion: it didn’t start by handing you a new doctrine, but by pushing you to see that you didn’t really understand the slogans you’d absorbed. Its goal was to bring someone—who had merely inherited common opinions—to a moment of clarity: “I don’t actually know what I mean.” And once you felt that ignorance, you could be guided toward a stable belief grounded in a clear grasp of both what the doctrine means and what evidence supports it.

The medieval school disputations aimed at something similar. They tried to ensure that a student understood their own position and, necessarily, the opposing position—and could state the reasons for one and answer the other.

Those exercises had a fatal flaw: their premises leaned on authority more than reason. As mental training, they were weaker than the powerful dialectical method that shaped Socrates’ intellectual heirs. Still, the modern world owes more to both practices than it usually admits. And our current educational methods don’t really replace either.

Someone who learns only from teachers or books—even if they avoid the trap of pure memorization—has no built-in pressure to hear both sides of a question. As a result, even among serious thinkers, it’s surprisingly rare to actually know both sides well. And the weakest part of most people’s defense of their own opinion is exactly what they intend as their reply to opponents.

Our age has even developed a fashion for sneering at negative logic—the kind of thinking that identifies weak arguments and bad practices without immediately offering a shiny replacement truth.

Negative criticism—pointing out what’s wrong, poking holes, stress-testing assumptions—would be a pretty sad “final product” if that’s all we ever got. But as a tool for reaching real understanding and convictions that deserve to be called convictions, it’s hard to overvalue. Until people are deliberately trained in this habit again, we shouldn’t expect many truly great thinkers, and the overall level of intellect will stay low everywhere except in fields like math and the physical sciences, where rigorous argument is built into the culture.

On almost any other subject, a person’s opinions only rise to the level of knowledge if they’ve done one of two things:

  • Had their views seriously challenged by other people, so they were forced to defend, revise, or rebuild them.
  • Put themselves through the same mental grind on their own—the kind you’d have to do if you were debating smart opponents in public.

That kind of disciplined, argumentative thinking is both essential and hard to manufacture when it’s missing. So when it shows up naturally—when people are actually willing to contest the “accepted” view—it’s absurd to treat that as a nuisance we can do without. If anyone is willing to challenge a received opinion (or would be willing, if the law and social pressure didn’t punish them for it), we should be grateful. We should listen with an open mind. And we should be relieved that someone is doing for us—at their own cost—what we ourselves would otherwise have to do with far more effort, if we care at all about either the certainty or the aliveness of what we believe.


Why Disagreement Helps Even When No One Is “Totally Wrong”

There’s another major reason diversity of opinion is valuable, and it will stay valuable for a long time—at least until humanity reaches an intellectual maturity that, honestly, still seems unimaginably far away.

So far, we’ve been working with two familiar scenarios:

  1. The mainstream view is false, and the dissenting view is true.
  2. The mainstream view is true, but it only becomes fully understood—and felt—when it has to fight off a plausible opposing error.

But there’s a third case that’s even more common: both sides contain parts of the truth.

In that situation, the dissenting view matters because it carries the missing piece—the part the mainstream view leaves out.

Here’s how it usually works:

  • Popular opinions about things you can’t directly see or measure are often true as far as they go, but they’re rarely the whole truth. They tend to be a partial truth that’s been stretched, warped, and separated from other truths that should have kept it balanced.
  • Heretical or unpopular opinions are often those neglected truths breaking loose—either trying to reconcile with what the mainstream already has right, or else squaring off against it as an enemy and declaring themselves the entire truth.

That second pattern—each side insisting it has all of reality in its pocket—is still the most common, because human thinking is usually one-sided, and genuinely many-sided thinking is the exception. That’s why even “revolutions” in opinion often look less like a careful upgrade and more like a swap: one chunk of truth goes down as another rises.

Even what we call progress tends to substitute one partial truth for another instead of adding and integrating. The “improvement,” such as it is, is that the new fragment is more urgently needed and better suited to the moment than the fragment it replaces.

That’s why, given how partial prevailing opinions typically are—even when they’re grounded in something real—any view that captures some missing portion of the truth should be treated as precious, even if it arrives tangled up with errors and confusion. A sensible judge of human affairs won’t get indignant because someone who forces a neglected truth into the spotlight happens to miss other truths we already see. On the contrary: as long as popular truth stays one-sided, it’s often useful that unpopular truth comes with one-sided champions too. They’re usually the ones with enough energy to make a complacent public pay attention to the fragment of wisdom they’re shouting—often as if it were the whole thing.


Rousseau as a Useful Explosion

Take the eighteenth century. Educated people—and most of the uneducated who followed their lead—were practically intoxicated with admiration for “civilization” and the triumphs of modern science, literature, and philosophy. They wildly exaggerated how different modern people were from ancient people, and they comforted themselves with the belief that the difference was entirely in modern humanity’s favor.

Then came Rousseau’s paradoxes. They landed like explosives. They shattered that tight block of one-sided admiration and forced people to reassemble their thinking in a better, more complex shape.

This doesn’t mean Rousseau was “more right” overall than the mainstream. In fact, the mainstream opinions were closer to the whole truth: they contained more solid, positive truth and far less outright error. And still, Rousseau carried something the mainstream badly lacked—truths that the fashionable story had suppressed.

Those truths didn’t vanish when the intellectual storm passed. They settled into the culture like sediment:

  • the higher value of simplicity of life
  • the way artificial society—with its constraints and hypocrisies—can weaken people and rot their moral instincts

These ideas have never fully disappeared from educated minds since Rousseau wrote. Over time they’ll do their work, even if, right now, they still need to be insisted on as strongly as ever—and, more than that, insisted on through actions, because mere talk on this topic has nearly worn itself out.


Politics: Why We Need Competing Sides

Politics offers an even clearer example. It’s almost a cliché to say that a healthy political life needs both:

  • a party of order/stability, and
  • a party of progress/reform

And that remains true until one side grows broad-minded enough to become a party of both—able to tell what should be preserved from what should be cleared away.

Each style of thinking is useful largely because of what the other lacks. And, crucially, the pressure from the opposing side is what keeps each one from drifting into irrationality.

Unless we allow genuinely free expression—and strong advocacy—on both sides of the big practical tensions of life, we don’t get balance. We get a seesaw: one side rises, the other falls. Consider how many enduring oppositions keep showing up in public life:

  • democracy vs. aristocracy
  • property vs. equality
  • cooperation vs. competition
  • luxury vs. abstinence
  • sociality vs. individuality
  • liberty vs. discipline
  • and so on

In real life, truth on these issues usually isn’t a matter of choosing one pole and rejecting the other. It’s a matter of reconciling and combining them—finding a workable adjustment. Very few people have minds roomy and impartial enough to do that accurately in their heads. So society ends up doing it the blunt way: through a struggle between opponents fighting under rival banners.

And on these open questions, if either side deserves not just tolerance but active encouragement, it’s typically the side that happens to be in the minority at that time and place. Why? Because the minority opinion often represents the interests being neglected—the part of human well-being that’s at risk of getting less than its share.

To be clear, I’m not claiming that this country is currently intolerant about most of these disputes. I’m using them as familiar examples to show something broader: given the current state of human intellect, only diversity of opinion gives us any real chance of fair consideration for all sides of the truth. Whenever you find people who break the apparent unanimity of the world on any subject—even if the world is right—it’s usually a sign that the dissenters have something worth hearing. And truth would lose something if they were forced into silence.


The “Christian Morality Is the Whole Truth” Objection

Someone might object: “Fine, maybe most popular principles are partial truths. But some received principles—especially on the highest and most vital subjects—are more than half-truths. Christian morality, for example, is the whole truth about morality. If someone teaches a morality that differs from it, they’re simply wrong.”

This is the most important version of the objection in practice, so it’s the best test of the general rule. But before declaring what Christian morality is or isn’t, we should be clear about what we mean by it.

If by “Christian morality” you mean the morality of the New Testament, it’s hard to see how anyone who has actually drawn their understanding from the text itself could think it was presented—or intended—as a complete moral system. The Gospels repeatedly assume a pre-existing moral framework and focus on correcting it in certain particulars or lifting it to something broader and higher. And they speak in very general terms—often too general to be interpreted literally, and more like poetry or powerful rhetoric than like a carefully drafted legal code.

That’s why no one has ever been able to extract a full ethical doctrine from the New Testament alone without filling in the gaps from the Old Testament—that is, from a system that is indeed elaborate, but also barbarous in many respects, and designed for a barbarous people.

Paul—who explicitly opposed this Judaizing way of interpreting and expanding the doctrine—still assumes a pre-existing morality too, namely the moral ideas of the Greeks and Romans. Much of his guidance to Christians is essentially an adaptation to that world, even to the point of giving what looks like approval to slavery.

What people commonly call “Christian” morality would be more accurately called theological morality. It wasn’t produced by Christ or the apostles. It emerged later, gradually constructed by the early Catholic Church over the first several centuries. And while modern thinkers and Protestants haven’t accepted it wholesale, they’ve altered it far less than you might expect. Often they simply removed certain medieval additions—then each group replaced them with new additions shaped by its own temperament and priorities.

I’m not denying the debt humanity owes to this moral tradition and its early teachers. But I won’t hesitate to say that it is, on many important points, incomplete and one-sided—and that if ideas and feelings not endorsed by it had not also helped shape European life and character, the world would be worse than it is now.

So-called Christian morality carries the marks of a reaction—a pushback against paganism. Its ideal is:

  • more negative than positive
  • more passive than active
  • innocence more than nobleness
  • avoiding evil more than energetically pursuing good

Its “don’t” tends to drown out its “do.”

In its fear of sensuality it turned asceticism into an idol, later softened into a kind of legalistic respectability. It offers heaven and hell as the central motives for virtue—falling below the best ancient ethics, and nudging morality toward selfishness by tying duty to personal reward and punishment. It detaches a person’s sense of obligation from the real interests of other people, except where helping them can be reframed as enlightened self-interest.

It’s also, at its core, an ethic of passive obedience: submit to whatever authorities exist. Yes, you needn’t obey when commanded to violate religion—but you must not resist, and certainly must not rebel, no matter how much injustice is done to you.

Meanwhile, the best pagan moralities gave too much weight to duty to the state, sometimes crushing individual liberty. But in purely Christian ethics, that whole domain of public duty is barely recognized. A striking maxim about public responsibility—roughly, that a ruler sins against God and the state if he appoints someone to office when there is a better-qualified person available—appears in the Koran, not in the New Testament. In modern morality, such recognition of public obligation as we do have comes from Greek and Roman sources, not Christian ones. Even in private life, much of what we value—magnanimity, high-mindedness, personal dignity, even the sense of honor—comes from the human, secular side of education. It could not have grown from a moral standard that treats obedience as the only acknowledged virtue.


What This Does (and Doesn’t) Claim About Christ’s Teachings

None of this requires the claim that these defects are unavoidable in every possible conception of Christian ethics, or that the missing elements of a complete morality can’t be reconciled with Christianity. Still less does it require an attack on Christ’s own teachings.

I believe the sayings attributed to Christ are exactly what the evidence suggests they were meant to be. I don’t see them as irreconcilable with anything a comprehensive morality needs. Everything excellent in ethics can be brought within their spirit without doing greater violence to their language than theologians have already done when trying to squeeze from it a full practical code of conduct.

And yet that’s entirely consistent with believing that those sayings were meant to contain only part of the truth. Many essential elements of the highest morality are not provided for—and not meant to be provided for—in the recorded teachings, and they have been pushed aside in the ethical system that the Church later built on top of those teachings.

Given that, it’s a serious mistake to keep insisting that Christianity contains a complete rule of life in the way many people demand. Its founder meant it to sanction and enforce such a rule only in part, not to supply the entire thing.

Worse, this narrow theory is becoming a practical moral problem. It undermines the value of the moral education that many sincere people are now working hard to spread. I worry that by shaping minds and feelings on an exclusively religious template—and discarding the secular standards that once existed alongside Christian ethics, supplementing it while also absorbing some of its spirit—we end up producing (and already are producing) a low, servile type of character. Such a person may submit to what they take to be the Supreme Will, but can’t rise to—or even sympathize with—the idea of Supreme Goodness.

To regenerate human morality, we need ethical sources beyond those that can be derived from Christianity alone. Christianity isn’t an exception to the general rule: in an imperfect stage of human development, the interests of truth require diversity of opinion.

This doesn’t mean that once we stop ignoring moral truths outside Christianity, we must start ignoring the truths within it. That kind of prejudice or blindness, when it happens, is plainly bad. But we can’t expect to avoid it entirely. It’s the price of an immeasurable good: preventing a part of the truth from tyrannizing as if it were the whole.

That exclusive claim must and should be resisted. And if a reaction produces defenders who become one-sided in their turn, we may regret it—but we must tolerate it. If Christians want nonbelievers to be fair to Christianity, they should be fair to nonbelief. Truth is not served by pretending not to know what anyone familiar with literary history knows: a large share of the noblest and most valuable moral teaching was produced not only by people who didn’t know the Christian faith, but by people who knew it and rejected it.


Free Discussion Won’t End Sectarianism—but It Still Matters

I’m not claiming that unlimited freedom to express every possible opinion would eliminate the evils of religious or philosophical sectarianism. Whenever narrow minds become intensely committed to a truth, they tend to preach it, drill it in, and even live it as if no other truth existed—or at least as if no other truth could limit or qualify it.

And I admit something else: the tendency of opinions to harden into sects isn’t cured by free discussion. Sometimes it’s intensified. When a truth that should have been noticed but wasn’t finally gets proclaimed by people viewed as enemies, others may reject it even more violently for that very reason.

But the real benefits of clashing opinions don’t land on the loudest partisan. They land on the steadier, more open-minded observer—the person who isn’t trying to “win,” but to understand. The biggest danger isn’t a heated fight between competing pieces of the truth. It’s the quiet suffocation of one half of the truth. When people are made to hear both sides, there’s still hope. When they hear only one side, mistakes harden into prejudices, and even genuine truth stops working like truth—because it gets stretched, oversold, and warped into something false.

And since one of the rarest mental skills is the ability to judge fairly between two sides when only one has a lawyer arguing for it, truth has a fighting chance only when every side of it—every view that contains even a slice of the truth—finds not just defenders, but defenders who can earn a hearing.

At this point, we can say plainly: freedom of opinion and freedom to express opinion are essential to humanity’s mental health (and everything else rests on that). The case rests on four main reasons:

  • First: If you silence an opinion, it might be true—for all you know. To assume otherwise is to assume you can’t be wrong.
  • Second: Even when the silenced opinion is mistaken, it often contains some truth. And because the “accepted” view is almost never the whole truth, the only reliable way to find what’s missing is to let opposing views collide and reveal the remainder.
  • Third: Even if the accepted opinion is not only true but the whole truth, people won’t really understand it unless it is allowed—better, forced—to face serious, energetic challenge. Without that pushback, most people hold the “true” belief the way they hold a prejudice: by habit, not by grasping the reasons.
  • Fourth: Worse still, the doctrine itself can lose its meaning. It becomes a dead slogan—something recited, not something that shapes character and behavior. It turns into a formal badge of membership that does little good, while also crowding out genuine conviction built from reason and lived experience.

“Sure,” some people say, “let everyone speak—as long as they stay polite.” Before leaving the subject, we should address that idea: that all opinions may be freely expressed, but only if the tone is “temperate” and never crosses the line of “fair discussion.”

The first problem is practical: where exactly is that line? If the test is “does it offend the people being criticized,” then experience gives a blunt answer—people feel offended precisely when the criticism is effective. The sharper and more convincing the challenge, the more likely it is to be labeled “unfair.” Anyone who presses hard on a view, especially when the defenders can’t easily respond, will be called “intemperate” the moment they show any real emotion.

That’s a serious point in practice, but it leads to a deeper objection. Yes: the way a person argues, even for a true position, can be ugly and deserves criticism. But the worst kinds of wrongdoing in argument are usually the hardest to prove—except when someone accidentally gives themselves away. The gravest offenses are things like:

  • arguing dishonestly and strategically (sophistry),
  • hiding inconvenient facts or arguments,
  • distorting what the dispute is actually about,
  • caricaturing the opposing view.

The trouble is that people do these things constantly while sincerely believing they’re being fair. They aren’t necessarily stupid or incompetent in general; they’re just biased, swept along by their side, and blind to their own distortions. So it’s rarely possible, on solid grounds and with a clean conscience, to declare a particular misrepresentation a moral crime—and it would be even less reasonable for the law to step in and police that kind of argumentative misconduct.

Now consider what people usually mean by “intemperate discussion”: invective, sarcasm, personal attacks, and the rest. Condemning those tactics would deserve more sympathy if anyone proposed banning them evenly. But in practice, the demand is almost always one-sided: people want to restrain harshness aimed at the dominant opinion, while giving a free pass to harshness aimed at unpopular views. When someone attacks the mainstream with biting language, they’re scolded for bad manners. When someone uses the same weapons against an outsider view, they’re praised for “honest zeal” and “righteous indignation.”

And whatever harm these tactics cause, it’s greatest when they’re used against those who are comparatively defenseless. Any unfair advantage gained by mockery and intimidation flows overwhelmingly toward what’s already widely accepted.

The ugliest version of this move is when a debater brands the other side as bad people—immoral, corrupt, dangerous. People with unpopular opinions are especially exposed to this kind of slander because they tend to be few, lack influence, and have almost no one invested in defending them except themselves. Meanwhile, those who argue against a prevailing opinion can rarely use the same weapon without it backfiring. They can’t do it safely, and even if they could, it would mostly rebound onto their own cause.

So the pattern is predictable:

  • A minority view can usually get a hearing only by carefully moderating its language and avoiding offense almost to the point of self-censorship.
  • Meanwhile, harsh abuse from the side of the majority does real work: it discourages people from openly holding the minority view, and it scares others away from even listening.

If you care about truth and justice, then the more urgent restraint isn’t on sharp criticism of mainstream beliefs—it’s on the bullying, contemptuous language used to silence dissent. If we had to choose, we’d have more reason to discourage vicious attacks on unbelief than vicious attacks on religion.

Still, the conclusion is straightforward: law and official power should restrain neither side’s rhetoric. Social judgment is the proper tool here, and it should decide case by case. Public opinion should condemn anyone—whatever their position—whose advocacy shows:

  • lack of honesty,
  • malice,
  • bigotry,
  • intolerance.

But it should not assume those vices just because someone takes the “wrong” side (meaning: the side we disagree with). And it should give real respect to anyone, whatever they believe, who has the calmness to see and the integrity to state what their opponents actually think—without exaggerating their faults and without hiding anything that could be said in their favor.

That is the real morality of public discussion. It is often violated. But there are many people who largely live up to it—and many more who sincerely try.

III — Of Individuality, as One of the Elements of Well-Being

Given everything we’ve just said about why people must be free to form opinions—and say them out loud without fear—and about how badly the mind (and then the moral life) shrivels when that freedom is denied or only grudgingly allowed, the next question is obvious: don’t the same reasons also mean people should be free to live their opinions?

Not in every sense, of course. Nobody thinks actions should enjoy the same unlimited protection as thoughts. In fact, even speech loses its shield when it’s used in a way that predictably sets off harm. You can argue in print that grain merchants “starve the poor,” or that private property is “theft,” and society should let you make your case. But if you shout that message to an angry crowd gathered outside a grain dealer’s house—or slap it onto posters and pass them around to whip the crowd up—then you’ve crossed into incitement. At that point, punishment can be justified.

So here’s the boundary: any act that harms other people without a solid justification can—and in serious cases must—be checked by public disapproval and, when necessary, by direct intervention. Individual freedom has to stop short of turning someone into a menace. But if a person avoids interfering with others in matters that belong to them, and is simply following their own judgment in matters that primarily concern themselves, then the very same logic that protects free opinion also protects the freedom to put those opinions into practice, at one’s own risk.

Why? Because all the big truths we rely on to defend freedom of thought apply just as strongly to freedom of living:

  • People aren’t infallible.
  • What we call “truth” is often a half-truth, missing the other side.
  • Agreement is only valuable when it emerges from the widest, freest clash of competing views.
  • Diversity isn’t a problem—it’s an advantage, at least until humans get much better at seeing the whole truth at once.

And those points don’t just govern what people believe. They govern how people act. If it’s useful—given human imperfection—that people hold different opinions, it’s also useful that people try different ways of living. We need experiments in living: room for different characters, different lifestyles, different personal “operating systems,” so long as they don’t harm others. The value of a way of life isn’t proved in theory; it’s proved by someone actually trying it.

In short, whenever the issue doesn’t mainly concern other people, individuality should get to speak up and steer. A life run on autopilot—where your rulebook is mostly other people’s traditions and habits—lacks one of the essential ingredients of happiness and, more importantly, the single most important ingredient of progress, both personal and social.

The hardest part of defending this principle isn’t drawing the line between liberty and social control. That part isn’t mysterious once you accept the goal. The real obstacle is that most people don’t even treat the goal as important. If everyone genuinely felt that the free development of individuality is a core requirement of well-being—not just a nice extra alongside “civilization,” “education,” “culture,” and the rest, but a necessary condition for all of them—then liberty wouldn’t be undervalued, and setting its limits wouldn’t be unusually difficult.

But that’s not how most people think. In the common moral imagination, spontaneity barely counts as something valuable in its own right. Most people are satisfied with the world as they find it—largely because they’re the ones who keep it that way—and they honestly can’t see why the standard ways of living shouldn’t be “good enough” for everyone. Even many social and moral reformers treat spontaneity not as a vital human good, but as an irritant: a messy, maybe rebellious obstacle to their preferred blueprint for society.

Outside Germany, very few people even grasp the idea that Wilhelm von Humboldt—famous both as a scholar and as a statesman—built an entire treatise around: that the proper “end” of human beings, as reason (not fleeting desire) demands, is the fullest, most balanced development of our powers into a complete whole. From that, Humboldt concludes that the central object every person should pursue—and anyone trying to influence others should keep front and center—is individuality of power and development. And he says two conditions make this possible: freedom and a variety of situations. When you combine those, you get vigor and rich diversity, which together produce originality.

Still, even if this way of thinking is unfamiliar—and even if it surprises people that individuality could be valued so highly—the debate can’t really be about whether individuality matters at all. It’s about how much.

Nobody’s ideal of excellent conduct is a world where everyone does nothing but copy everyone else. No one seriously argues that a person should leave no trace of their own judgment or personality on how they live or manage their affairs. At the same time, it would be ridiculous to claim people should live as if no one had learned anything before they were born, as if experience has taught us nothing about what kinds of lives tend to go better or worse.

Of course young people should be taught the established results of human experience—and trained to benefit from them. But once someone reaches maturity, it’s both their privilege and their proper condition as a human being to use experience in their own way. They need to decide what parts of the past actually apply to their circumstances and character.

Traditions and customs do have some authority: they are evidence—imperfect evidence, but still evidence—of what other people have learned. That gives them a claim to some initial respect. But there are several reasons that claim is limited:

  1. Other people’s experience may be too narrow, or they may have read the lesson wrong.
  2. Even if their reading is correct, it may not fit you.
  3. Customs are built for typical situations and typical personalities—and your situation or your personality may not be typical.
  4. Even if a custom is generally good and happens to suit you, following it just because it’s customary doesn’t develop you.

That last point is the key. Your distinctly human abilities—perception, judgment, discrimination, mental activity, even your moral preferences—are exercised only when you have to choose. If you do something simply because “that’s the done thing,” you haven’t chosen at all. You get no practice in figuring out what’s best, or even in wanting what’s best.

Our mental and moral powers work like muscles: they strengthen only through use. Doing something because others do it doesn’t train those powers any more than believing something because others believe it. If the reasons for an opinion don’t convince your own mind, accepting it won’t strengthen your reason—it will probably weaken it. And if the motives for an action aren’t in tune with your own feelings and character (assuming no one else’s rights are at stake), then acting anyway trains you into numbness. It makes your character sluggish instead of alive.

When a person lets the world—or their slice of it—pick their life plan for them, they only need one skill: imitation. But when a person chooses their plan for themselves, they have to bring their whole mind online. They must observe, reason, and judge; they must gather information; they must weigh and discriminate; and once they decide, they need steadiness and self-control to stick to a considered choice. And the more of their conduct they actually direct through their own judgment and feelings, the more they develop these capacities.

Sure, it’s possible to shepherd someone down a decent path and keep them from obvious dangers without requiring much of this. But then what, exactly, have you produced? It matters not only what people do, but what kind of people they become in the doing.

Among all the things human life can be spent improving and beautifying, the most important is the human being themselves. Imagine we could get houses built, grain grown, battles fought, lawsuits argued, even churches built and prayers recited by machinery—human-shaped automatons. Trading real people for those automatons would still be a loss, even if the people we traded away were only the “starved specimens” that modern civilization too often produces. Human nature isn’t a machine you assemble according to a blueprint and set to a prescribed task. It’s a living thing—more like a tree—that needs to grow outward in multiple directions, following the inner forces that make it alive.

Most people will agree, at least in theory, that it’s good for people to use their minds, and that an intelligent following of custom—or even the occasional intelligent break from custom—is better than blind, mechanical conformity. Up to a point, we admit that our understanding should be our own.

But we’re far less willing to admit that our desires and impulses should be our own too. Many treat having strong personal impulses as mostly dangerous—a temptation, a trap. Yet desires and impulses are just as much a part of a fully developed human being as beliefs and self-restraint. Strong impulses are risky only when they’re lopsided: when some drives are overdeveloped while other necessary ones remain weak and unused.

People don’t do wrong because their desires are strong. They do wrong because their conscience is weak. There’s no natural link between strong impulses and weak conscience. If anything, the natural link runs the other way.

To say someone has stronger and more varied desires and feelings than someone else is just to say they have more raw human material to work with. That means they may be capable of more harm—but it also means they’re capable of more good. Strong impulses are another name for energy. Energy can be misused, but an energetic nature always has the potential to produce more good than a lazy, passive one.

People with the strongest natural feelings are also the people whose cultivated feelings can become strongest. The same intense sensitivities that make personal impulses vivid are also the source of the deepest love of virtue and the firmest self-control. Society protects itself and does its duty not by refusing this “hero-making” material because it doesn’t know how to shape it, but by cultivating it.

When someone’s desires and impulses are truly their own—when they express their nature as it’s been formed and refined by their own development—we say they have character. Someone whose desires and impulses aren’t their own has no character, any more than a steam engine has a character. And if their impulses are not only their own but also strong, and governed by a strong will, then they have an energetic character.

So anyone who argues that we shouldn’t encourage individuality in desires and impulses is, whether they realize it or not, arguing that society doesn’t need strong natures; that it isn’t better off with many people of real character; and that a high general level of energy isn’t desirable.

There were times—early in the history of societies—when spontaneity and individuality really did outrun society’s ability to discipline and contain them. Back then, the problem was getting strong-bodied or strong-minded people to obey rules that required self-control. To solve that, law and discipline (like popes struggling against emperors) claimed authority over the whole person, trying to control every aspect of life in order to control character—because society didn’t yet have other reliable tools for social binding.

But today, society has largely won that fight. The danger now isn’t an excess of individuality; it’s a shortage of it. The world has changed drastically since the era when the powerful lived in habitual rebellion against laws and had to be “chained up” so that people around them could enjoy even a little security.

Now, from the top of society to the bottom, almost everyone lives as if under the gaze of a hostile censor. And it isn’t only in matters that affect others. Even in matters that mostly affect only themselves, people don’t ask:

  • What do I prefer?
  • What fits my temperament?
  • What would give the best part of me room to grow?

Instead they ask:

  • What fits my position?
  • What do people of my income and “station” usually do?
  • And worst of all: what do people above me usually do?

It’s not that they consciously weigh custom against inclination and pick custom. The deeper problem is that it doesn’t even occur to them to have an inclination except toward what’s customary. The mind itself bends under the harness. Even in leisure, conformity comes first: people like things in crowds, and they “choose” only among the options already socially approved. Any unusual taste or eccentric conduct is avoided as instinctively as a crime. And after years of not following their own nature, they end up with no nature left to follow: their capacities wither, their desires go weak, their pleasures go stale, and they drift into lives with few real opinions or feelings that are genuinely homegrown.

Is that what we want human nature to become?

Some people would say yes—at least on a Calvinistic view of life. On that theory, the fundamental human crime is self-will. The whole of human goodness is reduced to obedience. You don’t choose; you submit. “This is your duty; anything else is sin.” Human nature is treated as so corrupt that no one can be saved until that nature is essentially killed inside them. From that perspective, crushing human capacities—our faculties, our susceptibilities—isn’t tragic; it’s fine. A person needs only one ability: surrender to the will of God. Any other capacity is valuable only insofar as it helps them obey more efficiently—and if it doesn’t, they’d be better off without it.

That is Calvinism in its strict form. Many people who wouldn’t call themselves Calvinists still hold a softened version. The “softening” comes from reading God’s will less harshly: allowing that people may gratify some inclinations. But even then, the rule is the same—gratify them not in the way you yourself choose, but in the way you’re told is acceptable. In other words: gratification by obedience, in a pattern laid down by authority, and therefore—by the logic of the situation—essentially the same for everyone.

In one subtle form or another, this narrow theory of life has real momentum today. And it produces exactly the kind of human character it deserves: pinched, cramped, stiff with rules, and hostile to growth. Plenty of people sincerely believe this is how the Creator intended humans to be—just as plenty of people have believed that trees look best when they’re clipped into stumps or sculpted into animal shapes, rather than allowed to grow as nature made them.

If you believe any part of religion is the idea that a good Creator made human beings, then it fits that belief much better to think this: we were given our abilities so we could develop them—not stamp them out. On that view, what would please such a Being isn’t self-erasure, but every step we take toward the best version of what a human can be: deeper understanding, stronger action, richer enjoyment.

That’s why there’s more than one kind of human excellence. The severe, Calvinist ideal treats humanity as something to be mostly denied. But there’s another tradition that says the human “self” isn’t merely a problem to suppress.

  • Self-denial matters.
  • But so does self-assertion—“pagan self-assertion,” as it was once called.

Alongside the Christian and Platonic ideal of self-government, there’s the Greek ideal of self-development. The first can enrich the second, but it shouldn’t erase it. Yes, it may be better to be a John Knox than an Alcibiades. But better than either is a Pericles—someone whose excellence is not just intensity or restraint, but breadth, balance, and public-spirited greatness. And if a Pericles existed today, he wouldn’t lack whatever was genuinely admirable in John Knox.

Individuality isn’t a nuisance to sand down. It’s the raw material of a great life. People become noble—worth looking at, in the best sense—by drawing out what’s distinctive in themselves and cultivating it, while staying within the boundaries set by other people’s rights and interests. And because our work carries our character into the world, the same process makes life itself more vivid: more varied, more stimulating, better fuel for big thoughts and elevated emotions.

A person whose individuality is well developed becomes:

  • more valuable to themselves (because their life is fuller and more alive),
  • and therefore more valuable to others.

When the “units” have more life in them, the whole society has more life too.

Of course, some restraint is necessary. You can’t let the strongest personalities trample the rights of everyone else. But even that kind of “compression” comes with compensation—especially when you look at human growth. The opportunities an individual loses when they’re prevented from indulging harmful impulses are usually opportunities that would have come at other people’s expense anyway. And even for the person being restrained, there’s a real tradeoff: curbing the selfish part of our nature makes room for better development of the social part.

There’s a big difference between two kinds of restraint:

  • Restraint for justice’s sake—rules that protect others—strengthens the capacities that aim at other people’s good.
  • Restraint by mere disapproval—being blocked in harmless things simply because others dislike them—develops nothing worthwhile, except, sometimes, the toughness required to resist.

And if you don’t resist—if you simply submit—it doesn’t build character at all. It dulls it. It flattens the whole personality.

So if we want to give human nature a fair chance, we have to accept something basic: different people must be free to live different lives. Whenever an era has allowed that kind of latitude, history has remembered it. Even despotism doesn’t do its worst work as long as some individuality survives under it. And anything that crushes individuality is despotism, whatever label it wears—whether it claims to enforce God’s will or society’s preferences.

At this point, you might think the argument is finished. If individuality is just another word for development, and if only the cultivation of individuality produces fully developed human beings, what more needs saying? What higher praise could you give any social arrangement than that it helps people become as excellent as they can be? And what harsher indictment could you give any obstacle than that it prevents this?

But that won’t persuade the people who most need persuading. So we have to show something else too: that developed people are useful to the undeveloped. We have to make it clear to those who don’t care about liberty—and wouldn’t use it themselves—that they still get something intelligible in return when they allow other people to use it freely.

To start, the undeveloped might learn something from the developed.

No one seriously denies that originality is valuable in human life. We always need people who can:

  • discover new truths,
  • notice when yesterday’s “truths” no longer fit the facts,
  • start new practices,
  • and model better conduct, better taste, and better judgment about how to live.

You’d only dispute this if you believed humanity has already reached perfection in every habit and institution.

Still, it’s true that not everyone can provide this benefit. Compared with the whole human race, only a small number of people’s experiments would actually improve established practice if others copied them. But those few are the salt of the earth. Without them, life would turn into a stagnant pond.

And their role isn’t only to invent what didn’t exist before. They also keep existing good things from going stale. Because if nothing new ever happened, would we still need thinking? Or would people simply keep performing inherited routines, forgetting why they do them—behaving like trained animals rather than reflective human beings?

Even our best beliefs and practices tend to harden into habit. They become mechanical. Unless a steady succession of original people keeps reminding us of the reasons behind those beliefs and practices, they decay into mere tradition: dead forms that can’t withstand even a small shock from something truly alive. Civilization itself can rot into empty ceremony, as it did in Byzantium.

Genius will always be a minority. But if you want genius at all, you have to preserve the conditions it needs. Genius needs freedom the way lungs need air. By definition, people of genius are more individual than most. That means they don’t fit—without damage—into the few standardized “molds” society offers, mostly to spare its members the effort of shaping themselves.

When such people, out of timidity, agree to be forced into one of those molds, everything in them that can’t grow under that pressure stays stunted. Society gets little benefit from their gifts. But if they have the strength to break free, society often turns them into a public warning label: “wild,” “erratic,” and so on—like complaining that the Niagara River refuses to flow politely like a Dutch canal.

That’s why I’m stressing the value of genius—and the need to let it unfold in both thought and action. In theory, everyone praises genius. In practice, almost everyone is indifferent to it. People like genius when it produces an exciting poem or a beautiful painting. But when genius means what it really means—originality in thinking and living—most people quietly believe they can do without it.

That’s not surprising. Unoriginal minds can’t feel the use of originality. They can’t picture what it does for them. And if they could clearly see it in advance, it wouldn’t be originality. So the first job originality performs is simply this: it opens people’s eyes. Once that happens, they at least have a chance of becoming original themselves.

Meanwhile, remember a basic fact: nothing good ever entered the world without someone being the first to do it. Everything valuable that exists began as somebody’s originality. So people should be modest enough to accept that originality still has work left to do—and to understand that the less they sense a need for it, the more they likely need it.

In reality, whatever respect we claim to have for mental superiority, the broad drift of the modern world is to make mediocrity the dominant force. In ancient times, in the Middle Ages, and—though less and less—through the long transition from feudalism to the modern era, the individual could be a power in their own right. If they also had great talents or high social rank, they could be a major force.

Now, individuals disappear into the crowd.

In politics it’s almost a cliché to say that public opinion rules the world. But the real point is deeper: the only power that truly deserves the name is the power of the masses—and of governments insofar as they make themselves the instrument of mass tendencies and instincts. The same thing is true in private life: in morals, in social relations, in what gets praised and what gets punished.

And “public opinion” isn’t always the same public. In America it’s the whole white population; in England it’s mainly the middle class. But it’s always a mass—in other words, collective mediocrity.

Here’s what’s newer still: the mass no longer borrows its opinions from obvious authorities—church leaders, state dignitaries, public intellectuals, or even books. Instead, its thinking is done for it by people much like itself, speaking in its name, in the moment, through newspapers.

I’m not saying every part of this is bad. I’m not claiming something better is generally possible, given the current low level of intellectual development. But even so, rule by mediocrity produces mediocre rule. No democracy—and no broad aristocracy—has ever, in its political actions or in the character it fosters, risen above mediocrity, except insofar as “the many,” at their best, allow themselves to be guided by the influence and counsel of a wiser and more gifted one, or a few.

Because every wise or noble thing begins with individuals—often with one individual first. The ordinary person’s honor isn’t that they originate these things. It’s that they can recognize them when they appear, respond to them inwardly, and follow them with understanding rather than blind obedience.

This is not an argument for “hero-worship”—the kind that applauds a genius for seizing power and forcing the world to obey him. A great mind can claim only this: the freedom to show the way. The power to compel others down that path doesn’t just violate everyone else’s freedom and development; it also corrupts the strong person himself.

Still, when the opinions of average people become the dominant force everywhere—as they now are, or are becoming—the natural counterweight is more pronounced individuality among those who stand on higher ground intellectually. In times like these, exceptional people shouldn’t be discouraged from living differently from the crowd. They should be encouraged.

In earlier periods, there was no point in departing from the mass unless you could depart better. But today, the mere example of nonconformity—the simple refusal to kneel to custom—is itself a public service. Precisely because the tyranny of opinion makes “eccentricity” a mark of disgrace, it’s healthy—necessary, even—that people be eccentric. Eccentricity has always been abundant wherever strength of character has been abundant. The amount of eccentricity in a society has usually matched the amount of genius, mental energy, and moral courage within it. That so few people now dare to be eccentric is one of the clearest signs of the danger of our time.

I’ve said we should give the widest possible scope to unconventional ways of living so we can discover, over time, which deserve to become new customs. But independence and disregard of custom don’t deserve support only because they might generate better habits for everyone. The right to live in one’s own way isn’t reserved for the brilliant.

There’s no reason human life should be built from one pattern—or even a small handful. If someone has a decent amount of common sense and experience, their own way of arranging their life is best for them—not because it’s objectively best in the abstract, but because it’s their way.

Human beings aren’t sheep. And even sheep don’t all look exactly the same.

Think of something as simple as clothing: you can’t get a coat or boots that fit unless they’re made to your measurements, or you have a huge range of options. So why would it be easier to “fit” a person with a life plan than with a pair of shoes? Are people more alike in their whole mental and moral makeup than they are in the shape of their feet?

Even if the only differences among people were differences in taste, that alone would be enough reason not to force everyone into one model. But the differences run deeper. Different people require different conditions for their inner development, and they can no more thrive in the same moral atmosphere than every kind of plant can thrive in the same climate.

What helps one person grow can block another.

The same lifestyle can be:

  • a healthy stimulus for one person—keeping their capacities for action and enjoyment in good working order,
  • and a crushing burden for another—distracting them until their inner life is suspended or flattened.

People vary so widely in what gives them pleasure, what causes them pain, and how different physical and moral influences affect them that, without a corresponding diversity in how they live, they won’t get their fair share of happiness—and they won’t reach the full mental, moral, and aesthetic stature their nature makes possible.

So why does tolerance—in the realm of social judgment—so often extend only to lifestyles that can protect themselves by sheer numbers? Outside a few monastic settings, diversity of taste is never entirely rejected. Nobody is condemned simply for liking or disliking rowing, smoking, music, athletics, chess, cards, or studying, because both the fans and the non-fans of each are too numerous to be bullied into silence.

But the person—especially the woman—who can be accused of doing “what nobody does,” or not doing “what everybody does,” gets talked about as if they’d committed a serious moral offense. To enjoy the luxury of living a little differently without paying for it socially, people often need a title, or some other badge of rank, or at least the reflected importance of being connected to those who have it.

And I mean a little differently. Anyone who indulges much in doing as they like risks more than snide remarks. They can be threatened with something worse: being labeled mentally unsound and having their property taken away and handed over to relatives.

There’s also something about the current direction of public opinion that makes it especially hostile to conspicuous individuality. The average person isn’t just moderate in intellect; they’re also moderate in desire. They don’t want anything strongly enough to push them into unusual choices. And because they don’t understand people who do, they lump anyone with strong tastes together with the “wild” and “intemperate” types they already feel superior to.

Now add one more ingredient: a powerful moral reform movement. That’s exactly what we’ve had in these times. Real progress has been made—more regularity of conduct, fewer excesses. There’s a broad philanthropic impulse too, and it finds no more tempting target than the “moral and practical improvement” of our fellow human beings. Taken together, these trends make the public more likely than in most earlier periods to lay down general rules for everyone—and to pressure every person to conform to the approved standard.

And that’s the standard—spoken or unspoken—that ends up running the show: don’t want anything too much. The model citizen, in this view, is someone with no sharp edges at all. The goal is to flatten a person’s nature—squeezing down anything vivid or unusual—until they fit the safe, familiar outline of “normal,” the way a foot can be deformed by being bound too tightly.

When an ideal throws away half of what makes a life worth living, it doesn’t produce the other half at its best. It produces a cheap knockoff. Instead of big energy guided by clear thinking, and strong emotions held in check by conscience, you get the opposite: faint feelings and timid energies. And because there’s not much force in them, they’re easy to keep in line. People can “follow the rules” simply because they don’t have enough will or reason to do much else.

Even truly energetic personalities are starting to feel like folklore—something we talk about as if it belonged to a previous age. In England, there’s barely any socially acceptable outlet for real drive outside of business. Yes, commerce still absorbs a lot of effort. But whatever energy survives the workday often gets poured into a “hobby”—maybe useful, even charitable, but almost always narrow: one small project, one contained cause, one limited arena.

So England’s greatness becomes mostly collective. Individually, people are small; we look capable of big things mainly because we’ve gotten good at combining into committees, movements, and institutions. Many moralists, religious leaders, and philanthropic reformers are perfectly satisfied with this. But it wasn’t people like that who made England what it has been—and it won’t be people like that who keep it from sliding into decline.


Custom: the quiet despot

The despotism of custom is one of the most reliable obstacles to human progress. It’s always pushing back against that stubborn human impulse to aim higher than whatever happens to be standard. Depending on the moment, we call that impulse the spirit of liberty, or the spirit of progress, or the spirit of improvement.

But those ideas aren’t identical.

  • Improvement doesn’t always mean freedom; you can “improve” people by forcing change on them.
  • Liberty, when it resists forced improvement, can temporarily line up with the enemies of change.

Still, there’s one source of improvement that doesn’t run dry: liberty. Why? Because freedom creates multiple independent laboratories of living. Every person becomes a possible center of new ideas, new habits, and new experiments. Whether you come to it through a love of liberty or a love of improvement, the “progressive principle” inevitably clashes with custom—because it requires, at minimum, breaking free from custom’s yoke.

That tug-of-war is one of the main drivers of human history. In fact, large parts of the world have, in a blunt sense, no history, because custom rules so completely that nothing genuinely new can take hold. The clearest case is “the East”: there, custom is the final court of appeal in almost everything. “Justice” and “right” effectively mean “what conforms.” And unless you’re a tyrant drunk on power, you don’t even think to resist.

Look at what that produces. Those societies must once have had originality. They didn’t spring into existence already crowded, literate, and skilled in the arts. They built those achievements. They were, at one time, among the greatest and most powerful nations on earth. And now? They’ve become subjects or dependents of peoples whose ancestors were still living in forests when the older civilizations had palaces and temples—peoples who, crucially, were not entirely pinned down by custom, because custom shared the stage with liberty and change.

A society, it seems, can be progressive for a while—and then stop. When does it stop? When it loses its individuality.


Europe’s risk: change without difference

If Europe ever suffers the same fate, it won’t look identical. The form of custom threatening Europe isn’t pure stillness. It doesn’t ban change outright. Instead, it bans being different, while still allowing change—so long as everyone changes together.

Fashion is a perfect example. We no longer wear the fixed outfits of our ancestors. But we still must dress like everyone else. The only permitted variation is the collective kind: styles can change once or twice a year, as long as the whole crowd moves in sync. That way, change happens for the sake of change, not because people independently discover what’s beautiful, useful, or comfortable. If change were really driven by taste or practicality, different people would adopt it at different times—and they’d drop it for different reasons. Uniformity can’t tolerate that.

Europe is, in many ways, both progressive and changeable. We invent new machines and keep them—until something better replaces them. We chase improvement in politics and education, even in morals. Though there’s a catch: our “moral improvement” often means persuading, pressuring, or forcing other people to become as good as we believe we are.

So it isn’t progress that we dislike. We’re proud of progress. We flatter ourselves that we’re the most advanced people who ever lived. What we actually fight is individuality. We’d congratulate ourselves if we managed to make everyone the same—forgetting that difference is often what teaches us. Seeing someone unlike us is often the first thing that exposes the limits of our own way of being: the imperfections in our “type,” the strengths in another, and the possibility of combining the best of both into something better than either alone.

China offers a warning. It was a nation with enormous talent and, in some respects, real wisdom—helped along by a rare historical gift: it developed early on a particularly effective set of customs, shaped (at least in part) by thinkers whom even an enlightened European could, with some caveats, call sages and philosophers.

China also built an impressive machine for transmitting its best ideas throughout society. It worked hard to stamp its “wisdom” onto every mind, and to ensure that those who absorbed the most of it rose to positions of honor and power. You might think a society like that had discovered the secret engine of progress—and would stay at the front of the world’s movement.

Instead, it became stationary. It has stayed that way for thousands of years. If it is ever to improve further, the push will have to come from outside.

And here’s the irony: China succeeded—almost perfectly—at the very project many English reformers pursue so eagerly: making a whole people alike, training everyone to govern their lives by the same rules and maxims. And this is what you get.

Modern public opinion in Europe is, in a loose and informal way, what China’s educational and political systems were in a strict and organized way. If individuality can’t push back against that pressure, then Europe—despite its proud past and its proclaimed Christianity—will drift toward becoming another China.


Why Europe advanced in the first place

So what has protected Europe so far? What made the European family of nations a growing, improving part of humanity rather than a frozen one?

Not some innate moral superiority. When Europeans do show “excellence,” it’s more often the result of development than its cause. The real reason has been Europe’s extraordinary diversity of character and culture.

People, social classes, and nations have been strikingly unlike one another. They’ve carved out many different routes through life, and those routes have led to many valuable discoveries and forms of achievement. And yes—at any given period, the travelers on different paths have tended to be intolerant. Each group would often have loved nothing more than to force everyone else onto its own road.

But those attempts rarely succeeded permanently, and over time each side ended up receiving the benefits the others had to offer. Europe owes its progressive, many-sided growth, I’d argue, to this plurality of paths.

That advantage is shrinking.

Europe is moving—noticeably—toward the Chinese ideal of sameness. Tocqueville observed that French people in his own day resembled one another far more than the French of the previous generation. The same observation applies, even more strongly, to England.

Wilhelm von Humboldt identified two conditions necessary for genuine human development—because they’re what make people meaningfully different from one another:

  • Freedom
  • Variety of situations

In England, that second condition—variety of situations—is shrinking by the day. The environments that shape different classes and individuals are increasingly similar. In the past, different social ranks, neighborhoods, trades, and professions lived in what were almost separate worlds. Now, they largely live in the same one.

More and more, people:

  • read the same things
  • hear the same things
  • see the same things
  • go to the same places
  • aim their hopes and fears at the same targets
  • share the same rights and liberties
  • use the same tools for defending those rights

Differences of position still exist, but they’re small compared to the differences that have disappeared. And the assimilation keeps accelerating.

Several forces drive it. Political changes tend to lift the low and lower the high. Expanding education pulls people under shared influences and gives them access to the same pool of facts and feelings. Better communication brings distant populations into contact and encourages people to move around, constantly remixing where they live. The growth of commerce and manufacturing spreads the comforts of easier circumstances and throws even the highest ambitions open to competition—so the desire to rise stops being the special mark of one class and becomes a general trait.

But stronger than all of these is something else: in Britain and other free countries, public opinion has become the dominant power in the state. As old social heights that once allowed a person to ignore “the crowd” are leveled, and as practical politicians increasingly lose even the idea of resisting the public will (once it’s clear the public has one), nonconformity loses its social shelter. There’s less and less of a standing social force that, precisely because it resists majority rule, has a reason to protect unpopular opinions and unusual ways of living.

Put all those pressures together and you get a heavy weight pressing down on individuality. It’s hard to see how individuality can hold its ground at all—let alone easily. It will become harder and harder unless the more thoughtful part of the public comes to understand its value: unless they can see that it’s good for people to be different, even when the differences don’t look “better” to them, even when some differences seem—at first glance—worse.

If individuality is ever going to reassert its claims, the moment is now, while the forced blending hasn’t finished its work. Early in the process, resistance can still succeed. Wait too long—until life has been reduced to one nearly uniform pattern—and every deviation will start to look not merely odd, but wicked: impious, immoral, monstrous, “against nature.” Human beings quickly lose the ability to imagine diversity if they go long enough without seeing it.

IV — Of The Limits To The Authority Of Society Over The Individual

Where, exactly, is the boundary line between personal freedom and social authority? When should your choices be entirely your own—and when does the community get a say?

A clean way to draw the line is this:

  • Individuality should govern the parts of life where the main stake belongs to the individual.
  • Society should govern the parts of life where the main stake belongs to everyone else.

Even though society isn’t literally founded on a contract—and it’s not helpful to invent one just to “prove” obligations—there’s still an obvious moral fact: if you benefit from living among other people, you owe something back. Simply being part of a community makes it necessary that everyone follow certain basic rules in how they treat one another.

At minimum, those rules include two duties:

  1. Don’t harm each other’s interests—or more precisely, don’t violate those interests that the community, by law or by common understanding, treats as rights.
  2. Carry your fair share of the burdens—the work, costs, and sacrifices needed to protect society and its members from harm.

Society is justified in enforcing these duties, even forcefully, against people who try to dodge them.

But that’s not the whole story. Someone’s actions can be harmful—or thoughtless and inconsiderate—without crossing the line into a clear violation of anyone’s legal rights. In those cases, society may not be able to punish them by law, but it can respond through public opinion: criticism, disapproval, social distance, and the like.

The moment your behavior negatively affects other people’s interests, society has jurisdiction. Then it becomes a real question—open to argument—whether interfering would actually improve the general welfare.

However, there’s a very different category of conduct: what you do that affects only yourself (or would affect others only if they voluntarily choose to be involved), assuming everyone involved is an adult and mentally competent. In that domain, there should be complete freedom—legally and socially—to act as you wish and live with the consequences.

It’s a serious misunderstanding to hear that and conclude, “So we should all be selfish and ignore each other.” That’s not the point. If anything, we need more genuine, unselfish effort to help other people live well. But benevolence has better tools than punishment. You can persuade, support, model, warn, teach—without reaching for the literal or metaphorical whip.

I’m not dismissing the so-called “self-regarding” virtues—self-control, judgment, discipline, prudence. They matter enormously, maybe second only to social virtues, and perhaps not even second. Education should cultivate both. Still, education works partly through guidance and persuasion, and after childhood is over, persuasion is the only legitimate way to promote these virtues. Adults can owe each other help in seeing what’s better versus worse, and encouragement to choose what elevates rather than what degrades. We should constantly nudge one another toward stronger intellectual and moral habits—toward wiser goals, and away from foolish ones.

But there is a hard limit: neither one person nor a crowd gets to tell a competent adult, “You may not live your life in the way you think benefits you,” when the matter concerns only the person himself.

Why? Because he has the most at stake in his own well-being. Other people’s interest in his happiness—except in cases of deep personal attachment—is usually small compared to his own. And society’s interest in him as an individual (apart from how he treats others) is indirect and partial. Meanwhile, when it comes to his feelings, circumstances, and needs, even an “ordinary” person typically knows vastly more than any outsider can. When society tries to override his judgment about his own life, it’s acting on general assumptions—assumptions that can be wrong, and even when broadly right, are easily misapplied to particular cases by people who only see the situation from the outside.

So in this realm, individuality has its proper territory. In social life, we need general rules so people can predict what to expect from one another. But in a person’s own concerns, his spontaneity—his capacity to choose and try—deserves free play. Others may offer considerations to sharpen his judgment, and exhortations to strengthen his will. They may even press their advice on him. But he remains the final judge.

Yes, he might make mistakes even after warnings. Still, the harm from those mistakes is usually outweighed by the deeper harm of allowing others to force him into what they call “his good.”

This doesn’t mean that other people’s feelings about you should be unaffected by your personal qualities. That would be impossible—and it wouldn’t even be desirable. If someone is outstanding in qualities that promote his own good, admiration makes sense; he’s closer to an ideal of human development. If he’s badly lacking, the opposite feeling naturally follows. There are degrees of foolishness—and of what people label, somewhat imprecisely, “lowness” or “depraved taste”—that don’t justify harming the person, but do make him a natural object of distaste, and in extreme cases contempt. A person who strongly possesses the opposite qualities can hardly help feeling that way.

Even if someone wrongs no one, he can behave in a way that forces others to see him as a fool or as someone of lesser character. Since that judgment is itself a consequence he’d rather avoid, it can be a kindness to warn him ahead of time—just as you’d warn someone about any other unpleasant result their choices are likely to bring.

It would actually be better if we did this more often than modern “politeness” allows. We treat honest criticism as rude or presumptuous when it might be a genuine service.

We also have the right to act on our unfavorable opinion of someone, so long as we’re not trying to crush his individuality—just exercising our own. For example:

  • We aren’t obligated to seek someone’s company.
  • We may avoid him (quietly, without making a performance of it), because we’re free to choose the company we enjoy.
  • We may warn others about him if we think his example or conversation could be harmful to those around him.
  • We may prefer others over him when doing optional kindnesses—except where the kindness is specifically aimed at helping him improve.

Through these channels, people can face serious social costs for faults that mainly harm themselves. But notice what makes those costs legitimate: they come as the natural, almost automatic consequences of their behavior—not as penalties deliberately inflicted “to punish.” If someone is rash, stubborn, vain; if he lives beyond his means; if he can’t resist destructive indulgences; if he chases bodily pleasures at the expense of feeling and intellect—then he should expect others to think less of him, and to feel less warmth toward him. He can’t fairly complain about that unless he has earned special claims on others through exceptional excellence in how he treats them—claims that his private self-damage doesn’t cancel.

The core point is this: for the portion of someone’s conduct that concerns only his own good and doesn’t affect others’ interests in their dealings with him, the only “penalties” he should face are those inseparable from other people’s honest unfavorable judgment.

But when a person’s actions harm others, everything changes. Violating rights; causing loss or damage not justified by one’s own rights; lying or double-dealing; using advantages unfairly or meanly; even selfishly refusing to defend others from injury—these deserve moral condemnation, and in serious cases, moral retaliation and punishment. And not just the acts, but the underlying dispositions are properly called immoral: cruelty; malice; envy (one of the most anti-social passions); deceitfulness; explosive anger without real cause; resentment out of proportion to the offense; the love of dominating; the urge to grab more than one’s share of benefits (the Greeks called it pleonexia); pride that feeds on other people’s humiliation; and the kind of egotism that treats self as the center of the universe and tilts every ambiguous question in its own favor.

These traits make up a genuinely bad and hateful character. They are not like the earlier “self-regarding” faults, which may show stupidity or lack of self-respect but don’t, by themselves, amount to wickedness. Self-regarding faults become objects of moral blame only when they involve a breach of duty to others—because, for others’ sake, a person is obligated to take some care of himself.

That’s why so-called “duties to ourselves” aren’t socially enforceable unless circumstances also make them duties to other people. When “duty to oneself” means anything beyond simple prudence, it usually points to self-respect or self-development. And for neither of these should a person be accountable to the public, because it isn’t actually for the good of humankind that he be held answerable to other people for them.

This difference—between losing respect because you lack prudence or personal dignity, and being condemned because you violated others’ rights—is not just wordplay. It changes both how we feel and how we should act.

If someone displeases us in matters where we know we have no right to control him, we may dislike it. We may keep our distance. But we won’t see ourselves as entitled to make his life miserable. We’ll recognize that he already bears—or soon will bear—the full cost of his error. If he ruins his life through bad management, we won’t try to ruin it further. Rather than punishing him, we should be inclined to soften the blow: show him how to avoid the trap, or how to repair the damage. He may stir pity, perhaps dislike, but not anger. We won’t treat him as an enemy of society. At most, we leave him alone—unless we choose to show concern.

But it’s entirely different when he breaks the rules necessary to protect others. Then the bad consequences don’t fall mainly on him—they fall on other people. In that case, society, as the protector of its members, must respond: it must impose pain for the express purpose of punishment, and it must ensure the punishment is serious enough. In one situation, he stands before us as an offender, and we are called not only to judge but, in some form, to enforce a sentence. In the other, it is not our role to make him suffer—except incidentally, as a byproduct of using the same freedom to shape our own lives that we grant to him in shaping his.

Many people will reject this whole distinction between what concerns only the individual and what concerns others. “How,” they ask, “can any part of a community member’s conduct be irrelevant to the community?” After all, no one is totally isolated. It’s hard to hurt yourself seriously or persistently without the harm spilling over—at least to people close to you, and often much further.

If someone damages his property, he harms those who depend on it, directly or indirectly, and he usually reduces the community’s total resources at least a little. If he degrades his mind or body, he not only hurts the people who relied on him for happiness, but he also makes himself less capable of providing the services he owes to others generally. He may even become a burden on others’ affection and generosity. If this kind of self-destruction became common, it would be among the most serious drags on the total amount of good in society.

And even if a person’s follies and vices don’t directly injure others, critics say, he still harms people by his example. So shouldn’t society force him to control himself, for the sake of those who might be corrupted or misled by watching him?

They’ll add one more argument: even if the damage could be kept inside the person himself, should society really leave clearly unfit people to steer their own lives? We already agree that children and minors deserve protection from themselves. So why not extend similar protection to adults who are, practically speaking, just as incapable of self-government?

Gambling, drunkenness, sexual irresponsibility, idleness, filth—these can be just as damaging to happiness and just as obstructive to improvement as many things the law already forbids. So why shouldn’t the law, as far as it practically can, try to repress them too? And since law is inevitably imperfect, shouldn’t public opinion organize itself into a kind of “moral police,” hitting these vices with harsh social penalties?

And, they insist, none of this is really about suppressing individuality or blocking new “experiments in living.” The goal is to prevent behaviors that humanity has tried for ages and found wanting—things experience has condemned as unsuitable for anyone’s true individuality. At some point, after enough time and enough repeated evidence, a moral or prudential truth should count as settled. The hope is simply to stop each new generation from walking off the same cliff that killed the last.

I grant much of the factual premise. When a person harms himself, the fallout often reaches those close to him through both sympathy and shared interests, and in a smaller way reaches society at large. But when that self-harm leads someone to violate a distinct, identifiable obligation to specific other people, then the case no longer belongs to the “self-regarding” category. It becomes properly subject to moral condemnation in the full sense.

For example: if someone’s drinking or extravagance makes him unable to pay his debts, he deserves blame and may even deserve punishment—but for failing his creditors, not for being extravagant. If a person who has taken on the moral responsibility of a family becomes, through the same causes, unable to support or educate them, he’s rightly condemned and might justly be punished—but again, for violating his duty to his family, not for the private vice itself. And if the money he owed them had been diverted not to indulgence but to the most prudent investment, the moral failure would be the same: the duty was still broken.

George Barnwell murdered his uncle to get money for his mistress; but if he had murdered him to raise capital for a business, he would just as surely have been hanged.

Again and again, you see this pattern: someone’s private choices cause pain to the people around them, and the crowd wants to treat that as its business.

Take the classic case: a person who torments their family through addiction. Yes—they deserve blame for the cruelty or the ingratitude. But notice what we’re blaming them for. We’re blaming them for failing to show the basic consideration they owe to the people who share their lives or depend on them. And the same logic applies even when the habits aren’t “vices” in any grand moral sense. If you choose a way of living that consistently makes life miserable for those bound up with you—your partner, your children, your closest dependents—and you have no stronger duty pulling you there and no reasonable self-justification, then you deserve moral disapproval.

Still, the disapproval should be aimed at the failure of consideration, not at every private quirk that may have led there. We can condemn the neglect without claiming authority over the person’s entire inner life, tastes, or personal “errors” that harmed no one directly.

The same boundary shows up with public duties. If someone, through purely self-regarding behavior, makes themselves unable to carry out a clear obligation they owe to the public, then they’ve committed a social offense. Nobody should be punished just for being drunk. But a soldier or a police officer should be punished for being drunk on duty. The line is simple:

  • If there’s definite harm to someone else, or
  • A definite, recognizable risk of harm to another person or to the public,

then we’ve left the realm of personal liberty and entered the realm of morality or law.

But what about the fuzzier kind of “harm”—the potential or constructive injury people claim you do to society when your actions break no public duty and visibly hurt no specific person other than yourself? In those cases, the cost is one society should be willing to pay for the larger good of human freedom.

If you’re going to punish adults for failing to take care of themselves, be honest about what you’re doing: punish them “for their own sake,” not under the pretense that you’re protecting society from the possibility that they might become less useful. Society doesn’t even claim the right to demand every benefit a person might have produced. So don’t smuggle that demand in through the back door.

And in any case, it’s absurd to act as if society has no tools except this: wait for people to do something foolish, then punish them—legally or socially—for the foolishness. Society holds enormous power over people for the entire first part of their lives. Childhood and adolescence are long. That’s the window in which society can try—through education, upbringing, and the shaping of circumstances—to form adults capable of reason and self-control.

Of course society can’t make the next generation perfectly wise or good; it isn’t wise or good enough itself. Even its best-intended efforts sometimes fail, and sometimes it “succeeds” for the wrong reasons. Still, as a whole, society can raise the next generation to be as good as—and a bit better than—itself. If it lets large numbers grow up as overgrown children, unable to act on rational, long-term motives, then society should blame itself before it reaches for the whip.

Because look at what society already has at its disposal:

  • the power of education,
  • the enormous influence of accepted opinion over people least able to judge independently, and
  • the “natural” penalties that come from social disapproval—the loss of respect, the cold shoulder, the contempt that falls on those around us when we disappoint or disgust them.

With all that, society has no excuse for demanding extra power: the power to issue commands and enforce obedience in the purely personal parts of people’s lives—areas where, on any fair principle of justice or good policy, the decision should belong to the person who will live with the consequences.

In fact, nothing does more to undermine the better ways of guiding behavior than reaching for the worst. If you try to coerce people into prudence or temperance, anyone with real backbone—the raw material of a strong, independent character—will push back. They won’t feel you have any right to control their self-regarding choices the way you have a right to stop them from harming you. And soon the rebellion turns into a performance: it becomes a badge of “spirit” to defy the busybodies and do, loudly and proudly, the exact opposite of what they demand. History has seen this swing like a pendulum—one era’s moral crackdown breeding the next era’s deliberate vulgarity.

People also argue, “Fine, maybe it mostly harms the person—but what about the bad example?” Yes, bad examples can be dangerous, especially when they show people harming others with impunity. But that’s not the case we’re discussing. We’re talking about conduct that wrongs no one else and is said to harm the actor. If that’s true, then the example, taken as a whole, should usually do more good than harm—because it doesn’t just display the behavior. It also displays the unpleasant or degrading consequences that tend to follow it. If the behavior deserves censure, then those consequences are part of what the censure is pointing to.

The strongest reason of all to keep the public from meddling with purely personal conduct is this: when the public does meddle, it usually meddles badly—and in exactly the wrong places.

On questions of duty to others, the judgment of the majority—public opinion in its “overruling” form—is often wrong, but probably still more often right. Why? Because in those cases people are mainly judging how certain kinds of conduct, if allowed, would affect them. They’re evaluating their own interests.

But when the majority tries to impose its judgment, as law or moral coercion, on questions of self-regarding conduct, it’s just as likely to be wrong as right. In those cases “public opinion” often means little more than this: some people’s view of what is good or bad for other people. And very often it doesn’t even mean that. Too often it means: “I don’t like it.”

Many people treat anything they find distasteful as an injury to themselves. They feel personally outraged by it. You see the same move in religious disputes: a zealot accuses others of offending religious feelings, and they reply, “No—you offend mine by practicing your disgusting faith.” But there’s no symmetry here. The feeling someone has about their own opinion isn’t morally equivalent to someone else’s irritation at the fact that they hold it. It’s the difference between a thief wanting a purse and the owner wanting to keep it. Your taste is your own concern, in the same way your opinion is your own concern—and your wallet is your own concern.

It’s easy to picture an ideal public that leaves individuals alone in doubtful matters, and only demands that they avoid behaviors that universal human experience has condemned. But where has anyone ever met such a public? When does “the public” honestly consult universal experience? Most of the time, when it meddles in private life, it’s not thinking about evidence or outcomes. It’s reacting to the “outrage” of someone acting or feeling differently from itself. And that standard—barely disguised—gets presented as the command of religion or philosophy by the vast majority of moralists and theorists.

They tell us, “This is right because it’s right,” which in practice means, “because we feel it’s right.” They urge everyone to search their own hearts for moral laws binding on themselves and everyone else. And what does the public do with advice like that? It treats its own feelings of good and evil—if they’re widespread enough—as laws for the entire world.

This problem isn’t just theoretical. You might expect me to list modern examples where people in my own time and country dress up their preferences as moral rules. I’m not writing a full essay on today’s moral blind spots—that’s too large a subject to tack on as a side note. But I do need examples, because the principle matters in real life. I’m not building defenses against imaginary threats. And examples aren’t hard to find: the urge to expand what you might call the moral police—to the point where it crowds out the clearest, most legitimate forms of individual freedom—is one of the most common human impulses there is.

Start with a simple, familiar kind of hostility: the dislike people feel toward others whose religious beliefs differ from theirs—especially when those others don’t follow the same religious rituals and abstinences.

Here’s a small example that reveals something large. Few things, in Christian belief or practice, stir more hatred in many Muslim communities than the fact that Christians eat pork. Europeans often feel deep disgust at practices they associate with Islam; many Muslims feel the same disgust, just as intensely, at this particular way of satisfying hunger. At first glance, you might say, “Well, their religion forbids it.” But that doesn’t explain the kind and degree of their revulsion. Their religion also forbids wine. Drinking it is considered wrong. Yet it typically isn’t seen as disgusting in the same way.

The pork taboo has a different emotional charge. It’s bound up with the idea of “uncleanness,” which, once it sinks into the imagination, can trigger something like instinctive disgust—even in people whose everyday habits aren’t especially hygienic. The intense Hindu sense of ritual impurity is a striking example of the same psychological mechanism.

Now imagine a country where the majority is Muslim, and that majority insists that pork must not be eaten anywhere within the nation’s borders. That’s not a far-fetched hypothetical; plenty of Muslim-majority countries have had rules like this. So the question is: would such a ban be a legitimate exercise of public moral authority? If not, why not?

Notice how hard it is to criticize it on the usual grounds. The practice really is revolting to the public. They also sincerely believe God forbids it. And you can’t fairly label it “religious persecution,” because nobody’s religion makes it a duty to eat pork. The rule would be religious in origin, but it wouldn’t be punishing anyone for their religion as such.

So if we condemn it, the only coherent reason is the one that really matters: the public has no business interfering with people’s personal tastes and self-regarding choices.

Bring the same point closer to home. A majority in Spain has long regarded it as a grave impiety—deeply offensive to God—to worship in any way other than Roman Catholicism, and has treated other public worship as unlawful on Spanish soil. Across much of Southern Europe, people have viewed a married clergy not merely as irreligious, but as indecent—unchaste, coarse, disgusting.

What do Protestants think of these feelings, honestly held though they may be, and of attempts to enforce them on non-Catholics? Yet if we accept the principle that people may interfere with each other’s freedom in matters that don’t affect others’ interests, how can we consistently exclude these cases? Why shouldn’t people try to suppress what they see as a scandal against God and society?

In fact, if you’re looking for a strong case—on the meddler’s own terms—for banning what’s labeled “personal immorality,” you won’t find one stronger than the case these persecutors can make for suppressing what they see as impiety. And unless we’re prepared to adopt the persecutor’s logic—“we may persecute because we’re right; you may not persecute because you’re wrong”—we should be extremely cautious about admitting a principle that we would rightly call tyrannical when it’s used against us.

Someone might dismiss those examples as too foreign to matter—unlikely in our society, where public opinion probably won’t enforce dietary abstinence or police worship, or punish people for marrying (or not marrying) in line with their beliefs.

Fine. Then consider an interference we are by no means safe from.

Whenever the Puritans have held enough power—think New England, or Britain during the Commonwealth—they have tried, often successfully, to stamp out public amusements and nearly all private ones as well: music, dancing, public games, social gatherings for fun, and the theater. Even now, there are large groups in this country whose religious and moral views condemn these recreations. And since these groups are largely rooted in the middle class, which carries a great deal of political and social influence, it’s not impossible that they could someday command a parliamentary majority.

How would everyone else feel if their allowable pleasures were regulated by the strict moral and religious preferences of hardline Calvinists and Methodists? Wouldn’t people bluntly tell these intrusively pious neighbors to mind their own business? That is exactly what should be said—always—to any government or public that claims no one should enjoy any pleasure that it thinks wrong.

Because once you accept the principle, you can’t complain when it’s enforced by whoever holds power. If we admit that the majority may veto harmless pleasures, then we must be prepared to live in a “Christian commonwealth” as the early New England settlers understood it—if a similar religious movement ever regains strength. And religions that seem to be fading have often revived, to everyone’s surprise.

Now consider another scenario—perhaps more plausible than a full Puritan revival. The modern world has a strong drift toward democracy in social life, whether or not it comes with democratic political institutions. People claim that in the United States, where society and government are among the most democratic, the majority’s discomfort with conspicuous wealth acts as an informal sumptuary law: in many places, someone with a huge income supposedly finds it difficult to spend it in any visible way without attracting public disapproval.

Maybe those reports are exaggerated. Still, the situation they describe is not only possible; it is a predictable outcome when democratic feeling combines with the belief that “the public” has a right to veto how individuals spend their money. Add one more ingredient—a wide spread of socialist opinion—and you can easily reach a point where it becomes “infamous,” in the majority’s eyes, to possess more than a tiny amount of property, or to have any income not earned by manual labor.

Ideas like that already circulate widely among parts of the working class, and they press hard on those whose conduct is judged mainly by that class—namely, its own members. In many industries, the less skilled majority of workers have been known to believe that poor work should be paid the same as good work, and that nobody should be allowed—through piecework or any other arrangement—to earn more by being more skilled or more industrious. They enforce this with a kind of moral police that sometimes slides into a physical one, intimidating skilled workers from accepting higher pay and employers from offering it.

And here’s the uncomfortable point: if the public truly has jurisdiction over private concerns, I don’t see how you can call these people “in the wrong,” or condemn any smaller public for asserting over an individual the same authority that the larger public asserts over everyone.

But we don’t need to rely on imagined futures. Even today, there are blatant invasions of private liberty actively practiced, and even larger ones openly threatened—with some real hope of success. And there are doctrines being pushed that claim the public has an unlimited right not only to ban by law everything it considers wrong, but—so it can get at what it considers wrong—to ban any number of things it admits are harmless.

Under the banner of “preventing drunkenness,” one English colony and nearly half the United States once passed laws that banned people from using fermented drinks for anything except medicine. In practice, outlawing the sale meant outlawing the use—because if you can’t buy it, you effectively can’t drink it. The laws proved hard to enforce, so several states repealed them, including the one that gave the movement its famous name. Still, a group of energetic reformers kept pushing for the same kind of law in England.

They formed an organization—an “Alliance,” as they called it—and got attention partly because of a public exchange between its secretary and one of the few English politicians who seemed to think principles should actually guide politics. Lord Stanley’s side of the correspondence made people hope he might be the real thing, simply because genuine principle is so rare in public life.

The Alliance’s spokesman insisted he would “deeply deplore” any principle that could be twisted into bigotry or persecution, and promised there was a “broad and impassable barrier” between persecution and what the Alliance wanted. Here was his big distinction:

  • Anything involving thought, opinion, or conscience should be outside the reach of law.
  • Anything involving social acts, habits, or relations should fall under the State’s discretion, not the individual’s.

Notice what’s missing: a third category—acts that are personal, not social. And surely drinking a glass of wine belongs there.

Yes, selling alcohol is a kind of trade, and trade affects other people. But the liberty being crushed by prohibition isn’t mainly the seller’s. It’s the buyer’s—the person who wants to drink. The State might as well directly forbid him to drink wine as make it impossible for him to obtain it.

The secretary replied with a sweeping claim: “As a citizen, I have a right to legislate whenever my social rights are invaded by the social act of another.” So what counts as an invasion of these “social rights”? He offered a definition that amounts to this:

  • Alcohol traffic violates my right to security, because it continually creates and fuels social disorder.
  • It violates my right to equality, because it profits from misery that I’m taxed to relieve.
  • It blocks my right to moral and intellectual development, by filling my life with temptations and by weakening the society from which I’m entitled to mutual support and community.

This is a theory of “social rights” so extreme it’s hard to believe it’s been stated so plainly before. Strip away the rhetoric and it says: I have an absolute right that everyone else behave exactly as they should. And if anyone falls short—even slightly—they violate my “social rights,” and I can demand that the legislature remove the “grievance.”

That principle is more dangerous than any single specific restriction. There is almost no interference with liberty it wouldn’t justify. It leaves you, in effect, no real freedom at all—except maybe the freedom to keep your opinions locked inside your skull. Because the moment you say something I consider harmful, you’ve “invaded” my social rights under this doctrine. It treats all of humanity as having a legally enforceable stake in one another’s moral, intellectual, and even physical “perfection”—with perfection defined, conveniently, by whoever is complaining.


Another major example of society overstepping its rightful limits isn’t merely threatened; it has already been imposed for a long time and celebrated as a victory: Sabbatarian laws.

Setting aside one day a week, as life allows, from ordinary work is—whatever its religious origin—a highly beneficial habit. And since working people can’t keep such a day unless there’s broad agreement to do so, there’s a narrow sense in which law can help: if some people’s working would force others to work too, then it may be reasonable for law to secure the custom for everyone by pausing the major machinery of industry on a particular day.

But that justification has a limit. It’s based on other people’s direct interest in the general practice, and it does not extend to what someone chooses to do in their leisure. It doesn’t apply, even a little, to legal restrictions on amusements.

It’s true that one person’s amusement can be another person’s work. But the enjoyment—and, often, the genuine refreshment—of many can be worth the labor of a few, as long as that labor is freely chosen and can be freely left. Workers are right to worry that if everyone worked on Sunday, what is now six days’ wages would be stretched into seven days’ work. But as long as most jobs are paused, the smaller number who do work for others’ enjoyment can earn proportionally more—and they are not forced into those jobs if they’d rather have leisure than extra pay.

If you want a better fix, look to custom: give those particular classes of workers a day off elsewhere in the week.

So if someone insists on restricting Sunday amusements anyway, the only remaining defense is that those amusements are religiously wrong. And that is exactly the kind of motive for legislation that deserves the strongest protest. Let the gods deal with offenses against the gods. It still has to be shown that society—or any official—has been appointed from on high to punish “sins” against Omnipotence that do not also harm other human beings.

The belief that it is one person’s duty to make another person religious is the root of every religious persecution in history. Admit that belief, and you can justify all of them.

Modern attempts—like trying to halt Sunday railway travel, or resisting the opening of museums—don’t usually have the old persecutors’ cruelty. But the mindset is the same at its core: refusing to tolerate others doing what their religion permits because it isn’t permitted by the persecutor’s religion. It’s the conviction that God not only hates the unbeliever’s act, but will blame us if we allow the unbeliever to live in peace.


I can’t resist adding one more example of how cheaply people often treat human liberty: the tone of outright persecution that shows up in parts of the press whenever it feels obliged to mention Mormonism.

There’s a lot one could say about the strange, instructive fact that a supposed new revelation—a religion built on what looks like obvious fraud, without even the glow of an extraordinary founder—could win hundreds of thousands of believers and form a society in the age of newspapers, railways, and the electric telegraph. But what matters here is simpler.

This religion, like many better ones, has had its martyrs. Its prophet and founder was killed by a mob for his teaching. Other followers died in the same lawless violence. The group was driven out, all at once, from the country where it began. And now that they’ve been pushed into an isolated refuge in the middle of a desert, many people in this country openly say it would be right—if only it were convenient—to send an expedition against them and force them to conform to everyone else’s opinions.

What most inflames this hostility is their approval of polygamy. Polygamy is permitted among Muslims, Hindus, and Chinese, yet it becomes a special object of fury when practiced by English-speaking people who call themselves a kind of Christian.

No one disapproves of this institution more than I do—both for other reasons, and because it is not supported by the principle of liberty at all. It directly violates that principle. It fastens chains on one half of the community and releases the other half from equal obligations in return.

Still, we must remember something awkward but relevant: for the women involved—who may be the ones harmed—this arrangement is as voluntary as participation in any other form of marriage. That may sound surprising, but it’s explained by the world’s common ideas and customs: women are taught that marriage is the necessary thing, and in that mental world it makes sense that many would rather be one of several wives than not be a wife at all.

No one is asking other countries to recognize such unions, or to excuse any of their citizens from their own laws because of Mormon beliefs. But when the dissenters have yielded to hostile feeling far more than justice requires—when they have left places where their doctrines were unwelcome, and settled in a remote corner of the earth, making it livable where they were the first to do so—it’s hard to see how any principle other than tyranny can justify preventing them from living there under whatever laws they choose, provided they commit no aggression against other nations and allow perfect freedom for anyone who wants to leave.

A recent writer—able, in some ways—suggests what he calls not a crusade but a “civilizade” against this polygamous community, to end what he sees as a backward step in civilization. I agree it’s backward. But I don’t know of any rule that gives one community the right to force another to be “civilized.”

As long as the people suffering under a bad local law do not ask other societies for help, outsiders with no real connection to them have no right to step in and demand that a way of life—one that all the directly interested parties appear to accept—be abolished simply because it offends spectators thousands of miles away.

If critics want to act, let them do it in legitimate ways:

  • Send missionaries, if they like, to argue against it.
  • Use any fair means to resist the spread of similar doctrines among their own people.
  • But do not silence teachers, and do not use force.

If civilization defeated barbarism when barbarism once dominated the world, it’s absurd to pretend we should now live in fear that barbarism—after being genuinely subdued—will suddenly revive and conquer civilization. A civilization that can be overthrown by its conquered enemy must already have decayed so far that neither its clergy, nor its teachers, nor anyone else has the strength—or even the will—to defend it. And if that is the case, the sooner such a civilization is told to clear out, the better. It will only rot further until it is destroyed and then rebuilt—like the Western Empire—by energetic barbarians.


The case of the Bombay Parsees is a revealing example. This hard-working, enterprising group—descended from Persian fire-worshippers fleeing their homeland before the Caliphs—arrived in Western India and were tolerated by Hindu rulers on one condition: they would not eat beef. Later, when the region came under Muslim rule, the same group secured continued tolerance on a different condition: they would refrain from pork.

What began as obedience to authority turned into habit, and habit turned into identity. The Parsees to this day avoid both beef and pork. Their religion does not require this double abstinence, but over time it became a tribal custom—and in the East, custom can become indistinguishable from religion.

V — Applications


Before you can apply the principles argued in this essay consistently across every corner of government and morality, people have to accept them more broadly as the starting point for discussing details. So the few remarks on particular cases that I'll make here aren't meant to chase the doctrine all the way to its farthest consequences. They're illustrations—samples of how the principles work—meant to sharpen what the two central maxims really mean, where they stop, and how to weigh them when it isn't obvious which one applies.

Here are the two maxims:

  • First: A person isn’t answerable to society for actions that concern only the person himself. In those cases, society’s legitimate tools are limited to non-coercive responses: advice, instruction, persuasion, and—if others think it necessary for their own well-being—choosing to avoid him.
  • Second: When a person’s actions harm the interests of others, he is answerable, and society may impose social penalties or legal punishment if it judges that some such response is needed for protection.

Harm matters—but it doesn’t settle everything

Don’t jump from “society may interfere only to prevent harm to others” to “society should interfere whenever harm to others is involved.” That’s not true. In real life, people regularly cause each other pain or loss while doing perfectly legitimate things. Sometimes they “block” a benefit someone else reasonably hoped to get, not because they cheated, but because they succeeded.

That kind of collision of interests can be made worse by bad social institutions, and it can also arise even under good ones. Think about it:

  • Someone succeeds in an overcrowded profession.
  • Someone wins a competitive exam.
  • Someone is chosen over another in a contest for a prize both wanted.

In each case, one person’s gain is tied to someone else’s disappointment, wasted effort, or lost opportunity. Yet almost everyone agrees that, for humanity as a whole, it’s better that people pursue worthwhile goals without being stopped by the mere fact that others will be disappointed. In other words, society doesn’t grant disappointed competitors any legal or moral right to be spared this kind of suffering. Society steps in only when the winner used methods it’s against the general interest to allow—namely fraud, treachery, or force.

Trade: a social act, but not always a liberty issue

Buying and selling isn’t purely private. If you sell goods to the public, what you do affects other people and society at large. In that sense, your conduct falls under society’s jurisdiction. That’s why, historically, governments often treated it as their duty—whenever a case seemed important—to fix prices and regulate manufacturing.

But after a long struggle, it became widely recognized that low prices and good quality are most reliably achieved by leaving producers and sellers free, checked only by the buyers’ equal freedom to take their business elsewhere. That’s what people call Free Trade. It rests on reasons different from—but just as solid as—the liberty principle defended in this essay.

Restrictions on trade, or on producing goods for trade, are certainly restraints, and restraint as restraint is an evil. Still, these restraints target behavior society is competent to restrain (because it affects others). They’re wrong in most cases for a practical reason: they don’t actually deliver the benefits they promise.

Just as the liberty principle isn’t the foundation of Free Trade, it also isn’t the foundation of most disputes about how far Free Trade should go—for example:

  • What level of public oversight is acceptable to prevent fraud by adulteration?
  • How far should public health rules go?
  • What protections should employers be required to provide for workers in dangerous jobs?

These questions involve liberty only in the general sense that, all else equal, it’s usually better to leave people alone than to control them. But the basic idea that society may legitimately regulate for these ends isn’t really disputable.

When trade regulation is a liberty issue: controlling the buyer

Some forms of “trade interference” are different in kind. They’re essentially liberty questions because their goal isn’t to ensure honest dealing or safe conditions—it’s to make it impossible or difficult to obtain a particular good. Examples include:

  • prohibition laws like the Maine Law
  • banning the import of opium into China
  • restricting the sale of poisons

In such cases, the most serious objection isn’t that the producer or seller loses freedom. It’s that the buyer’s liberty is being limited.

Poisons and the real job of “police”: prevention without tyranny

The poison example raises a broader question: what are the proper limits of what we might call the police function—how far can society invade liberty to prevent crime or accidents?

It’s undisputed that government may take precautions before a crime occurs, not only detect and punish afterward. But the preventive power is far easier to abuse than the punitive power. Almost any ordinary freedom can be described—often plausibly—as “making crime easier.” If you let that logic run loose, nearly nothing remains safe from regulation.

Still, there are obvious cases where intervention is justified. If a public authority—or even a private citizen—sees someone clearly preparing to commit a crime, they aren’t required to stand by until harm is done. They may step in to prevent it.

If poisons were bought for one purpose only—murder—then prohibiting their manufacture and sale would be right. But poisons can also be used innocently, even usefully. So restrictions aimed at the criminal use will inevitably burden legitimate use as well.

The same structure appears with accidents. If someone is about to cross a bridge known to be unsafe, and there’s no time to warn him, you may physically stop him without any real violation of liberty—because liberty is the ability to do what you want, and he doesn’t want to fall into the river.

But what if the danger isn’t certain—only a risk? Then, except in special cases (a child, someone delirious, or someone so excited or absorbed that they can’t think clearly), only the person himself can judge whether the risk is worth it. In that situation, the proper response is to warn, not to compel.

Apply that reasoning to poison sales, and you can separate legitimate regulations from illegitimate ones. For example:

  • Requiring clear warning labels on a poison doesn’t violate liberty. A buyer can’t reasonably want to remain ignorant of what he’s holding.
  • Requiring a doctor’s certificate in every case, on the other hand, would sometimes make purchase impossible and would always make it more expensive—even when the poison is needed for legitimate uses.

So what can society do that meaningfully deters criminal use while barely burdening legitimate use? The best option is what Bentham called “preappointed evidence”: rules that create reliable records in advance, so that wrongdoing becomes harder to hide.

We already use this idea in contract law. The law may demand signatures, witnesses, or other formalities before it will enforce a contract. The point isn’t to stop legitimate contracts; it’s to ensure that, if there’s later a dispute, there’s proof the agreement was real and valid—and to make fake or coercive contracts harder to pass off as genuine.

Similar precautions could govern sales of items that can become instruments of crime. For example, the seller could be required to record:

  • the exact time of the transaction
  • the buyer’s name and address
  • the precise type and amount sold
  • the purpose the buyer states for the purchase, recorded exactly as given

And if there’s no medical prescription, a third person could be required to be present—so that, if the poison is later suspected in a crime, the purchase can’t be denied or quietly erased. Rules like these usually wouldn’t make it much harder for honest buyers to get what they need, but they would make it much harder to use the substance for wrongdoing without leaving a trail.

“Self-regarding” conduct has limits when it reliably spills into harm

Society’s right to prevent crimes against itself also shows where the “self-regarding” maxim must be limited. Some conduct looks self-regarding in the abstract, but in particular people, under particular conditions, it becomes a predictable threat to others.

Take drunkenness. In ordinary cases, it’s not a fit subject for law. But suppose someone has been convicted of violence against others while intoxicated. Then it seems entirely legitimate to impose a special legal restriction on that individual: if he is later found drunk, he can be penalized; and if, while drunk, he commits another offense, the punishment for that offense can rightly be increased. In that person, getting drunk isn’t merely self-harm—it’s part of a pattern that endangers others.

Or take idleness. Except for someone supported by the public, or someone who is breaking a contract, it would be tyrannical to punish idleness as such. But if, through idleness or any other avoidable cause, a person fails to perform a legal duty to others—like supporting his children—it isn’t tyranny to compel him to meet that obligation, even by compulsory labor if no other means will work.

Public indecency: private harm becomes a public offense

There are also acts that harm only the person doing them and therefore shouldn’t be legally forbidden—yet become legitimate targets of law when done publicly, because public performance turns them into an offense against others. This is the category of offenses against decency. It isn’t necessary to dwell on the details here. The key point is that the objection is to the publicity—and that same objection often applies to many actions that aren’t wrong in themselves and aren’t generally thought to be.

Can you do it freely—but not recommend it?

Another hard question follows directly from these principles. Suppose a type of personal conduct is considered blameworthy, but respect for liberty prevents society from banning or punishing it because the harm falls only on the agent. If the agent is free to do it, are other people equally free to counsel or encourage it?

This isn’t an easy problem. At first glance, urging someone to act isn’t purely self-regarding. Giving advice, offering incentives, and trying to influence another person are social acts, and social acts are generally subject to social control.

But with a bit more reflection, you see that even if “instigation” doesn’t perfectly fit the definition of self-regarding liberty, the reasons behind the liberty principle still apply. If people must be allowed—at their own risk—to do what they think best in matters that concern only themselves, then they must also be free to talk with one another about those matters: to exchange opinions, trade suggestions, and consult.

In short: whatever it is permitted to do, it must also be permitted to advise.

The case becomes doubtful only when the encourager is not disinterested—when he profits personally, and even makes it his business to promote what society and the state regard as an evil. That introduces a new complication: the existence of a class of people whose livelihood depends on pushing conduct believed to run against the public good.

So what about acts society must tolerate—say, fornication or gambling—but where a third party earns money by facilitating them? Should a person be free to be a pimp, or to run a gambling house?

This sits exactly on the boundary between the two principles, and it isn’t immediately clear where it belongs. There are serious arguments on both sides.

For toleration, one can say:

  • Making an activity someone’s occupation—earning a living from it—can’t turn into a crime what would otherwise be permissible.
  • The law should be consistent: either permit the act or prohibit it.
  • If the principles defended so far are right, society has no business, as society, deciding that conduct is wrong when it concerns only the individual.
  • Society can go no further than dissuasion; and persuasion in one direction should be as free as persuasion in the other.

Against toleration, one can argue:

  • Even if the state isn’t justified in definitively declaring (for punishment’s sake) that a self-regarding act is good or bad, it may still assume—if it considers it bad—that the matter is at least debatable.
  • If so, the state may reasonably try to reduce the influence of solicitations that are plainly interested, made by people who cannot be impartial because they profit from one side—the side the state believes to be wrong.
  • Nothing of value is lost by arranging things so that people choose, wisely or foolishly, as far as possible on their own prompting, free from the “arts” of those who stimulate appetite for their own gain.

From this perspective, one might say: although laws against “unlawful games” are indefensible—although people should be free to gamble in their own homes, or in member-only clubs formed by subscription and open only to members and guests—public gambling houses should not be permitted.

It’s true that such bans are never fully effective. No matter how much power you give the police (often tyrannically), gambling houses can usually keep operating under other names. But the law can still push them into greater secrecy, so that only people who actively seek them out know where they are. And, on this view, society shouldn’t aim for more than that.

There is real force in these arguments. I won’t try to decide here whether they justify the moral oddity of punishing the accessory while the principal must go free—fining or imprisoning the procurer but not the fornicator, the gambling-house keeper but not the gambler.

Still less should society interfere, on similar grounds, with the ordinary practice of buying and selling. Almost anything that is bought and sold can be used to excess, and sellers have a financial interest in encouraging that excess. But that fact can’t support, for instance, prohibition laws like the Maine Law—because dealers in strong drink, even though they may benefit from abuse, are also necessary for legitimate use.

That said, the dealers’ interest in promoting excessive drinking is a real evil, and it can justify the state in imposing restrictions and demanding guarantees that, without that justification, would count as infringements of legitimate liberty.

A real question is what the State should do about choices it believes are bad for the person making them. Even if the law allows the behavior, should the State still try to nudge people away from it—say, by making alcohol more expensive through taxes, or by reducing the number of places that can sell it?

On practical questions like this, the details matter. A tax designed only to make drinking harder is basically prohibition by another name—just softer. Raise the price and you’ve effectively banned alcohol for anyone who can’t afford the increase. For those who can afford it, you’ve slapped them with a penalty for enjoying a particular pleasure. And once someone has met their legal and moral duties to the State and to other people, what they do for fun—and how they spend their money—is their business, not the government’s. In that light, it can look unfair to single out “stimulants” as special targets for taxation.

But here’s the complication: taxes aren’t optional. Governments have to raise revenue, and in most countries a large chunk of that comes from indirect taxes on consumption. The State can’t avoid making some goods more expensive. That means it can’t avoid, in practice, discouraging the use of some items—sometimes so much that, for poorer consumers, the “discouragement” functions as a real ban.

So the State has an obligation to think carefully about which goods it taxes. It should pick, as much as it can, items people can best do without—and, even more, items whose heavy use it believes is plainly harmful. On that logic, taxing alcohol and similar stimulants is not just permissible; it’s often the right choice, at least up to the point that maximizes revenue (assuming the State actually needs the money the tax brings in).


Licensing and limits on sale

A different issue is whether selling these goods should be treated as a privilege, tightly controlled through licensing, or restricted in other ways. The answer depends on what the restriction is trying to accomplish.

Places where people gather in public always need some level of oversight, and drinking venues especially so, because they’re common starting points for fights, disorder, and other offenses. For that reason, it makes sense to:

  • Allow on-site alcohol sales only by people of known or vouched-for good conduct
  • Set opening and closing hours that make effective supervision possible
  • Revoke a license when a place repeatedly becomes a source of disorder—whether because the owner looks the other way, can’t manage it, or actively turns it into a meeting point for planning crimes

Beyond measures like these—aimed at keeping public order—I don’t see further restriction as justified in principle. For example, deliberately limiting the number of beerhouses or spirit shops mainly to make them harder to reach, and to reduce temptation, does two problematic things at once:

  • It burdens everyone for the sake of restraining a subset of people who might misuse easy access.
  • It only really fits a society that treats working people as if they were children—or “savages,” in the ugly language of older political theory—kept under constant restraint as a kind of training program before they’re allowed the privileges of freedom.

No genuinely free country claims to govern its working class that way. And anyone who values freedom shouldn’t accept such a model unless two things have happened: first, serious efforts have been made to educate people for freedom and to govern them as free adults; second, it has been proven beyond doubt that those efforts cannot succeed and that people can only be governed like children. Just stating that alternative shows how absurd it would be to pretend those efforts have been fully tried in the cases we’re talking about.

In practice, these contradictions survive only because many political systems are patchworks: they talk like free societies, but they still smuggle in pieces of despotic or so-called paternal government. And because a broadly free constitution blocks the State from exercising the full control that paternalism would require, the restraint that does get imposed is too weak to educate anyone morally—yet still intrusive enough to be insulting and inconsistent.


Freedom, agreements, and the problem of “selling yourself”

Earlier, I argued that if a person is free to decide what to do in matters that concern only themselves, then groups of people should also be free to manage, by mutual agreement, matters that concern only them and nobody else. That’s straightforward as long as everyone’s will stays the same. But people change their minds. So even in private matters, people often need to make agreements with each other—and as a general rule, it’s right that agreements be kept.

Still, every legal system treats that general rule as having limits. Some promises aren’t binding—especially when they would violate the rights of other people. And sometimes the law even treats “this would be bad for you” as a reason to void a commitment.

The clearest example is slavery by consent. In most civilized countries, a contract in which someone sells themselves—or allows themselves to be sold—into slavery is invalid. The law won't enforce it, and public opinion won't honor it either.

Why? The usual reason for leaving a person alone in their voluntary choices is respect for their liberty. If they choose something freely, that’s evidence it’s what they want—or at least something they can tolerate. And, overall, we normally assume their good is best served by letting them pursue it in their own way.

But when a person sells themselves into slavery, they do something different: they give up liberty itself. They use freedom once, to destroy freedom forever after. That defeats the very reason we protect voluntary choice in the first place. After that contract, they aren’t a free person choosing a life; they’re stuck in a condition where the usual presumption in favor of voluntary consent no longer applies.

So the principle of freedom can’t require that a person be “free” to make themselves permanently unfree. It isn’t freedom to be permitted to sign away freedom.

This logic obviously reaches beyond slavery. Yet in ordinary life we constantly accept limits on our freedom. Not total surrender—but real constraints: jobs, debts, promises, obligations. The necessities of living together demand them.

Still, the same principle that supports broad freedom in purely self-regarding matters also suggests something important about agreements that affect only the people who made them: those people should be able, by mutual consent, to release each other from the agreement. And even when both parties don’t agree to release, it’s hard to claim that all contracts—except perhaps those involving money or a clear monetary equivalent—should be absolutely irreversible in every circumstance.

Baron Wilhelm von Humboldt, in an essay I’ve already drawn on, goes further. He argues that commitments involving personal relations or personal services should never be legally binding beyond a limited time. And he claims that the most significant of these—marriage—has a special feature: its aims collapse unless both people’s feelings support it, so it should take no more than one person’s declared will to dissolve it.

That question is too important and complicated to settle in a quick aside, so I mention it only as an illustration. But it’s also clear that Humboldt’s argument can’t be decided on the simple grounds he gives, at least not without more work than he provides.

Here’s what he leaves out. When someone—by a promise, or even just by their behavior—leads another person to rely on them, a new layer of moral obligation arises. If you encourage someone to build expectations, make plans, or stake part of their life on the assumption that you’ll continue in a certain way, you can’t treat changing your mind as morally weightless. Those obligations might be outweighed in some cases, but they can’t be ignored.

And there’s more. Relationships and contracts often create consequences for other people. They can put third parties in new positions, or—as in marriage—bring entirely new people into existence. That creates obligations by both parties to those third persons. Fulfilling those obligations, or at least deciding how to fulfill them, will often depend heavily on whether the original relationship continues or breaks.

This doesn’t mean you must enforce the contract at any cost to the unwilling person’s happiness. But it does mean these third-party interests are an essential part of the moral picture. Even if, as Humboldt suggests, they shouldn’t count for much in the legal freedom of the parties to separate (and I agree they shouldn’t count for too much), they make a major difference to moral freedom. Before taking a step that could deeply affect others, a person is bound to weigh those interests seriously. If they refuse to do so—if they treat others’ stakes as negligible—they bear moral responsibility for the harm they cause.

I’m spelling this out not because it’s controversial on the particular topic, but because it helps clarify the general idea of liberty—and because debates in this area are often framed as if children’s interests are everything and adults’ interests are nothing.


When “liberty” becomes an excuse for power

I’ve already said that, because people lack clear, consistent principles, societies often grant freedom where they shouldn’t—and deny it where they should. One case where modern Europeans feel most passionately about liberty is, in my view, one of the most misplaced.

A person should be free to do what they want in their own life. But they should not be free to do whatever they want on behalf of someone else, while pretending it’s “their own concern.”

If the State respects each person’s freedom in self-regarding matters, it also has a duty to keep close watch over how any person uses the power the law allows them to hold over others.

Nowhere is this duty more neglected than in family relationships—even though their direct impact on human happiness may outweigh almost everything else. The near-despotic power husbands have historically held over wives doesn’t need long discussion here: the cure is straightforward. Give wives equal rights and equal legal protection, like everyone else. Notably, defenders of that injustice rarely hide behind “liberty”; they defend power openly.

But with children, distorted ideas of liberty become a serious obstacle. Public opinion often reacts as if a man’s children were literally part of his own person, not just “his” in a metaphorical sense. People are astonishingly jealous of any legal interference with a father’s absolute control over his children—more jealous than they are of many intrusions on his personal freedom. That tells you something uncomfortable: many people value power more than liberty.

Consider education. Isn’t it almost self-evident that the State should require every child—every future citizen—to receive an education up to some minimum standard? And yet, who isn’t nervous to say that out loud?

Almost everyone agrees, in theory, that it’s a sacred parental duty (or, as law and custom often frame it, the father’s duty) to educate a child well enough for life—both for the child’s own sake and for the sake of others. But in practice, in this country, people can’t bear the idea of forcing a parent to do it. Instead of requiring effort or sacrifice from the parent, the State offers schooling for free and leaves it up to the parent to accept or refuse.

What remains largely unrecognized is this: bringing a child into the world without a fair prospect of being able to provide not only food, but also instruction and mental training, is a moral crime—against the child and against society. And if a parent fails in that obligation, the State should ensure it is fulfilled, charging the cost to the parent as far as possible.


Compulsory education doesn’t mean State-controlled education

Once you accept the duty of enforcing universal education, a lot of the current chaos disappears. Right now, arguments over what education should teach—and who should control it—turn schooling into a battlefield of sects and parties. Time and effort that should go into educating children gets wasted in endless quarrels about education.

If the government simply required every child to get a good education, it could avoid much of the burden of providing that education itself. It could:

  • Leave parents free to choose where and how their children are educated
  • Help pay school fees for poor families
  • Cover the full cost for children with nobody able to pay for them

Many objections to “State education” are perfectly reasonable—but those objections target the State running and directing education, not the State requiring education. Those are different.

I’m as opposed as anyone to putting the whole education of a people— or any large share of it—into government hands. If individuality of character and diversity of opinion and ways of living matter (and they do), then diversity of education matters too. A single national system of schooling is an efficient machine for making people alike. And the shape of that machine is set by whoever holds the dominant power—whether a monarch, a priesthood, an aristocracy, or simply the majority of the present generation, in proportion as it can enforce its will. That kind of schooling naturally becomes a despotism over the mind, and from there it tends to become a despotism over the body as well.

If State-run education exists at all, it should exist only as one experiment among many, competing with others—serving as an example, a spur, and a way to keep standards from collapsing. There is one exception: if a society is so backward that it cannot, or will not, create proper educational institutions on its own. In that case, government may have to take on schools and universities as the lesser of two evils—just as it may sometimes have to take on other large enterprises when private initiative in the needed form doesn’t exist.

But in a society that does contain enough people capable of providing education, those same people would typically be able and willing to offer it voluntarily—so long as the law makes education compulsory and ensures payment through a mix of family responsibility and State aid for those who can’t afford it.


How to enforce it: public examinations

If education is compulsory, what enforces the rule? The natural instrument is public examinations—for all children, starting young.

For example:

  • Set an age by which every child must be tested to see whether they can read.
  • If a child can’t read, fine the father unless he has a valid excuse.
  • If needed, require the fine to be worked off through labor.
  • Send the child to school at the father’s expense.

Then repeat the examination yearly, expanding the tested subjects over time. The point is to make not just the acquisition but also the retention of a basic minimum of general knowledge effectively mandatory.

Beyond that minimum, offer optional examinations in any subject. Anyone who reaches a defined level of proficiency could claim an official certificate.

To keep the state from using examinations as a back door for shaping people’s beliefs, the content tested—once you get past the purely practical tools of learning like languages—should be limited, even at the highest levels, to facts and settled science. If an exam touches religion, politics, or any other disputed territory, it shouldn’t ask, “Which view is true?” It should ask, “Who holds which view, and what reasons do they give?” In other words, test knowledge about opinions, not loyalty to any of them.

Run things that way, and the next generation won’t be at a disadvantage on controversial questions compared to where we are now. Kids will still grow up as church members or dissenters—just as they do today—but the state’s only role would be to make sure they’re informed church members or informed dissenters. Nothing would stop parents from choosing religious instruction for their children, even in the same schools where they learn everything else.

Trying to push citizens toward specific conclusions on disputed subjects is always harmful. But it can absolutely make sense for the state to verify and certify that someone has learned enough for their opinions to be worth taking seriously. A philosophy student benefits from being able to pass an exam on both Locke and Kant—whether they end up agreeing with one, the other, or neither. And there’s no sensible reason you can’t examine an atheist on the evidence for Christianity, so long as you don’t demand that they profess belief in it.

Even so, I think advanced examinations should be entirely voluntary. It would be far too dangerous to let governments keep people out of professions—including teaching—by claiming they lack qualifications. In Humboldt’s spirit: give degrees and public certificates to anyone who chooses to be examined and can meet the standard, but don’t let those certificates carry any built-in legal advantage over competitors—only whatever weight public opinion chooses to give them.


Bad ideas about liberty distort more than education. They also help people ignore the real moral duties of parents—and block the law from recognizing those duties in cases where the reasons are strongest.

Bringing a human being into existence is one of the most serious actions a person can take. To take on that responsibility—to hand someone a life that could become either a gift or a disaster—without ensuring at least the ordinary chances of a decent existence is a wrong done to that future person.

And in a country that is already overcrowded, or clearly headed that way, having children beyond a very small number—when the result is intensified competition for work and reduced wages—becomes a serious offense against everyone who lives by their labor. That’s why the continental laws that forbid marriage unless the couple can show they can support a family do not exceed the state’s legitimate powers. Whether such laws are wise is a separate question, mostly dependent on local conditions and public feeling. But they aren’t objectionable simply because they “violate liberty.” They are the state preventing a harmful act—an act that injures others—and one that deserves social condemnation even when it isn’t punished by law.

What’s strange is how people’s sense of “liberty” twists itself into knots. They’ll accept genuine intrusions on personal freedom in matters that affect only the individual, yet bristle at any restraint when indulging private desire predictably produces lives of misery and corruption for children—and ripples of harm for everyone close enough to be affected. Sometimes you’d think society believes a person has an absolute right to harm others, and almost no right to please themselves without hurting anyone.


I’ve saved for last a broad set of questions about the proper limits of government action—questions closely related to this essay, but not strictly part of the liberty principle. These are not cases where the state restrains people. They’re cases where it helps them. The question isn’t, “Should government stop you from doing this?” It’s, “Should government do this for you—or arrange for it to be done—instead of leaving it to individuals, alone or in voluntary groups?”

Even when no one’s liberty is being infringed, there are still objections to government stepping in. They come in three main forms.

1) Individuals will often do it better.
As a rule, the people best suited to run a business—or decide how it should be run—are those who have a direct stake in it. That principle condemns the old-fashioned meddling of legislatures and officials in the ordinary workings of industry. Economists have said plenty about this, and it’s not uniquely tied to the themes of this essay.

2) Even if government could do it “better,” it may be better for people to do it themselves.
Sometimes officials might do a task more competently on average. But it can still be valuable to leave it to citizens because of what it does for them: it educates the mind, strengthens initiative, exercises judgment, and builds practical familiarity with real-world problems. This is a major (though not the only) argument for things like:

  • jury trials (in ordinary, non-political cases)
  • local and municipal self-government
  • voluntary associations running industrial and philanthropic projects

These aren’t liberty questions in the narrow sense. They’re development questions—about training people to be capable citizens. That civic training matters: it pulls people out of the tight orbit of private and family interest and gets them used to thinking in terms of shared goods, shared responsibilities, and motives that connect rather than isolate them.

Without those habits and abilities, a free constitution can’t be run—or preserved. That’s why political freedom so often proves temporary in places where it isn’t supported by strong local liberties. Let localities handle local business. Let large industrial enterprises be managed by the voluntary union of those who fund them. This doesn’t just work efficiently; it supports what I’ve argued throughout this essay about individual development and diversity of action.

Government action tends to standardize everything. Individuals and voluntary groups, by contrast, run many different experiments, producing endless variety in experience. The most useful role the state can play here is not to be the only experimenter, but to act as a central storehouse and distributor of information—collecting lessons from many trials and spreading them so each innovator can learn from the rest, instead of having society tolerate no experiments except the government’s own.

3) The biggest danger is expanding government power more than necessary.
Every new function piled onto government increases its influence over people’s hopes and fears. It steadily turns the most active and ambitious citizens into dependents—people who hover around government, or around whichever party aims to become the government.

Imagine the roads, railways, banks, insurance firms, major joint-stock companies, universities, and public charities all becoming government branches. Imagine, too, municipal corporations and local boards turned into departments of a centralized administration. Imagine every employee in all these enterprises appointed and paid by government, and looking to government for every promotion. In that world, even complete freedom of the press and a popularly elected legislature wouldn’t make a country genuinely free—free in anything but name.

And the more efficient and “scientific” the administrative machine, the worse the danger. If the system were engineered to recruit the best minds and run with maximum competence, it could become even more suffocating.

England has recently debated selecting all civil servants by competitive examination, to staff government with the most intelligent and educated people available. Much has been said on both sides. One argument often used by opponents is that a permanent government post doesn’t offer enough money or prestige to attract the very highest talent—since those people can find better careers in the professions or in companies and other public bodies.

If that’s true, it’s not a fatal flaw. It’s a safety valve.

Because if the country’s greatest talent were drawn wholesale into government service, we should worry. Suppose every part of society’s business that requires large-scale coordination or broad, organized thinking were handed to the state, and every office filled by the ablest men. Then almost all practical intelligence and cultivated ability—except purely speculative thought—would be concentrated in a large bureaucracy. Everyone else would look to it for everything: the many for direction, the ambitious for advancement. Getting into the bureaucracy, and rising within it, would become the dominant object of desire.

Under that kind of regime, the public outside government would be badly equipped to criticize or restrain the bureaucracy, because it would lack practical experience. And even if—by accident, by despotism, or by the ordinary workings of popular politics—a reforming ruler reached the top, they still couldn’t carry out reforms that ran against the bureaucracy’s interests.

This is the bleak situation described in accounts of Russia. The Czar may have the power to exile an official to Siberia, but he cannot govern without the bureaucratic body, or against its will. They possess a quiet veto over every decree simply by failing to enforce it.

In more advanced and more rebellious societies, the dynamic changes shape but not substance. When people become used to expecting the state to do everything for them—or at least to do nothing without asking the state not only for permission, but for instructions—they naturally blame the state for every misfortune. When suffering passes their tolerance, they revolt and call it a revolution. Then someone else, with or without legitimate authority, takes the seat of power, issues orders to the same bureaucracy, and life continues much as before. The bureaucracy remains; no one else knows how to replace it.

Now compare that with a society where people are accustomed to handling their own affairs. France offers one example: because so many citizens have served in the military, and many have held at least junior leadership roles, popular uprisings there often include people capable of taking charge and improvising a workable plan. But the Americans go further. In civil life, they tend to be what the French are in war. Leave Americans without a government, and almost any group of them can improvise one and run public business with a decent level of intelligence, order, and decisiveness.

That capacity is what a free people ought to have. And a people capable of it will remain free. They won’t allow themselves to be harnessed by any person or clique merely because that person or clique can seize the central administrative reins. No bureaucracy can hope to force such a people to do what they don’t want.

But where everything runs through a bureaucracy, nothing the bureaucracy truly opposes can be done at all. The constitution of such a country amounts to this: it organizes the nation’s experience and practical capacity into a disciplined governing body whose purpose is to rule the rest. And the more perfect this organization becomes—especially the more it succeeds in attracting and training the most capable people from every rank—the more complete the bondage, even for the bureaucrats themselves.

Because the rulers are also enslaved: not by the ruled, but by the machinery, discipline, and habits of their own system. A high official in an imperial despotism is as much its creature as the poorest farmer. A Jesuit is utterly submissive to his order, even though the order itself exists for the sake of the collective power and importance of its members.

One more point matters: concentrating a country’s best ability inside government eventually damages government itself. A body of officials, working within a system—and all systems rely heavily on fixed rules—feels constant pressure to sink into lazy routine. Or, when it breaks routine, it may lurch into some half-tested scheme that has captured the imagination of a leading member.

The only reliable check on those twin dangers—and the only stimulus that keeps the governing body’s ability sharp—is sustained criticism from outside by people of comparable ability. So it is essential that strong minds be formed independently of government, and that they have real opportunities and experience, so they can judge great practical affairs competently.

If we want an efficient, capable class of public servants—especially one that can originate and adopt improvements—we must not let it monopolize all the careers and activities that develop the faculties needed to govern. Otherwise the bureaucracy will decay into a pedantocracy: rule by trained but narrow minds.


Where, then, is the boundary? At what point do the dangers to freedom and human progress begin to outweigh the benefits of society acting collectively, through recognized leaders, to remove obstacles to well-being? How do we secure as much advantage as centralized power and intelligence can give, without channeling too much of the nation’s energy into government?

This is one of the hardest questions in the art of government. It’s largely a matter of details, and no single rigid rule can cover every case. Still, I believe the safest practical guiding principle—the ideal to keep in view, and the standard by which to test institutional designs—can be put like this:

Disperse power as widely as efficiency allows; centralize information as much as possible, and spread it outward from the center.

Thus, in city and town government, you’d want something like the New England model: public work split into many small, clearly defined jobs, handled by separate local officers chosen by the people—at least for everything that isn’t better left to the individuals directly involved. But alongside that local machinery, each area of local administration would also have a layer of central oversight—a branch of the national government assigned to keep an eye on that whole category of work.

The point of that central office wouldn’t be to run local life from the capital. Its real value would be as a clearinghouse for knowledge.

It would pull together, in one place, what thousands of local officials learn by doing:

  • practical experience from every district in the country
  • relevant examples from other nations
  • broader guidance from political science and public administration

With that vantage point, the central office should have the right to know what's being done everywhere, and its special responsibility would be to make what works in one place available to everyone else. Because it isn't trapped inside any one town's habits or blind spots, its recommendations would naturally carry weight. Still, as a permanent institution, its actual power should be kept narrow, extending no further than requiring local officers to follow the laws and general rules already laid down for them.

Everything not covered by those general rules should remain in local hands. Local officials should use their own judgment, and they should answer to the people who elect them. If they break the rules, they should answer to the law. And those rules shouldn’t be invented by administrators; they should be set by the legislature. The central authority’s job is to watch how the rules are carried out—and when they aren’t, to respond in ways that fit the situation:

  • if it’s a legal violation, appeal to the courts to enforce the law
  • if it’s incompetence or bad faith, appeal to the voters to remove the officials who ignored the law’s intent

That, in broad outline, is the kind of supervision the Poor Law Board was meant to exercise over the administration of poor relief across the country. Where the Board has gone beyond that limited role, its extra powers were still justified in that particular case, because they were needed to cure entrenched mismanagement in an area that affects far more than one parish. A district doesn't have the moral right to manage its poor relief so badly that it becomes a factory for pauperism, spilling misery outward into neighboring districts and damaging the moral and physical condition of the working population as a whole.

So yes: the Poor Law Board's powers of administrative compulsion and subordinate rule-making (powers it rarely exercises, given the state of public opinion) can make sense when the stakes are national. But those same coercive tools would be completely inappropriate for matters that are purely local.

Even so, a central office that specializes in information and instruction—helping localities learn from one another—would be useful in every branch of administration. A government can’t have too much of the kind of action that doesn’t block people, but instead supports and energizes them.

The trouble starts at the exact moment the government crosses that line. The mischief begins when, instead of drawing out the energy and competence of individuals and local bodies, it replaces them with its own; when, instead of informing, advising, and, where necessary, calling out failures, it puts people in chains, or tells them to step aside while it does their work for them.

Because in the long run, the value of a state is the value of the people who make it up. Any state that trades away the mental growth and elevation of its citizens for a little more administrative neatness—or for the appearance of competence that comes from routine—pays a hidden price. A state that shrinks its people so they’ll be more obedient tools, even for supposedly good ends, eventually discovers the obvious: with small people, nothing truly great gets done. And the perfect bureaucratic “machine” it built by sacrificing everything else will finally prove useless—because it has driven away the living force the machine needed in order to matter at all.

Utilitarianism

I — General Remarks

For all that we pride ourselves on living in an age of information, it's hard to miss one awkward, telling fact: on one of the most important questions humans can ask (what makes something right or wrong?), we've made surprisingly little progress toward agreement.

From the earliest days of philosophy, thinkers treated this as the central puzzle: the summum bonum, the “highest good,” or, put more plainly, the foundation of morality. The brightest minds argued over it, built whole schools of thought around it, and fought intellectual wars that could get as fierce as any political debate. And after more than two thousand years, we’re still watching the same banners wave. Philosophers are still divided into familiar camps, and neither specialists nor the wider public seem much closer to a shared answer than they were when a young Socrates sparred with Protagoras and—if Plato captured the scene accurately—defended something like utilitarianism against the everyday morality of Athens.

Now, confusion at the level of “first principles” isn’t unique to ethics. Plenty of sciences argue about their starting points, even mathematics, which many people treat as the gold standard of certainty. Yet those disagreements rarely shake our confidence in the results of those sciences. That might seem strange until you notice how science actually works: the detailed, reliable parts of a science usually don’t depend on tidy, universally agreed “first principles.”

In fact, what textbooks call “first principles” are often not the real source of a science’s certainty at all. Take algebra. If the whole subject depended on the usual “elements” taught to beginners, algebra would be one of the shakiest human inventions—because those elements, in the hands of even very respected teachers, can be stuffed with convenient fictions and puzzling assumptions. The deeper truth is this:

  • What we end up calling a science’s “first principles” are often the final results of philosophical analysis.
  • They come from reflecting on the basic ideas the science uses every day.
  • Their relationship to the science is less like foundations under a building and more like roots under a tree: the tree can flourish even if you never dig down and expose the roots to sunlight.

So in science, we often get dependable particular truths first, and only later try to systematize them into general theory.

But ethics feels like it should work the other way around. Morality—and its close cousin, legislation—is a practical art. And practical arts are about action. Action aims at some end. So it seems natural to expect that our rules for action should take their shape from the end they serve. When you start a project, you’d think the first thing you need is a clear idea of what you’re trying to achieve—not something you only discover at the end.

That’s why it’s so tempting to say: “Surely we must have a test of right and wrong before we can confidently label actions as right or wrong.” A test is supposed to help you decide; it isn’t supposed to be the prize you receive after you’ve already decided.

Some people try to escape this difficulty by appealing to a built-in moral sensor: a moral sense or moral instinct that simply tells us what’s right and wrong. But that move doesn’t solve much.

First, whether such an instinct even exists is itself part of the controversy. Second—and more importantly—anyone with serious philosophical ambitions who believes in a moral faculty has usually had to concede that it doesn’t work like eyesight or hearing. Our eyes tell us what color is in front of us. Our ears tell us what sound is happening right now. But our moral faculty, as thoughtful defenders describe it, doesn’t deliver instant verdicts on particular cases.

Instead, it gives us general principles—the kind of abstract rules we can apply to cases. In other words, it belongs more to reason than to raw sensation. It helps us with moral theory, not with snap perception.

That point is actually shared by two major traditions in ethics, even though they disagree about where moral principles come from:

  • The intuitive school says moral principles are knowable a priori: once you understand the terms, the principles are self-evident.
  • The inductive school says moral truths—like truths about the world—rest on observation and experience.

But despite their fight over evidence, both sides largely agree on the structure of moral judgment:

  • You don’t “see” the morality of a single act the way you see a color.
  • You decide by applying a general law to a particular case.
  • Morality, therefore, is something to be derived from principles.

And both sides often speak as if there is a genuine “science of morals.” Yet, oddly, even the a priori moralists seldom do the one thing their position seems to demand: produce a clear list of first principles, and then reduce them—if possible—to a single, ultimate principle. More often, they either:

  • Treat familiar everyday moral rules as automatically authoritative, as if they were self-evident truths, or
  • Offer some very broad general statement as the “common foundation”—a statement that feels less compelling than the everyday rules it’s supposed to support, and that usually fails to win wide acceptance.

But if someone claims to be building morality on first principles, they owe us one of two things: either one basic principle at the root of all morality, or—if there are several—some clear ranking and decision rule for what to do when principles clash. And whatever plays that ultimate role should be self-evident, because that’s the job description.

Spelling out exactly how much damage this lack of an agreed standard has done in real life would require a full tour through the history and critique of ethical thought—more than we can do here. Still, one point is easy to see: whatever steadiness people’s moral beliefs have achieved has often come from the quiet pull of a standard they never explicitly named.

Even when ethics has functioned less as a guide and more as a ceremonial blessing for whatever people already feel, those feelings are heavily shaped by a practical question: how do things affect our happiness? Because of that, the principle of utility—what Bentham later called the greatest-happiness principle—has influenced moral thinking far beyond the circle of people willing to treat it as the foundation of morality. Many philosophers who dismiss utilitarianism in public still rely on it in practice, at least in the details. There is hardly any serious school of thought that denies that the effects of actions on human happiness matter greatly—and often decisively—when we reason about concrete moral questions, even if that school refuses to call happiness the ultimate source of moral obligation.

You could go further: any moral philosopher who claims to reason from a priori principles, yet still bothers to argue (rather than merely declare), will find utilitarian-style arguments hard to avoid.

A vivid example comes from Kant’s Metaphysics of Morals. Kant is a towering figure, and his system will remain a landmark in philosophy. In that work he proposes a single, universal principle meant to ground moral obligation:

Act only on a rule that you could will to become a law for all rational beings.

But when he tries to derive specific duties from this principle, something goes wrong. He often can’t show—at least not convincingly—that the universal adoption of shockingly immoral rules would involve any logical contradiction or impossibility. What he does manage to show, again and again, is that the consequences of everyone adopting those rules would be dreadful—so dreadful that no one would want to live with them.

That observation brings us to the purpose of this work. Without getting dragged into a full critique of every competing theory, I want to help readers understand and appreciate the utilitarian (or happiness) theory, and to offer the kind of support it can reasonably receive.

But we should be honest about what “proof” can mean here. You can’t prove an ultimate end the way you prove a theorem. Ultimate ends aren’t the kind of thing you can demonstrate directly. If you prove something is good, you usually do it by showing it leads to something else already accepted as good. Medicine is “good” because it promotes health. But how do you prove health is good? Music is good partly because it produces pleasure. But what proof can you offer that pleasure is good?

So if someone proposes a single, sweeping formula that identifies what is good in itself—and claims that everything else is only good as a means to that end—you can accept or reject the formula, but not in the ordinary, courtroom sense of “proof.”

Still, it doesn’t follow that the choice is pure instinct or arbitrary preference. There’s a broader, more realistic sense of “proof” in philosophy: reasons and considerations that can legitimately move a rational mind toward assent or dissent. The question belongs to reason, and reason doesn’t operate here only by gut-level intuition. If someone can present considerations that appropriately determine the intellect to accept or reject a doctrine, that functions as proof in the only sense the topic allows.

Soon we’ll look at what those considerations are, how they apply, and what rational grounds can be offered for accepting or rejecting the utilitarian formula. But there’s a necessary first step: the formula has to be understood correctly. In my judgment, the biggest barrier to utilitarianism isn’t that people have weighed it carefully and found it wanting—it’s that they’ve formed a hazy or distorted idea of what it actually says. Clear away even the most obvious misunderstandings, and the whole debate becomes simpler, with many difficulties dissolving on the spot.

So before I get to the deeper philosophical case for the utilitarian standard, I’ll start with illustrations of the doctrine itself—what it is, what it isn’t, and how to defuse practical objections that either arise from, or are tightly connected to, misreadings of its meaning. Once the ground is cleared, we can then turn to the theory and see what light can be thrown on it.

II — What Utilitarianism Is

Before anything else, let’s clear up a stubborn misunderstanding.

People sometimes hear “utility” and assume utilitarians mean something narrow and dreary—“useful” as in practical, austere, and basically the opposite of pleasure. That’s an ignorant mistake, and it isn’t worth more than a quick correction. In fact, it’s almost unfair to serious critics of utilitarianism to even hint that they could believe something that confused.

What makes the confusion especially strange is that utilitarianism also gets accused of the exact opposite error: reducing everything to pleasure, and not even the refined kind—pleasure in its crudest, most “party animal” form. As one sharp observer noted, the very same people (sometimes literally the same person) will call the theory “impossibly dry” when they hear utility before pleasure, and “dangerously sensual” when they hear pleasure before utility.

Anyone who knows the tradition knows what utilitarians mean. Every major thinker in this line—from Epicurus to Bentham—used utility to mean pleasure itself, along with freedom from pain. They never set “the useful” against “the agreeable” or “the beautiful.” They’ve consistently said the useful includes those things—among others.

But the word utilitarian got picked up by people who liked its sound and didn’t bother to learn its meaning. In popular use, it often turns into a label for someone who supposedly rejects “pleasure” in certain forms—beauty, ornament, entertainment, delight—anything that looks like it might be “frivolous.” Sometimes it’s even used as praise, as if “utilitarian” meant being above the shallow pleasures of the moment. This warped meaning is basically the only one most people know, and it’s the only one many new readers are learning. You can see why the people who coined the term—then largely stopped using it—might feel pushed to bring it back, if only to rescue it from this total distortion.

So what is utilitarianism, actually?

The view called Utility, or the Greatest Happiness Principle, says:

  • An action is right to the extent that it tends to promote happiness.
  • An action is wrong to the extent that it tends to produce the opposite of happiness.

And it defines terms plainly:

  • Happiness means pleasure and the absence of pain.
  • Unhappiness means pain and the lack (or loss) of pleasure.

To fully spell out the standard, we’d eventually need to say more—especially about what counts as pleasure and pain, and how to classify different kinds. Some of that is genuinely complicated. But none of those follow-up clarifications change the core picture of life that this moral theory rests on:

  • Pleasure and freedom from pain are the only things desirable as ends (as final goals).
  • Everything else we call “desirable”—and utilitarianism has just as many “desirable things” as any other moral system—matters either because it contains pleasure in itself, or because it serves as a means to increase pleasure or prevent pain.

That theory of life provokes intense dislike in many people, including some who are admirable in their feelings and intentions. When they hear that pleasure is the highest end, they take it to mean: “So life has nothing better to aim at than pleasure?” And they respond: that’s base, that’s degrading, that’s a philosophy for pigs. Epicurus’s followers were compared to swine very early on, and modern defenders of the view still get similar “polite” comparisons from German, French, and English critics.

The Epicurean reply has always been simple: you’re the one insulting humanity, not us. Because that accusation assumes human beings can only enjoy the same pleasures as pigs. If that were true, the criticism would land—though it wouldn’t be an insult anymore, just a statement of fact. If humans really had no pleasures beyond what animals can feel, then a life-rule fit for a pig would be fit for a person, too.

But that’s not how human happiness works. The comparison feels degrading precisely because a beast’s pleasures don’t satisfy a human idea of happiness. Human beings have higher capacities than appetite—intellect, imagination, deep emotion, moral feeling—and once you become aware of those capacities, you stop calling anything “happiness” that doesn’t include their fulfillment.

To be clear: I’m not claiming Epicureans did a flawless job of building out all the consequences of the utilitarian principle. To do that well, you need to bring in elements that feel more Stoic and more Christian, too. Still, no known Epicurean picture of a good life treats the pleasures of the mind, imagination, emotion, and moral sentiment as equal to raw sensation. It gives them a far higher value as pleasures.

That said, utilitarian writers have often defended “mental pleasures are better than bodily pleasures” using a particular kind of argument: mental pleasures tend to be more lasting, safer, cheaper, and so on. Those points are real, and utilitarians have argued them convincingly. But they could also have taken a different, “higher” line without leaving utilitarianism behind: it is perfectly consistent with the principle of utility to say that some pleasures are not just more of the same thing, but better kinds of pleasure.

It would be absurd to consider quality as well as quantity when judging everything else, yet to act as if only quantity mattered when judging pleasures.

What does “higher quality pleasure” mean?

Here is the only answer that really makes sense. Take two pleasures. If nearly everyone who has genuinely experienced both—people who actually know what they’re comparing—gives a clear preference to one of them, and not because they feel morally obligated to prefer it, then that preferred pleasure is the more desirable one.

And if competent judges place one pleasure so far above the other that they would choose it even knowing it comes with more dissatisfaction—and they wouldn’t trade it away for any amount of the other pleasure that their nature can even enjoy—then we’re justified in calling the preferred pleasure superior in quality. In that case, the difference in quality matters so much that the mere question of “how much” becomes relatively minor.

This isn’t speculation. People who are equally familiar with both kinds of enjoyment overwhelmingly prefer a way of life that uses their higher faculties. Very few people would choose to become a lower animal in exchange for a guarantee of maximal “animal pleasure.” No intelligent person would choose to become a fool. No educated person would choose ignorance. No one with feeling and conscience would choose to become selfish and base—even if you convinced them that the fool or the scoundrel feels more satisfied day-to-day.

They won’t give up what they have “more of”—their fuller powers—for the complete fulfillment of the desires they share with a simpler creature. If someone imagines they would, it’s usually in moments of extreme misery, where they’d trade places with almost anyone just to escape pain, even if that “other life” looks undesirable from a calmer perspective.

A being with higher faculties needs more to be happy. Such a person can suffer more acutely and in more ways than someone with fewer capacities. Yet even with that vulnerability, they don’t truly wish to sink into what they recognize as a lower form of existence.

You can explain this resistance in different ways. You can call it pride (a word we slap onto both noble and ignoble feelings). You can connect it to the love of liberty and independence, as the Stoics did. You can tie it to the love of power or the love of excitement. But the most fitting name for it is a sense of dignity.

Every human being has some version of this sense of dignity. In some people it’s stronger, and it tends to grow with the development of higher faculties (though not in perfectly neat proportion). And in the people who feel it strongly, it becomes such a core ingredient of happiness that anything that clashes with it can only be desirable for a moment—if at all.

People often get this wrong because they confuse happiness with contentment. The person with “higher” capacities might not be more content than a simpler person. In fact, someone with low demands has the best chance of having them fully met. Meanwhile, a richly developed person will often see that the happiness available in this world is imperfect.

But that doesn’t make them envy the person who feels no such imperfection—because that person is unconscious of the loss only by being incapable of the good that’s missing. The imperfections don’t just subtract; they also signal what could have been richer.

So yes: it’s better to be a dissatisfied human than a satisfied pig; better to be a dissatisfied Socrates than a satisfied fool. If the pig or the fool disagrees, that’s because they know only their own side of the comparison. The other party knows both.

But don’t people choose lower pleasures all the time?

They do, and that’s not a refutation. People who can enjoy higher pleasures sometimes postpone them under temptation. That can happen even when they fully recognize the higher pleasures as superior. Weakness of character often leads us to choose what’s nearer, even when we know it’s less valuable. And this happens not only when the choice is “bodily vs. mental,” but even between two bodily pleasures.

People will chase indulgences that damage their health while fully understanding that health is the greater good.

Another objection goes like this: “Young people start out idealistic, excited by everything noble; then as they age they collapse into laziness and selfishness.” True enough as a common pattern. But I don’t think most of those people are calmly choosing lower pleasures over higher ones. More often, by the time they devote themselves entirely to the lower, they’ve already lost the ability to enjoy the higher.

The capacity for noble feeling is, in many people, a delicate plant. It dies easily—not only under direct hostility, but from simple neglect. For many young people, that higher capacity withers quickly if their work and their social environment don’t keep it alive and in use. People lose their higher aims the way they lose intellectual tastes: not because they argued themselves out of them, but because they didn’t have the time or opportunity to practice them. Then they fall back on lower pleasures—not because they deliberately judged them better, but because:

  • they’re the only pleasures available, or
  • they’re the only pleasures they’re still capable of enjoying.

It’s a fair question whether anyone who remains equally capable of both kinds of pleasure ever truly prefers the lower in a calm, deliberate way. Many people, across all eras, have failed while trying to combine both—but that’s a different story from choosing the lower as better.

Who gets to decide what counts as “higher”?

There’s no appeal from the verdict of the only judges who can legitimately judge: people who genuinely know both sides. If the question is which of two pleasures is more worth having, or which of two ways of living feels more satisfying—setting aside moral labels and downstream consequences—then the judgment of those experienced in both (or, if they differ, the majority of them) is the closest thing we have to a final answer.

And we have even less reason to hesitate here because there’s no other “court” we could consult—not even for quantity. How would you decide which of two pains is sharper, or which of two pleasures is more intense, except by asking the general consensus of people who have experienced both? Pains and pleasures don’t share one single, measurable substance, and pain is always unlike pleasure. The only way to judge whether a pleasure is “worth” a pain is the felt assessment of those who can compare. So if those assessments say that the pleasures of the higher faculties are preferable in kind—even apart from intensity—then that judgment deserves real weight.

I’ve emphasized this because you can’t have a fair understanding of utility or happiness as a guide for human conduct without it.

But notice something crucial: accepting utilitarianism does not require believing that each person’s highest happiness always comes from the highest pleasures. The utilitarian standard isn’t “my greatest happiness.” It’s the greatest total happiness overall. Even if someone wanted to argue that a noble character isn’t always happier for itself, there’s no doubt that nobility makes other people happier—and the world gains enormously from it.

So utilitarianism reaches its goal only through the general cultivation of nobleness of character. And if someone tried to claim that this would mean each person benefits only from other people’s nobleness, while their own nobleness is pure loss to their happiness, the sheer statement refutes itself; it’s too absurd to need a detailed takedown.

The standard, stated precisely

On the Greatest Happiness Principle, the ultimate end—the thing for the sake of which everything else is desirable, whether we’re talking about our own good or everyone’s—is a life as free from pain as possible and as rich as possible in enjoyment, in both:

  • quantity (how much), and
  • quality (what kind).

And the test of quality—the rule for weighing quality against quantity—is the preference of those best equipped to compare: people with broad experience, plus the habits of self-awareness and self-observation that make their comparisons reliable.

If that is the end of human action, then it’s also the standard of morality. Morality, in this view, is the set of rules and principles for human conduct such that—if people followed them—this kind of life could be secured, as far as possible, for all humankind. And not only for humans, but, as far as the nature of things allows, for all sentient beings.

A second kind of objection: “Happiness isn’t a rational aim.”

At this point another group of critics shows up and says: happiness—any kind of happiness—can’t be the rational goal of life, because:

  1. It’s unattainable. They sneer: “What right do you have to be happy?” (Carlyle adds: “And what right did you have, not long ago, even to exist?”)
  2. We don’t need happiness. They claim the noblest people have always known they can do without it, and that no one becomes truly noble without learning renunciation—the discipline of giving up—treating it as the beginning and necessary condition of virtue.

If the first objection were true—if no human happiness were possible at all—it would strike at the root. Because then happiness couldn’t be the aim of morality, or of any rational way of living.

Even then, though, something could still be said for utilitarianism. Utility doesn’t mean only chasing happiness; it also means preventing or reducing unhappiness. If happiness were a fantasy, the second task would become even more urgent—as long as people intend to keep living, rather than escaping by the kind of collective suicide certain romantics have recommended under special conditions.

But when someone flatly insists that it’s impossible for human life to be happy, that claim—unless it’s just wordplay—is at least a major exaggeration. If by “happiness” you mean an unbroken stream of intense excitement and delight, then yes, obviously, that’s impossible.

A burst of intense joy doesn’t last. At best, it runs for a few minutes—or, with breaks, maybe a few hours or days. It’s a bright flare, not a pilot light that stays on forever. The philosophers who said that happiness is life’s goal knew this perfectly well, even if their critics liked to mock them for it.

What they meant by “happiness” was never nonstop ecstasy. They meant a life that looks more like this:

  • A small number of brief pains, not constant misery
  • Many kinds of pleasures, spread through ordinary days
  • More active pleasure than passive pleasure—doing, creating, engaging, not just being entertained
  • And, holding it all together, realistic expectations: not asking life to give what it simply can’t

People lucky enough to build a life like that have always thought it deserved the name “happy.” And plenty of people still do live that way—at least for long stretches. The real obstacles aren’t built into human nature. They’re built into our bad education and our bad social arrangements.

Some critics might say: “If you teach people that happiness is the purpose of life, won’t they demand something bigger than this modest picture?” But look around. Huge numbers of people have been content with far less.

In practice, a life that feels satisfying usually depends on just two ingredients—either one of which can sometimes carry the whole load:

  • Tranquillity (peace, steadiness, room to breathe)
  • Excitement (energy, novelty, challenge, intensity)

With a lot of tranquillity, many people can be content with very little pleasure. With a lot of excitement, many people can make peace with a fair amount of pain. And there’s nothing impossible about helping most people get a decent share of both. They aren’t enemies; they actually support each other. A long period of calm naturally sets you up to want some action, and a period of action makes rest feel genuinely sweet.

Only two extremes break this balance:

  • If someone’s lazy in a way that has hardened into a vice, they don’t want excitement even after plenty of rest.
  • If someone’s craving for excitement has become a kind of disease, the calm after stimulation feels dull and tasteless—when it should feel pleasurable, in proportion to the intensity that came before it.

When people who are doing reasonably well in the outside conditions of life still find life not worth much, the reason is usually blunt: they care about no one but themselves. If you have no public commitments and no private affections, your sources of excitement shrink fast—and whatever thrills you do have start to lose their value as you near the point where death will end every selfish project. By contrast, people who leave behind those they love—and especially those who also feel a real connection to the wider interests of humanity—can stay vividly interested in life right up to its last days, as much as they were in youth and health.

After selfishness, the next major reason life feels unsatisfying is lack of mental cultivation. By that I don’t mean you need to be a philosopher. I mean a mind that has had the doors opened: it has tasted knowledge and learned, even imperfectly, how to use its own powers. A mind like that finds almost endless interest in what’s around it:

  • the phenomena of nature
  • what human beings make in art
  • the imaginative worlds of poetry
  • the drama and pattern of history
  • the customs, motives, and behavior of people past and present
  • and the direction we might be heading in the future

Yes, it’s possible to become indifferent to all of this—without having explored even a tiny fraction of it. But that usually happens only when someone never had any real moral or human stake in these things in the first place, and treated them as nothing but fuel for idle curiosity.

And here’s the crucial point: there is no law of the universe that says only a few people can have enough education and mental culture to enjoy these sources of interest. In a civilized country, that level of cultivation could be the inheritance of everyone. Just as there’s no built-in necessity that anyone must be a selfish egoist—someone with no feeling or concern beyond their own cramped individuality.

We already see plenty of people who are better than that, which is strong evidence for what humanity could become. Genuine private love and a sincere concern for the public good are possible—unequally, of course, but genuinely possible—for every human being who is raised well.

And in a world where there is so much to be fascinated by, so much to enjoy, and so much to fix, anyone with even this moderate moral and intellectual equipment is capable of a life that can honestly be called enviable. Unless such a person is blocked by bad laws or by being forced under someone else’s will—unless they’re denied the freedom to use the sources of happiness within their reach—they won’t fail to find that enviable kind of life, so long as they escape the most direct and crushing evils: the big causes of physical and mental suffering, like poverty, illness, and the cruelty, unworthiness, or early loss of the people they love.

So the real pressure point isn’t “Can people be happy at all?” It’s the struggle against those calamities—calamities it’s rare luck to avoid completely, and which, in our current world, often can’t be prevented and often can’t be reduced by much. Still, no serious person can doubt that most of the great positive evils in the world are, in themselves, removable—and that if human society keeps improving, they will eventually be pushed down into much narrower limits.

Take poverty: poverty that actually means suffering could be eliminated entirely through wise social organization, joined with ordinary prudence and planning by individuals. Even disease, that stubborn enemy, could be reduced enormously through good physical and moral education and better control of harmful conditions. And scientific progress promises even more direct victories over this ugly foe. Every step forward doesn’t just reduce the chances that our own lives will be cut short; more importantly, it reduces the chances that we’ll lose the people our happiness is tied up with.

As for the sudden blows of fortune and the disappointments tied to worldly circumstances, these usually come from one of three sources:

  • gross imprudence
  • desires that have gotten out of control
  • bad or incomplete social institutions

In short: the major sources of human suffering are, to a large extent—and many of them almost entirely—conquerable through human care and effort. The removal is painfully slow. Whole generations may die while the work is still unfinished. This world may fall far short, for a long time, of what it could easily become if we had enough will and knowledge. And yet any mind that is intelligent and generous enough to take part—even in a small and uncelebrated way—can draw a deep, honorable satisfaction from the struggle itself, a satisfaction they wouldn’t trade for any “bribe” of merely selfish indulgence.

That brings us to a clearer way to judge what critics say about “learning to do without happiness.”

Of course it’s possible to live without happiness. Most people do it involuntarily—nineteen out of twenty, even in the least barbarous parts of the world. And sometimes it has to be done voluntarily: the hero or the martyr may give up happiness for something they prize more than their own comfort.

But what is that “something,” really, if not the happiness of others, or the conditions that make happiness possible? There is something noble in being able to renounce one’s own share of happiness, or even one’s chances at it. But self-sacrifice can’t be the final goal all by itself. It has to be for some end. And if someone tells us the end is not happiness but “virtue,” as though virtue were better than happiness, a simple question cuts through the fog: would the sacrifice be made if the hero or martyr didn’t believe it would spare others from similar suffering? Would it be made if they thought their renunciation would bear no fruit—except to make everyone else’s life as bleak as their own, putting others into the same condition of having renounced happiness?

Honor, then, to those who give up personal enjoyment of life when that renunciation genuinely helps increase the total happiness in the world. But if someone claims to do it for any other purpose, they deserve no more admiration than an ascetic perched on a pillar. Such a person might be an inspiring demonstration of what human beings can force themselves to do—but they are not an example of what human beings should do.

Now, it’s true that only in a very imperfectly organized world can a person best serve others’ happiness by absolutely sacrificing their own. Yet as long as the world remains that imperfect, I’ll readily admit that the willingness to make such a sacrifice can be the highest virtue we find in a human being.

And here is a paradox that is still true: in a world like ours, being consciously able to do without happiness often gives you the best chance of attaining whatever happiness is actually available. Why? Because that awareness lifts you above the random blows of life. It lets you feel, deep down, that fate and fortune can do their worst and still not truly conquer you. Once you feel that, you stop trembling constantly over what might happen. You become less anxious about life’s miseries. And like many Stoics in the worst years of the Roman Empire, you can calmly cultivate the satisfactions within your reach, without obsessing over how long they’ll last—or over the fact that, sooner or later, they must end.

Meanwhile, utilitarians should never stop insisting that the morality of self-devotion belongs to them at least as much as it belongs to Stoics or Transcendentalists. Utilitarian ethics absolutely recognizes that human beings can sacrifice their own greatest good for the good of others. What it refuses to say is that the sacrifice itself is a good. A sacrifice that does not increase—or at least tend to increase—the total sum of happiness is, on utilitarian grounds, simply wasted. The only self-denial utilitarianism applauds is devotion to other people’s happiness, or to the means of happiness—whether that means humanity as a whole, or particular individuals, within the limits set by the overall interests of humankind.

Let me repeat a point that critics rarely acknowledge with anything like fairness: the happiness that utilitarianism uses as its standard for right action is not the agent’s own happiness, but the happiness of everyone affected. When someone’s own happiness conflicts with that of others, utilitarianism demands strict impartiality—like that of a kind, disinterested observer.

In the Golden Rule—do as you would be done by, and love your neighbor as yourself—you find the whole spirit of the ethics of utility. That ideal—treating others’ good as seriously as your own—marks the perfection utilitarian morality aims at.

How do we get as close to that ideal as human life allows? Utility gives two big directions:

  • First, laws and social arrangements should make each person’s happiness—what we might practically call their interest—line up as closely as possible with the interest of everyone.
  • Second, education and public opinion, which have enormous power over character, should use that power to build a deep, unbreakable association in each person’s mind between their own happiness and the good of all—especially between their happiness and the everyday habits (both what we refrain from doing and what we actively do) that the general happiness requires.

The goal is not just that a person can’t imagine being happy while acting against the common good, but that they feel a direct, habitual impulse to promote the common good—and that the feelings tied to this impulse take up a large, vivid place in their emotional life.

If utilitarianism’s critics pictured it to themselves in this, its real form, it’s hard to see what they could claim is missing from it compared with any other moral system. What more beautiful or more elevated development of human nature is any ethical system supposed to encourage? And what “springs of action” do other systems rely on that utilitarianism can’t also draw upon to make its commands effective?

Not every critic misrepresents utilitarianism in order to smear it. Some, who actually grasp its impartial spirit, attack it from the opposite side: they say its standard is too demanding for ordinary humanity. They claim it asks too much to require people always to act from the motive of promoting society’s general good.

But this confuses two different things: the standard of morality and the motive for action. Ethics tells us what our duties are—or how to test what our duties are. It does not demand that the only motive behind everything we do must be duty. In fact, ninety-nine out of a hundred of our actions come from other motives, and that’s entirely proper, so long as the rule of duty does not forbid them.

This misunderstanding is especially unfair to utilitarianism, because utilitarian moralists have insisted more strongly than almost anyone that an action’s morality does not depend on the motive—though the motive does affect our assessment of the person. Someone who saves a drowning person does the morally right thing whether they act from duty or from the hope of a reward. Someone who betrays a trusting friend commits a wrong even if they do it to help another friend to whom they feel even more obligated.

But even if we focus only on actions done from a sense of duty, there’s another mistake people make about utilitarian thinking: they imagine it requires everyone, in every moral decision, to fix their mind on something as vast as “the world” or “society at large.” That’s not how moral life works, and it’s not what utilitarianism asks.

Most good actions are meant to benefit particular people, not “the world.” And the world’s good is nothing more than the sum of those individual goods. When you’re deciding how to help, you usually don’t need to think beyond the people directly involved—except to make sure that, in helping them, you aren’t violating anyone else’s rights, meaning their legitimate and socially recognized expectations.

On utilitarian ethics, virtue aims at increasing happiness. But the occasions when someone—maybe one person in a thousand—can increase happiness on a large scale, as a public benefactor, are exceptional. Only on those occasions does utilitarianism call on someone to weigh “public utility” in the broad sense. In the ordinary run of life, what matters is private utility: the interest or happiness of a few people.

Only those whose actions regularly have wide social consequences need to keep society at large constantly in view.

There is one important exception worth noting: cases of abstaining—cases where a person refuses to do something, on moral grounds, even though in this particular instance it might produce good results. In situations like that, it would be beneath an intelligent agent not to recognize why abstinence is required: the action belongs to a class of actions that, if practiced generally, would be generally harmful, and that general harm is what grounds the obligation to refrain. But notice: the amount of concern for the public good involved here is no greater than what every moral system demands, since all moralities tell us to avoid what is clearly destructive to society.

These same points answer another criticism of the utility doctrine—one based on an even cruder misunderstanding of what a moral standard is, and of what we mean by “right” and “wrong.” People often claim that utilitarianism makes human beings cold and unsympathetic; that it drains warmth from our feelings toward individuals; that it tells us to look only at the hard, dry calculus of consequences, ignoring the character and emotions from which actions flow.

If that complaint means utilitarians refuse to let their judgment of an action’s rightness or wrongness be swayed by their opinion of the person who performed it, then the complaint isn’t really against utilitarianism. It’s against having any moral standard at all. No recognized ethical standard calls an action good or bad simply because it was done by a good or bad person—still less because it was done by someone charming or brave or benevolent, or by someone lacking those traits.

These points matter when you’re judging a person, not when you’re judging a particular action. Utilitarianism isn’t threatened by the simple fact that we can care about more in people than whether each of their choices was “right” or “wrong.” The Stoics—famous for their dramatic, paradox-loving way of talking—liked to say that if you have virtue, you have everything: you’re the only one who’s truly rich, beautiful, and kingly. Utilitarianism doesn’t make that kind of claim. Utilitarians know perfectly well that life contains many worthwhile goods besides virtue, and they’re happy to give those goods their full value.

They also know two important, slightly uncomfortable truths:

  • A right action doesn’t automatically prove someone has a virtuous character.
  • A blameworthy action can come from traits that are, in other contexts, genuinely admirable.

When that’s clearly the case, a utilitarian doesn’t change their verdict about the act itself—they still call it right or wrong by the utilitarian standard—but they may judge the agent more gently or more harshly depending on what the action reveals.

Even so, utilitarians typically insist on one long-run rule of thumb: the best evidence of a good character is a pattern of good actions. And they refuse to label any “mental disposition” as good if its dominant tendency is to produce bad conduct. That stance makes them unpopular with plenty of people. But anyone who takes the difference between right and wrong seriously will share some of that unpopularity, and a conscientious utilitarian doesn’t need to panic about it.

If the complaint is simply this—“Some utilitarians talk as if morality is all that matters, and they don’t give enough attention to the other forms of beauty in a person that make someone lovable or admirable”—then sure, that can happen. But it isn’t unique to utilitarianism. It’s the predictable mistake of moralists who cultivate their moral feelings while neglecting their sympathies and their sense of art. And if we’re going to err, there’s a decent argument for erring on the side of taking morality too seriously rather than too lightly.

In practice, utilitarians vary as much as anyone else. You can find every temperament in the group:

  • some are puritanically strict about moral rules,
  • others are extremely forgiving—exactly the sort of people a sinner (or a sentimentalist) would hope to meet.

Still, a moral doctrine that puts front and center humanity’s shared interest in restraining and preventing behavior that violates moral rules is unlikely to be worse than any rival at turning social disapproval against wrongdoing. Yes, different moral frameworks will sometimes disagree about what “violates the moral law.” But utilitarianism didn’t invent moral disagreement. What it does offer—whether or not it’s always easy—is a clear, graspable way to decide disputes: ask what actually promotes or harms human well-being.

It’s worth clearing up a few more common misunderstandings about utilitarian ethics—even the ones so crude you’d think nobody fair-minded could fall for them. People, including very smart people, often don’t bother to understand a view they already dislike. And because they don’t notice their own willful ignorance as a flaw, the most basic misreadings of moral theories show up again and again in the confident writings of people who want to sound deeply principled and philosophical.

One frequent accusation is that utility is a “godless” doctrine. That charge depends entirely on what you think God’s moral character is like. If you believe God wants, above all, the happiness of his creatures—if that was the point of creating them—then utilitarianism isn’t godless at all. On the contrary, it could be seen as more deeply religious than any alternative, because it takes that divine aim seriously.

If, instead, the criticism means: “Utilitarianism doesn’t treat God’s revealed will as the supreme moral law,” the utilitarian can answer: anyone who believes God is perfectly good and wise must believe that whatever God reveals about morality will satisfy the demands of utility at the highest level. Beyond that, many thinkers—not just utilitarians—have held that Christianity’s central role is not to hand us a detailed rulebook for every case, but to shape the human heart and mind so people can recognize what’s right and feel moved to do it. On that view, you still need a carefully developed ethical theory to interpret what God’s will amounts to in practice. Whether that view is correct isn’t something we need to settle here. The key point is simpler: whatever help religion can offer to ethical inquiry—whether from nature or revelation—is just as available to the utilitarian as to anyone else. A utilitarian can treat religious teaching as evidence about what tends to help or harm human life, just as other moralists treat it as evidence of a “higher law” unrelated to happiness.

Another common move is to smear utility as immoral by renaming it “expediency,” then setting “expediency” against “principle,” as if they were opposites. But in everyday speech, “expedient” (when contrasted with “right”) usually means “useful to me, right now,” like a politician sacrificing the public good just to stay in office. If “expedient” is used in a nicer sense, it still often means “useful for a short-term goal” even when it breaks a rule that’s far more valuable to follow in the long run. In that sense, “the expedient” isn’t the same as “the useful.” It’s a subset of the harmful.

A clear example: it might feel expedient, when you’re in a tight spot, to tell a lie—to escape an embarrassment or to grab some immediate advantage for yourself or even for someone else. But training yourself to feel the weight of truthfulness—and not letting that feeling erode—is one of the most useful things you can do, and weakening it is one of the most harmful. Even an unintentional slip from the truth chips away at the credibility of human testimony. And that credibility isn’t a minor social nicety; it’s a main support of social life as it currently exists. When trust in people’s word is low, civilization stalls: cooperation gets harder, institutions rot, virtue becomes costlier, and large-scale human happiness becomes harder to achieve. So someone who breaks the truth for a convenient short-term gain isn’t being “expedient” in any sensible way—he’s acting like an enemy of the very conditions that make human life go well.

At the same time, every moralist admits that even this sacred rule has possible exceptions. The most obvious cases are ones where withholding a fact—say, refusing to give information to a criminal, or keeping devastating news from someone dangerously ill—would protect someone (especially someone other than yourself) from serious, undeserved harm, and where keeping the fact back can only be done by outright denial. Still, if we’re going to recognize exceptions, we should recognize them explicitly and, if possible, set boundaries around them—so the exception doesn’t spread and so the damage to trust is minimized. And if the principle of utility is good for anything, it’s good for doing exactly this kind of work: weighing one set of consequences against another and drawing the line where one side clearly outweighs the other.

A different objection says: “There isn’t time, before acting, to calculate how each choice will affect general happiness.” That’s like claiming we can’t guide our lives by Christianity because we don’t have time, before every decision, to read the entire Bible. The answer is obvious: we don’t start from scratch each time. Humanity has had plenty of time—its entire history—to learn, through experience, what kinds of actions tend to help or harm human life. People talk as if that learning process never began, and as if, the moment someone feels tempted to steal or kill, he must sit down and figure out for the first time whether theft and murder are bad for human happiness. Even then, the question wouldn’t be that puzzling. But in reality it’s already been worked out and handed down.

It’s almost comical to imagine that if everyone agreed utility is the test of morality, they’d remain unable to agree on what’s useful—or that they’d make no effort to teach those conclusions to children and reinforce them through law and public opinion. You can make any ethical standard look useless if you pair it with universal stupidity. But short of that, people will have developed settled beliefs about what actions do to human happiness. Those inherited beliefs become the practical moral rules for most people—and they remain the working rules even for philosophers, at least until philosophers manage to improve on them.

And yes: philosophers can improve on many of those rules even today. The inherited moral code isn’t handed down by divine right. Humanity still has a lot to learn about the effects of actions on overall well-being. The “corollaries” of the principle of utility—like the practical rules in any art—can be refined without limit. In a progressing society, that refinement never stops.

But two ideas need to be kept separate:

  • Improving moral rules over time is one thing.
  • Ignoring general rules entirely and trying to test every single action directly against the ultimate principle is another.

It’s a mistake to think that accepting a first principle means you can’t also rely on secondary principles. Telling a traveler the final destination doesn’t ban the use of road signs along the way. Saying “happiness is the aim of morality” doesn’t mean there shouldn’t be well-marked routes to that aim, or good advice about which route to take.

People should stop talking nonsense here—nonsense they would never tolerate in any other practical field. Nobody says navigation can’t be based on astronomy because sailors can’t pause mid-storm to do the full math from scratch. Rational sailors go to sea with the necessary calculations already done. Likewise, rational people go out into life with their minds already made up about the everyday questions of right and wrong—and about many questions of prudence that are even harder. As long as humans can foresee consequences at all, we’ll keep doing that. Whatever your ultimate moral foundation is, you need subordinate principles to apply it. Every ethical system faces that reality, so it’s no argument against utilitarianism in particular. And to argue seriously as if no secondary principles could exist—as if humanity had never learned general lessons from lived experience—is about as absurd as philosophical debate ever gets.

Most of the remaining stock arguments against utilitarianism are really complaints about ordinary human weakness and the difficulty of living conscientiously. People say, for example, that a utilitarian will be tempted to treat his own case as special, carving out exceptions for himself, and will “discover” that breaking a rule is more useful than obeying it whenever he wants to do wrong. But is utilitarianism the only view that supplies excuses for bad behavior, or clever ways to silence a guilty conscience? Of course not. Any moral doctrine that recognizes what every sane moral doctrine recognizes—that life involves conflicting considerations—can be twisted into self-serving rationalizations.

The real problem isn’t “utility.” It’s the complicated shape of human affairs. Rules can’t be written so perfectly that they never need exceptions. And almost no kind of action can safely be labeled as always required or always forbidden. Every ethical tradition softens the rigid edge of its laws by allowing some room—under the agent’s responsibility—to adapt to circumstances. And in that opening, self-deception and dishonest hair-splitting always try to sneak in.

Every moral system also encounters clear cases where duties collide—cases of genuine conflicting obligation. Those are the hard knots, both for ethical theory and for real-life conscience. People handle them better or worse depending on their intelligence and character. But it’s hard to argue that someone is worse equipped to handle them because they have an ultimate standard—a final reference point for comparing clashing rights and duties.

If utility is the ultimate source of moral obligation, then utility can be used to decide between obligations when they can’t all be satisfied. Applying the standard may be difficult, but it’s better than having no standard at all. In systems where moral rules claim independent authority with no shared court of appeal, there is no recognized umpire to settle disputes among them. Claims about which rule should come first end up resting on something close to wordplay—and unless the decision is quietly driven (as it usually is) by considerations of utility, the field is left wide open for personal desires and biases to take over.

And remember: you only need to appeal to first principles in these conflict cases. In ordinary moral life, some secondary principle is always in play. And if only one is in play, there’s usually little real doubt about which one it is—at least for anyone who genuinely accepts the principle in the first place.

One critic, a Mr. Davies, pressed a pair of examples that seem to challenge the earlier claim that an action’s morality doesn’t depend on its motive. Suppose a tyrant’s enemy dives into the ocean to get away, and the tyrant hauls him back to the surface—not out of mercy, but so he can drag the man home and torture him more elaborately. Does it make anything clearer to call that “a morally right action” just because, in isolation, it looks like a rescue?

Or take a classic ethics thought experiment: someone betrays a friend’s trust because keeping the promise would seriously harm that friend (or the friend’s family). Would utilitarianism force you to label the betrayal “a crime” in exactly the same way you would if the person did it for a petty, selfish reason?

Here’s the utilitarian answer: the tyrant who “saves” a drowning person in order to torture him later isn’t doing the same act as someone who saves a drowning person out of duty or kindness. It’s not merely the motive that changes. The act itself changes, because the rescue is just the opening move in a plan that’s worse than letting the person drown.

In that scenario, pulling the man out of the water isn’t the whole action—it’s the necessary first step of a much more horrifying action. And because of that, its moral status shifts with it.

If Mr. Davies had said that whether saving a person from drowning is right or wrong “depends greatly,” not on the rescuer’s motive, but on the rescuer’s intention, then no utilitarian would disagree. The problem is that he slips into a mistake that’s extremely common (and easy to forgive): he treats motive and intention as if they were the same thing.

Utilitarian writers—Bentham above all—have worked hard to keep these ideas separate:

  • Intention: what the person is trying to do—the outcome they are aiming at, the plan they are carrying out.
  • Motive: the feeling or inner push that makes them choose that plan—compassion, greed, fear, loyalty, cruelty.

For utilitarianism, the morality of an action depends entirely on the intention—on what the agent is choosing and aiming to bring about.

The motive, by contrast, doesn’t change the moral character of the act when it doesn’t change what the act is in practice. But motive still matters a lot in a different way: it strongly affects how we judge the person. It tells us something about their likely future behavior—whether they have a character that tends to produce helpful actions or harmful ones.

III — Of The Ultimate Sanction Of The Principle Of Utility

People often ask a fair question about any moral theory: what makes it binding? In other words:

  • Why should I obey this moral standard?
  • What motivates me to follow it?
  • Where does its “ought” actually come from?

That question is a normal part of moral philosophy. It’s also often aimed at utilitarianism as if utilitarians, uniquely, can’t answer it. But the problem isn’t special to utilitarianism. It comes up any time you ask someone to ground morality in something other than the moral code they grew up with.

Here’s what’s happening psychologically. The morality you absorbed through education and social opinion usually feels self-evidently obligatory. It arrives in your mind wearing a kind of halo: “Of course I must not lie. Of course I must not steal.” But when someone tells you that this familiar morality ultimately rests on a more general principle—especially one that doesn’t already feel sacred—the person often reacts as if you’ve pulled the rug out from under morality. The conclusions (“don’t murder,” “don’t betray,” “don’t deceive”) feel more solid than the alleged foundation. It can even seem as if the structure stands better without its base.

So the person thinks: “I feel bound not to rob or kill or cheat. But why am I bound to promote the general happiness? And if my own happiness points somewhere else, why shouldn’t I choose that instead?”

Why This Doubt Shows Up

If utilitarians are right about what a moral sense is—something shaped by education, habit, and our social life—then this doubt will keep surfacing until moral development catches up. The principle needs to sink in emotionally the way some of its everyday consequences already have. In a well-raised young person, “crime is horrible” can feel automatic. Utilitarians think the same kind of deep-rooted feeling could attach to a broader idea: a genuine sense of unity with other people—the sense that “their good is part of what matters to me.” (And yes, this is exactly the sort of moral attitude Christianity was meant to cultivate.)

Until education and culture have done that work, the question “why should I care about everyone’s happiness?” will feel like a live objection. But it’s not a uniquely utilitarian headache. It’s built into every attempt to explain morality by tracing it back to first principles. Unless the basic principle already carries the same moral weight in someone’s mind as its familiar applications, analysis can look like it’s stripping away sanctity rather than clarifying it.

Two Kinds of Moral “Sanctions”

Utilitarianism can draw on the same kinds of pressure that support any moral system. These pressures—these sanctions—come in two forms:

  • External sanctions: incentives and penalties coming from outside you
  • Internal sanctions: the feelings inside you that make wrongdoing painful

External Sanctions

External sanctions are straightforward. They include:

  • wanting approval and fearing disapproval from other people
  • hoping for reward and fearing punishment from God (for those who believe)
  • affection and sympathy for others, which can move us even when self-interest isn’t in play
  • love, reverence, or awe toward God, which can motivate obedience beyond personal gain

There’s no reason these motivations can’t support utilitarian morality as strongly as any other morality.

In fact, the motives tied to other people are practically guaranteed to line up behind it as society becomes more intelligent and reflective. Whether or not “general happiness” is the true foundation of morality, people undeniably want happiness. And even if they don’t live up to it, they tend to admire behavior in others that seems to promote their happiness and resent behavior that undermines it.

As for religious motivation: if someone believes God is good, then anyone who thinks “good” means “promotes general happiness” (or even that promoting general happiness is the test of goodness) will naturally conclude that God approves of what increases overall happiness. So the whole machinery of external rewards and punishments—social and religious, physical and moral—can be harnessed to support utilitarianism as soon as the morality is recognized and taught. And the more education and culture lean into it, the more force these external sanctions have.

The Internal Sanction: Conscience

External pressures aren’t the whole story. The deeper question is about what happens inside a person.

Whatever moral standard you accept, the internal sanction of duty is basically the same: it’s a feeling in your own mind. When you violate what you take to be your duty, you feel a kind of pain—sometimes mild discomfort, sometimes a severe inner recoil. In a well-developed moral character, serious wrongdoing can feel not just unpleasant, but almost unthinkable.

When that feeling is genuinely moral—when it’s attached to the idea of duty rather than to some specific custom or superstition—this is what we mean by conscience.

In real life, though, conscience is rarely simple. The core feeling is usually layered over with all kinds of associations:

  • sympathy and love
  • fear (often the strongest ingredient)
  • religious emotions of many kinds
  • childhood memories and habits
  • pride and self-respect
  • craving other people’s approval
  • and sometimes even self-loathing

This messy mix helps explain why “moral obligation” can seem mysterious—almost magical. People notice how powerful it feels and then conclude it must be attached only to whatever objects, by some special law, happen to trigger it in their current experience. They start to think: “Conscience could never bind me to anything else.”

But the real binding force isn’t magic. It’s simpler: there’s a mass of feeling you’d have to push through in order to do what you believe is wrong. And if you push through anyway, you’ll likely meet that feeling again afterward as remorse. No matter what theory you have about where conscience comes from, that’s what conscience is in practice.

So What’s Utilitarianism’s Sanction?

Once you see that the ultimate internal sanction of morality is a feeling in our own minds, the utilitarian can answer the question “what binds us to utility?” very plainly:

The same thing that binds us to any moral standard: the conscientious feelings of human beings.

Of course, this has no grip on someone who lacks those feelings. But that’s not a special problem for utilitarianism. A person with no internal moral sensibility won’t obey any moral principle because it’s moral; you’ll reach them only through external sanctions—reputation, law, social pressure, reward, punishment.

Meanwhile, those moral feelings are undeniably real, and experience shows how strongly they can guide conduct when they’re carefully cultivated. And there’s no good reason to think they can’t be cultivated just as powerfully around the utilitarian standard as around any other moral rule.

“But Isn’t Duty an Objective Fact?”

Some people assume that if moral obligation is a kind of objective reality—something “out there,” independent of human minds—then it must command stronger obedience than if it’s seen as something rooted in consciousness.

But whatever someone believes about the metaphysics, what actually moves them is still a subjective feeling, and the force of that motive is measured by how strong the feeling is.

Think about belief in God. Many people regard God as an objective reality, but that belief—apart from expectations of reward and punishment—affects behavior only through the person’s religious feelings, and only to the extent those feelings are vivid and active. Disinterested moral motivation always lives in the mind itself.

So the “transcendental” moralist is really committed to a particular psychological hope: that the internal sanction won’t exist unless the person believes it has roots outside the mind. They worry that if someone can say, “My conscience is just a feeling,” then they might add, “If the feeling fades, the obligation disappears—and if it’s inconvenient, I can ignore it and try to snuff it out.”

But why would that danger be unique to utilitarianism? Does believing duty is external automatically make conscience impossible to silence? If anything, the opposite is obvious: moralists of every school admit—and regret—how easily conscience can be muffled in most people. Plenty of people who’ve never heard of utilitarianism still ask themselves, “Do I really have to obey my conscience?” When people with weak consciences answer “yes,” it’s rarely because they’re convinced by transcendental theory. It’s usually because they anticipate external consequences.

Is the Feeling of Duty Innate?

We don’t have to settle, here and now, whether the feeling of duty is inborn or implanted. But notice what follows either way.

If it’s innate, the interesting question becomes: what does it naturally attach to? Even philosophers who defend “intuition” have largely moved to this view: we don’t have intuitive access to countless detailed moral rules; at most, we have an intuitive sense of broad principles.

And if anything in morality is intuitively obligatory, it’s hard to see a better candidate than this: care about the pleasures and pains of others. If that’s the innate core, then intuitive ethics and utilitarian ethics converge, and the fight largely evaporates. Even as things stand, intuitive moralists already treat concern for others’ interests as central—most of everyday morality, they agree, turns on what we owe one another. So if belief in a “transcendental” source of duty adds any extra push to the internal sanction, utilitarianism already gets to benefit from that push.

If, on the other hand, moral feelings are acquired (as I believe), that doesn’t make them unnatural. Humans naturally learn to speak, reason, build cities, and farm—even though none of that arrives fully formed at birth. Moral feelings work similarly. They aren’t present in anything like equal strength in everyone, but even people who insist morality is transcendental have to admit that unpleasant fact.

An acquired moral faculty can still be a natural outgrowth of human nature—capable of emerging spontaneously in small ways, and capable of being developed to a high degree through cultivation. The darker side is that the same tools—external sanctions and early impressions—can train conscience in almost any direction. With enough social pressure and childhood conditioning, people can come to feel “conscience” about things that are ridiculous or harmful. That’s not speculation; history is full of it.

So it would be absurd to deny—against all experience—that education and social influence could also give powerful conscientious force to the principle of utility, even if it didn’t already harmonize with human nature.

Why Utilitarian Morality Doesn’t “Analyze Away”

There is, however, one legitimate concern. When moral obligations are purely artificial—when they’re just stitched together by habit and fear—intellectual progress tends to dissolve them. People start analyzing: “Why do I believe this?” And the association weakens.

So if “duty = promote general happiness” were just an arbitrary mental link, and if there were no strong natural sentiments that resonated with it—no deep part of us that found it fitting and congenial—then even a duty trained into people might eventually get “analyzed away.”

But utilitarianism has a powerful natural anchor: our social feelings.

Human beings have a real desire to be at one with other people. It’s already strong, and it tends to grow stronger as civilization advances—even without anyone explicitly preaching it. Living socially isn’t just convenient; for humans, it’s normal, necessary, and constant. Except in rare circumstances—or by a deliberate act of abstraction—people hardly ever picture themselves as anything other than members of a larger whole. And the more humanity moves away from isolated “savage” independence, the tighter this sense of belonging becomes.

That matters because a functioning society (outside of master-and-slave relations) is impossible unless everyone’s interests count. A community of equals can survive only on the understanding that everyone’s interests deserve equal regard. And in any civilized society, almost everyone has equals: unless you’re an absolute monarch, you must live on equal terms with someone. Over time, societies keep moving toward conditions where it becomes harder and harder to live permanently on any other basis with anyone.

This is how people slowly become unable to imagine a life of total disregard for others. At minimum, they internalize the need to avoid obvious harms—and, often for their own protection, they live in steady protest against such harms. They also grow up familiar with cooperation: working with others, temporarily adopting a shared aim, and pursuing a collective interest rather than a purely personal one.

And during cooperation, something important happens psychologically: people’s ends become linked. Even if only for a while, they feel as if other people’s interests are their interests.

As social ties strengthen and society grows in healthy ways, each individual gains:

  • a stronger personal stake in considering other people’s welfare
  • a deeper emotional identification with other people’s good (or at least a stronger habit of practically respecting it)

Over time, a person comes to see themselves—almost instinctively—as someone who naturally takes others into account. Other people’s good starts to feel like a basic condition of life, something you simply have to attend to, like food, shelter, and safety.

Whatever measure of this feeling someone has, they’re strongly motivated—by both interest and sympathy—to express it and encourage it in others. And even someone who lacks it has every reason to want other people to have it, because it makes social life safer and better for everyone. So even small beginnings of fellow-feeling get seized upon and nourished by sympathy and by education. Then external sanctions—praise, blame, approval, disapproval—wrap a tight net of reinforcement around it.

As civilization advances, this way of understanding ourselves and human life feels more and more natural. Political improvement makes it even easier by reducing conflicts of interest and by flattening unfair legal privileges—those inequalities that still make it “practical” for some people to ignore the happiness of whole groups.

In a steadily improving moral culture, influences continually grow that push each person toward a sense of unity with everyone else. If that sense of unity were complete, a person wouldn’t even think of seeking any benefit for themselves unless others were included in it too.

Now imagine we taught this unity the way societies have often taught religion: with education, institutions, and public opinion all aligned, so that from infancy a person is surrounded by the profession and the practice of it on every side. Anyone who can genuinely picture that world won’t worry about whether the Happiness morality has a strong enough ultimate sanction.

Auguste Comte’s two major works lay out a full-blown “religion of humanity” without relying on God or Providence. I disagree strongly with his politics and moral program, but he still proves something important: you can build a system that has the psychological pull and social impact of a religion—one that grabs hold of everyday life and colors how people think, feel, and act. In fact, the real risk isn’t that such a secular “religion” would be too weak. It’s that it could be too strong—so strong that it starts squeezing out freedom and individuality.

The same basic point matters for utilitarian morality. If you accept the greatest-happiness principle, you don’t have to sit around waiting for society as a whole to evolve to the point where everyone feels its force. We aren’t that far along yet as a species. Most of us still can’t feel such complete, all-embracing sympathy with everyone else that serious conflict in how we live becomes impossible.

But we are far enough along that anyone with a reasonably developed social nature can’t honestly see other people as mere competitors—rivals fighting them for a limited supply of happiness, people who must fail so that they can win. Even now, most people have a deeply rooted sense of themselves as social beings. And that self-understanding pushes them toward a simple need: they want some harmony between what they feel and aim for, and what other people feel and aim for.

Of course, differences in opinions and education can make it impossible to share many of others’ actual feelings. Sometimes those differences even lead a person to condemn and oppose what others seem to want. Still, there’s a deeper requirement they often can’t shake: they need to know that, underneath the surface disagreements, their real purpose and other people’s real good don’t collide. They need to be able to tell themselves:

  • “I’m not fighting against what others ultimately want—their well-being.”
  • “Even if I disagree with them, I’m not treating their happiness as something I have to defeat.”
  • “At bottom, I’m trying to promote their good, not sabotage it.”

For many people, this social feeling is weaker than selfish impulses, and in some it barely exists at all. But for those who do have it, it feels like something natural and essential—not like an artificial product of schooling, and not like a rule society bullies them into obeying. It shows up instead as a human capacity they’d be worse off without.

That conviction—the felt need to be in genuine alignment with other people’s good—is the ultimate sanction of the greatest-happiness morality. It’s what makes a well-formed mind cooperate with the external pressures that encourage concern for others (the “external sanctions”). And when those external pressures are missing—or even push the other way—this inner social conscience can still bind a person powerfully, depending on how sensitive and thoughtful their character is. After all, almost no one except the morally empty could stand to plan their whole life around a single rule: “Ignore everyone else unless my private advantage forces me to pay attention.”

IV — Of What Sort Of Proof The Principle Of Utility Is Susceptible

You can’t “prove” an ultimate goal the way you prove a math theorem. That’s true of every first principle—the starting points of knowledge and the starting points of morality. Still, for facts about the world, we can point to the faculties that judge facts: our senses and our inward awareness. So here’s the natural question: when we argue about practical ends—about what we should aim for—what kind of evidence can we appeal to?

Talk about “ends,” and you’re really talking about what’s desirable—what’s worth wanting. Utilitarianism says:

  • Happiness is desirable, and it’s the only thing desirable as an end.
  • Everything else is desirable only as a means to happiness.

So what would it even mean to “back up” that claim?

Think about how we treat other basic claims. The only proof that something is visible is that people can actually see it. The only proof that a sound is audible is that people can hear it. In the same spirit, the only evidence we can offer that something is desirable is that people, in fact, desire it.

If the utilitarian end—general happiness—weren’t recognized in real human life as something people aim at, no argument could make someone treat it as an end. And the only reason we can give for why general happiness is desirable is simple: each person—so far as they think it’s possible—desires their own happiness.

That’s a fact about human psychology. And if it’s a fact, then we have all the “proof” this kind of claim allows. From it we can say:

  • Each person’s happiness is a good for that person.
  • Therefore, the happiness of everyone taken together—the general happiness—is a good for the community as a whole.

So happiness earns its place as one of the proper ends of human action, and therefore as one of the standards we use to judge morality.

But that alone doesn’t show happiness is the only standard. To prove that, you’d have to show not just that people desire happiness, but that they never desire anything else.

And plainly, they do desire other things—things we normally talk about as distinct from happiness. People desire virtue and the absence of vice, just as genuinely as they desire pleasure and the absence of pain. The desire for virtue isn’t as universal as the desire for happiness, but it’s real. That’s why critics conclude that humans pursue ends other than happiness—and so happiness can’t be the sole moral yardstick.

But utilitarianism doesn’t deny that people desire virtue, or claim virtue isn’t worth wanting. It says the opposite: virtue should be desired, and it can be desired disinterestedly—for its own sake.

Utilitarians may argue about what makes something virtuous in the first place. They do typically hold that actions and character traits count as virtuous because they promote something beyond virtue itself. But once we’ve identified what virtue is, utilitarians say two things at once:

  • Virtue is one of the most valuable means to the ultimate end (happiness).
  • And as a matter of psychological fact, virtue can also become, for an individual, a good in itself—something they love and seek without thinking about any further payoff.

In fact, utilitarians claim your mind isn’t in its best condition—most aligned with utility and most supportive of general happiness—unless you can love virtue in exactly that way: as something desirable in itself, even in cases where being virtuous doesn’t lead to the other good consequences it usually produces.

That isn’t a retreat from the happiness principle. It’s what the happiness principle actually implies once you look closely. Happiness isn’t a single, uniform ingredient. It’s a whole made up of many parts, and many of those parts are desirable in themselves, not merely as contributions to some abstract “total.”

Take music. The utilitarian view isn’t that music is valuable only as a tool for producing “happiness” in the abstract. Music is desirable directly. So is health, in the sense of being free from pain and limitation. These aren’t just routes to happiness; they’re pieces of what we mean by a happy life. And virtue can become that kind of piece as well. It isn’t originally part of happiness, but it can grow into it. For people who love virtue disinterestedly, virtue is desired not as a strategy for happiness but as part of their happiness.

To see how this works, notice that virtue isn’t the only thing that starts as a means and then turns into an end.

Consider money. There’s nothing inherently magical about money any more than there is about a pile of shiny stones. Money matters because of what it can buy—because it helps satisfy other desires. And yet the love of money is one of the strongest forces in human life. Often, people want money for its own sake; they crave having it more than using it. Sometimes that craving even grows when the other desires that money used to serve begin to fade.

In that situation, it’s accurate to say money is desired not merely for an end, but as part of the end. Something that began as a means to happiness becomes a major ingredient in the person’s very idea of happiness.

The same pattern shows up with many of the big goals people chase—power, for example, or fame. Unlike money, these usually come with some immediate pleasures that seem built in. Still, their strongest pull often comes from how much they help us get what else we want. Over time, that repeated usefulness for our purposes creates a powerful association, and the direct desire for power or fame can become so intense that it overtakes almost everything else.

When that happens, the “means” has become part of the “end”—sometimes a more important part than the original things it was supposed to help obtain. But even then, what’s going on is not a shift away from happiness. The person believes that merely having power, fame, or money will make them happy, and failing to get it makes them unhappy. The desire for these things isn’t a different kind of desire than the desire for happiness. It’s like loving music or wanting health: they are included within what we call happiness. Happiness isn’t an abstract idea floating above life; it’s a concrete whole, made of parts like these.

And life would be bleak if it weren’t possible for our minds to work this way—if nothing could become precious except the raw, immediate pleasures we started with. One of the gifts (and dangers) of human nature is that things that were originally neutral can, through association with our basic desires, become new sources of pleasure—often more lasting, more wide-ranging across a lifetime, and sometimes even more intense than the original pleasures.

Virtue fits this pattern. At first, there’s no motive for virtue except that it tends to bring pleasure or protect us from pain. But once that association forms, virtue can be felt as a good in itself and desired with as much strength as anything else.

And here’s a crucial difference. Money, power, and fame can easily make a person harmful to the people around them. But cultivating a disinterested love of virtue makes someone a benefit—often a blessing—to their community. So utilitarianism may tolerate and even approve other acquired desires up to the point where they start doing more harm than good. But it requires the cultivation of the love of virtue as strongly as possible, because nothing is more important to general happiness.

Put all of this together, and the conclusion is: in reality, nothing is desired except happiness.

That doesn’t mean we only desire “pleasure” in some narrow sense. It means that whatever we desire—when we desire it not merely as a tool for something else—counts as desired because it has become part of what happiness is to us. People don’t desire things for themselves until those things have become ingredients in their happiness.

So if someone desires virtue for its own sake, what’s driving that desire? It’s because:

  • they find the consciousness of being virtuous pleasant, or
  • they find the consciousness of lacking virtue painful, or
  • (most often) both at once.

Usually the pleasure and pain are intertwined: a person feels pleasure in the virtue they’ve achieved and pain at how far they still fall short. If virtue brought them neither pleasure nor pain—if its presence and absence felt emotionally flat—they wouldn’t love virtue for itself. At most, they’d want it for other benefits it might bring to them or to people they care about.

That answers the question of what kind of proof the principle of utility can have. If this psychological claim is true—if human beings desire nothing except what is either a part of happiness or a means to happiness—then no further proof is possible or needed that these are the only things desirable.

And if that’s true, then:

  • Happiness is the sole end of human action.
  • Promoting happiness is the test by which we should judge conduct.
  • And therefore it must be the standard of morality, since the parts are contained within the whole.

Now, is the psychological claim actually true? Do people desire nothing for its own sake except what gives them pleasure, or what spares them pain? That’s a question of fact, not wordplay. And like other questions of fact, it depends on evidence—on practiced self-awareness, careful self-observation, and observation of others.

My own view is that, if you look impartially, you’ll find that desiring something and finding it pleasant are inseparable—really two names for the same experience. Likewise, being averse to something and experiencing it as painful are two sides of one coin. To think of an object as desirable (when not thinking only about its consequences) is to think of it as pleasant. And to desire something beyond the degree to which the idea of it is pleasant is, I believe, psychologically impossible.

This may seem so obvious that few will dispute it. The more likely objection is different: people will say the will is not the same thing as desire. A person with settled virtue—or anyone with firm purposes—may carry out their plans without thinking about any pleasure in contemplating them, and may persist even when those pleasures shrink (because their temperament has changed, or their sensibilities have dulled), or even when pursuing those purposes brings them real pain.

I agree with all of that.

Will is an active capacity; desire is a more passive state of feeling. Will begins as a kind of offshoot of desire, but over time it can take root and separate from it. In fact, with established habits, the direction can flip: instead of willing something because we desire it, we may end up desiring it because we will it.

That’s simply the power of habit, and it isn’t unique to virtue. People continue doing many things out of habit that they originally did for some motive. This can happen in a few ways:

  • Sometimes people act from habit without noticing it; the awareness comes only afterward.
  • Sometimes they act with conscious choice, but the choice has become habitual and runs on “autopilot,” even against their calmer, reflective preferences—as often happens with harmful indulgences.
  • Finally, sometimes the habitual act of will isn’t at odds with one’s overall intention, but instead expresses it steadily—like the will of a person of mature virtue, or of anyone who deliberately and consistently pursues a chosen end.

Understood this way, the difference between will and desire is real and important. But what the fact amounts to is this: will, like everything else in us, can be shaped by habit. We can will by habit what we no longer desire for itself, or what we desire only because we’ve trained ourselves to will it.

None of that changes the deeper point: at the beginning, will is produced entirely by desire—including both the attraction of pleasure and the repulsion of pain.

So now set aside the person whose will to do right is firmly established. Look instead at someone whose virtuous will is weak—easily defeated by temptation and unreliable. How do you strengthen it? How do you plant or awaken a will to virtue when it doesn’t yet exist with enough force?

Only by making the person desire virtue—by helping them see virtue as pleasant, or the lack of it as painful. You do it by connecting doing right with pleasure, and doing wrong with pain, or by drawing out and making vivid—through lived experience—the natural satisfaction that comes with the one and the natural suffering that comes with the other.

That’s how you call forth a will to virtue. And once that will becomes solid, it can operate without constantly referencing pleasure or pain.

In short: will is the child of desire, and it escapes its parent only by falling under the rule of habit.

And what comes from habit is not automatically or intrinsically good. There’d be no special reason to want virtue to become independent of pleasure and pain—except for this practical fact: the pleasures and pains that steer us toward virtue are not reliable enough to produce steady action until habit comes to their support. In feeling and in conduct, habit is what creates dependability.

That dependability matters: to others, because they need to be able to rely on us; and to ourselves, because we need to be able to rely on our own character. That’s why we should cultivate the will to do right into a stable, habitual independence.

But notice what that means. This state of the will is a means to good, not a good in itself. So it doesn’t conflict with the claim that nothing is good for human beings except insofar as it is itself pleasurable, or a means of obtaining pleasure or avoiding pain.

If this doctrine is true, then the principle of utility is proved. Whether it is true or not must now be left to the thoughtful reader.

V — On The Connection Between Justice And Utility


Across the whole history of moral philosophy, one of the biggest roadblocks to accepting the idea that utility—human happiness and well-being—is the standard of right and wrong has been the idea of justice. The word justice doesn’t just sound important; it hits most people like an instinct. It arrives with a punch of certainty that makes it feel like it must be pointing to something “out there” in the world—some absolute feature of reality. So many thinkers have assumed that the just must exist in nature as something fixed and objective, usually treated as separate from mere “expediency” (what works), and even imagined as the opposite of it—though, in practice, people also admit that justice and long-term expediency rarely stay divorced for very long.

But we need to separate two questions that people constantly tangle together:

  • Where does the feeling of justice come from?
  • Does that feeling deserve to rule our behavior?

Even if a feeling is “natural,” that doesn’t automatically make every one of its impulses trustworthy. Justice could be an instinct and still need supervision—just like hunger, fear, jealousy, or any other built-in drive. And if we have intellectual instincts (automatic ways of judging) as well as animal instincts (automatic ways of acting), there’s no reason to assume the judging instincts are any more infallible than the acting ones. Our instincts can push us toward bad actions; they can also nudge us toward bad conclusions.

Still, in real life, people do slide from “this feeling is natural” to “this feeling reveals the truth.” Humans have a strong bias: when we can’t easily explain a powerful inner experience, we treat it like a window onto an external reality. That’s exactly what happens with justice. So the task here is to ask a sharper question: What is justice, really? Is the justice or injustice of an action a special quality—unique and separate from everything else about it? Or is it a particular way of bundling and viewing other qualities an action already has?

To answer that, we have to look at the feeling of justice too. Is it sui generis—a one-of-a-kind mental sensation like color or taste? Or is it a compound emotion, built out of more basic feelings fused together? This matters because many people are willing to concede something like this: “Sure, justice usually lines up with what’s generally good for society.” But they resist the next step, because the experience of justice feels different from the experience of plain “this would be useful.” Justice doesn’t merely recommend. It demands. Except in the most extreme cases, utility doesn’t grip the mind with the same urgency. So people assume that this extra force must come from a completely different source.

If we want to clear this up, we have to identify what makes justice—and especially injustice—distinctive. (Justice, like many moral concepts, is often easiest to define by the kinds of acts we condemn as its opposite.) What is the shared feature—if there is one—across the many different things people call unjust? And what separates “unjust” acts from other acts we disapprove of, but don’t label with that particular word?

If everything we regularly call unjust shares some common element (or cluster of elements), then we can ask a further question: would that element naturally gather around itself a feeling with the special heat and intensity we associate with justice, given how human emotions work? If yes, the mystery mostly dissolves: justice becomes a particular branch of general utility, explained by familiar psychological laws. If no—if there’s no plausible common element—then the feeling of justice starts to look like something we can’t explain in ordinary terms, and we’d need a different approach.

So let’s do the sensible thing: start with examples. Survey the kinds of actions and social arrangements that broad human opinion classifies as just or unjust. They’re varied, even messy, so I’ll move through them quickly rather than forcing them into a tidy scheme too early.

1) Violating someone’s legal rights
Most people consider it unjust to take away someone’s personal liberty, property, or anything else the law recognizes as theirs. In this straightforward sense:

  • It’s just to respect the legal rights of others.
  • It’s unjust to violate them.

But even here, obvious complications show up once you look at how people actually think.

For example, someone might be said to have forfeited their rights—an idea we’ll return to later.

2) Following (or breaking) a bad law
A second complication is even more important: sometimes the “rights” a person has are granted by a law that shouldn’t exist—because the law is bad.

When that happens (or when people believe it happens), they split into camps about whether it’s unjust to disobey:

  • Some argue that no individual should disobey any law, however harmful, and that resistance should only take the form of trying to change the law through legitimate authority.
    They defend this largely on expediency: society benefits when the habit of obeying law stays strong.
  • Others argue the opposite: any law you judge to be bad may be disobeyed, even if you see it as merely unwise rather than morally unjust.
  • A third group tries to draw a boundary: disobedience is justified only for unjust laws.
  • And then there are those who collapse the distinction entirely: every inexpedient law is unjust, because every law restricts natural liberty, and restriction is an injustice unless it can be justified by serving human good.

Underneath all this disagreement, one point seems universally conceded: laws can be unjust. So law can’t be the final standard of justice. The law can grant one person a benefit or impose on another a harm that justice condemns.

But notice something about how people talk when they call a law unjust. They almost always describe it in the same pattern as a lawbreaker’s injustice: it violates someone’s right. Since it isn’t a legal right in that case, it gets a different name: a moral right. So we can state a second broad form of injustice like this:

  • It’s unjust to take from someone—or withhold from them—what they have a moral right to.

3) Giving people what they deserve (and not what they don’t)
A third, and maybe the most vivid, form of injustice in ordinary thinking is this: it’s unjust when people get treated in ways they don’t deserve.

  • It’s just that people receive the good or the bad they deserve.
  • It’s unjust when someone gets a benefit they don’t deserve, or suffers a harm they don’t deserve.

This version of justice leans on the idea of desert. So what counts as “deserving”? In general terms, people say:

  • You deserve good if you do right.
  • You deserve evil if you do wrong.

And in a more personal sense:

  • You deserve good from those you have benefited.
  • You deserve harm from those you have harmed.

But there’s an important limit people recognize: “returning good for evil” isn’t usually treated as justice fulfilled. It’s treated as something else—an intentional choice to set aside what strict justice would demand, for the sake of other values.

4) Breaking promises and betraying expectations
A fourth kind of injustice is breaking faith: violating an engagement, whether explicit or implied. That includes disappointing expectations we’ve deliberately created through our conduct—at least when we created those expectations knowingly and voluntarily.

Like the other justice-based obligations, this one is not seen as absolute. People think it can be overridden:

  • by a stronger obligation of justice pointing the other way, or
  • by conduct from the other person that seems to release us from the obligation—often framed as them forfeiting what they were led to expect.

5) Favoritism where it doesn’t belong (impartiality and equality)
A fifth widely accepted injustice is partiality: giving unfair favor or preference to one person over another in matters where favoritism has no proper role.

But here’s a revealing detail: people don’t usually treat impartiality as a value floating on its own. They treat it as a tool—something we need in order to do some other duty correctly. That’s because favoritism isn’t always condemned. In fact, most favoritism in private life is considered normal, even admirable. You’d probably be criticized, not praised, for treating your family and friends exactly like strangers when you could help them more without violating any other duty. And nobody thinks it unjust to choose one person over another as a friend or companion.

So where does impartiality become obligatory? Most clearly, where rights are at stake. A court must be impartial because it has one job: award the contested thing to the party who has the right to it, regardless of anything else.

In other contexts, impartiality means something slightly different:

  • For judges, teachers, and parents administering rewards and punishments, impartiality often means being guided by desert.
  • For public appointments, it often means being guided by the public interest.

So, broadly speaking, impartiality as a duty of justice means this: be guided only by the reasons that ought to decide the case, and resist motives that would pull you away from those reasons.

Closely tied to impartiality is equality, which many people treat as central—sometimes even as the essence of justice. But this is also where you can watch justice shift shape from person to person, in a way that tracks their beliefs about what’s useful.

People commonly insist that equality is required by justice—except where they think inequality is needed for society to work well. And that pattern shows up everywhere.

Some will defend extreme inequality in the rights people have, but still insist that whatever rights exist should be protected equally. Even in slave societies, it’s often admitted in theory that the slave’s rights (thin as they are) should be enforced as strictly as the master’s; a court that doesn’t enforce them equally is called unjust. Yet the institutions that leave slaves with almost no enforceable rights may not be labeled unjust—because the society doesn’t consider them socially harmful.

Likewise:

  • If someone believes society needs rank and hierarchy, they won’t call unequal distributions of wealth and privilege unjust.
  • If someone believes that inequality is socially damaging, they’ll call it unjust.

Even the basic inequalities created by government power illustrate the point: anyone who thinks government is necessary won’t see injustice in granting officials powers ordinary people don’t have. And even among those who favor radical leveling, disputes about “justice” multiply as disagreements about “what works” multiply.

Consider debates among communists alone:

  • Some say it’s unjust to divide the product of collective labor by anything other than exact equality.
  • Others say justice demands giving more to those with greater needs.
  • Others say those who work harder, produce more, or provide more valuable services may justly claim a larger share.

And each side can make what sounds like a powerful appeal to “natural justice.”

With so many different uses of the word—uses that people don’t treat as mere confusion—it’s hard to find the mental thread that ties them together, the thread that gives the term justice its unique emotional force. One possible clue comes from the history of the word itself.

Across many languages, the word that corresponds to just points back to an origin connected with positive law, or with its earlier ancestor: authoritative custom.

  • Justum is related to jussum: “that which has been ordered.”
  • Jus shares the same root.
  • The Greek word associated with “the just” traces back to the idea of a lawsuit.
  • Recht (source of right and righteous) becomes synonymous with law.

Some argue that recht originally meant physical straightness, and that words like wrong originally meant twisted; from that, they suggest that law took its meaning from moral rightness rather than the other way around. But whichever direction the history runs, it’s telling that words like recht and droit came to mean positive law, even though much that’s morally “straight” is not written into law. That narrowing of meaning reveals something deep about how moral ideas formed.

We still see it in modern phrases: “courts of justice” and “the administration of justice” are basically “courts of law” and “law enforcement.” In French, la justice is the standard word for the judiciary itself. All of this points to a strong conclusion: the mother idea, the primitive core out of which the notion of justice grew, was conformity to law.

Among the ancient Hebrews—up to the rise of Christianity—this was basically the whole idea of justice, which makes sense for a people whose laws were meant to cover nearly all of life and were believed to come directly from God.

But the Greeks and Romans knew their laws were made by humans. So they weren’t afraid to admit that humans can make bad laws—laws that authorize actions which, if done privately without legal backing, would be condemned as unjust. That recognition produced a shift: the feeling of injustice attached itself not to every violation of law, but to violations of the laws that ought to exist (including laws that don’t yet exist), and even to laws themselves when they were believed to contradict what law ought to be.

In other words, even when actual law stopped being treated as the standard of justice, the structure of the idea remained heavily legal. Justice still meant something like: conformity to the rules that deserve enforcement.

Of course, people also apply justice to countless areas where no one wants legal regulation. Nobody wants the state policing every detail of private life, and yet we all agree that ordinary daily behavior can still be just or unjust.

Even here, though, the old legal echo remains. We often feel a kind of satisfaction at the thought that unjust acts should be punished, even when we don’t think it would be wise to hand that job to courts. We give up that satisfaction because of the side effects and dangers. Many of us would like to see just conduct compelled and unjust conduct restrained even in tiny matters—if we weren’t, quite reasonably, afraid of giving officials unlimited power over individuals.

That’s why ordinary language so often slips into legal imagery. When we think someone is bound by justice to do something, we routinely say they ought to be compelled to do it. We’d be pleased to see the obligation enforced by anyone who had the power.

When we realize that enforcing a rule through law would backfire—maybe it would be impossible to police, too costly, or would cause even worse harms—we don’t suddenly stop caring about the injustice. We just shift gears. We mourn that we can’t fix it legally, we treat the “free pass” for wrongdoing as a real social loss, and we try to compensate by bringing the weight of personal and public disapproval down on the person who did it.

So even when the law can’t step in, the idea of legal pressure is still doing the heavy lifting in the background. It’s the seed from which our modern notion of justice grows—though that seed gets reshaped a few times before the concept becomes what it is in a mature society.

That story, as far as it goes, explains how the idea of justice develops. But notice what it doesn’t yet explain: it still doesn’t clearly separate justice from moral obligation in general.

Because here’s the inconvenient truth: the core idea behind law—punishment—shows up in all moral “wrongs,” not just in injustice. In everyday thinking, we don’t call something “wrong” unless we mean something like:

  • someone deserves to be punished somehow,
  • if not by courts, then by social condemnation,
  • and if not by other people, then by the person’s own conscience.

That’s the real dividing line between morality and mere “this seems like a good idea.”

Morality: When “should” quietly means “may be forced”

A key feature of duty—in any form—is that we think a person can be rightfully pressured into doing it. Duty is something that can be demanded, like a debt. And if we didn’t think it could be demanded, we wouldn’t call it duty at all.

Sure, there may be practical reasons not to enforce it. Maybe it would be unwise, or it would harm innocent people, or it would create bigger problems. But even then, we still assume the person wouldn’t have grounds to complain if society did press them. The obligation is real; enforcement is negotiable.

Now contrast that with another category of behavior: things we’d like people to do, maybe even admire them for doing, and might dislike them for skipping—but where we also admit they aren’t actually bound.

In those cases:

  • we praise, encourage, and urge,
  • but we don’t treat them as proper targets of punishment,
  • and that’s why we don’t call it a moral obligation.

Exactly how we got these ideas—deserving punishment versus not deserving it—can be explored later. But the distinction itself is foundational. In practice, we label an action wrong (instead of merely “gross,” “rude,” or “stupid”) when we think the person ought to be punished for it. And we call something right (instead of merely “nice” or “admirable”) when we think the person should be compelled, not merely persuaded, to act that way.

So: punishment isn’t what separates justice from other moral ideas. It separates morality as a whole from simple calculations of what’s convenient or praiseworthy.

That means we still have work to do. What makes justice a special subset of morality?

Justice: The part of morality that creates a claim

Ethicists often split duties into two groups, using the clunky labels perfect and imperfect obligations.

  • Imperfect obligations are real duties, but you get discretion over when and to whom you fulfill them. Charity and beneficence are the classic examples: you’re obligated to be charitable, but you aren’t obligated to give this person help right now.
  • Perfect obligations, in the language of legal philosophy, are duties that come with a matching right held by some particular person or persons.

That distinction lines up almost exactly with the difference between justice and the other moral virtues.

When ordinary people talk about justice, there’s nearly always an implied personal right—a specific claim someone can make, in the way the law recognizes a property right or a contractual entitlement. Look at the common ways we describe injustice:

  • taking what someone possesses,
  • breaking faith or a promise to them,
  • treating them worse than they deserve,
  • treating them worse than others who have no stronger claim.

In every case, two things are in play:

  1. a wrong has been done, and
  2. there is some identifiable person (or group) who has been wronged.

Even when “injustice” means giving someone too much—like granting unfair advantages—the wrong still lands on identifiable people: the competitors who were pushed aside.

That, to me, is the defining feature. Justice isn’t just about what’s generally right. It’s about what someone can demand from you as their moral due.

Generosity and beneficence don’t work that way. No one has a moral right to your generosity, because you aren’t morally required to be generous to any particular person at any particular moment.

And the cases that seem to challenge this definition actually reveal how solid it is. When someone claims, for example, that humanity as a whole has a right to every good deed you’re capable of doing, they aren’t keeping beneficence separate from justice—they’re collapsing beneficence into justice. They’re forced to describe our efforts as:

  • something we owe to others (a debt), or
  • something required as a return for what society gives us (gratitude),

and both debt and gratitude already fall squarely under justice.

The rule is simple: wherever there is a right, you’re in the territory of justice, not mere beneficence. And anyone who refuses to draw the line here usually ends up erasing the line altogether—turning all morality into justice.

Where the “justice feeling” comes from

Once we’ve identified what justice is, the next question is psychological: is the intense feeling that comes with justice planted in us as a special moral instinct, or can it be explained as something that grows naturally out of human nature—and, in particular, out of considerations tied to general usefulness?

My view is this: the feeling itself doesn’t start as what most people would call “a calculation of expediency.” But everything moral about that feeling does.

We can break the sentiment of justice into two essential pieces:

  • a desire to see someone punished for doing harm, and
  • the belief that some definite individual (or individuals) has been harmed.

That first piece—the desire to punish a person who harms someone—is, I think, a natural outgrowth of two deeply human tendencies that function like instincts:

  • self-defense, and
  • sympathy.

It’s natural to resent harm, and to repel it or retaliate, when it’s aimed at us—or at people we care about. You can debate whether that impulse is “instinct” or learned intelligence, but its presence is obvious across animal life. Animals attack what attacks them, or what they believe is about to attack them or their young.

Humans differ from other animals in two major ways.

First: our sympathy can expand far beyond our own offspring or immediate circle. We can care not only about family and friends, but about strangers, whole communities, and even—at least in principle—every sentient being.

Second: our intelligence scales up both our self-interest and our sympathy. We can recognize a shared interest with the wider society we live in. And once you see that connection, threats to social security don’t feel like “someone else’s problem”—they feel like threats to you. Your self-defensive impulse wakes up, not just for personal attacks, but for actions that endanger the social conditions you depend on.

That same intelligence, paired with broad sympathy, lets a person attach emotionally to the larger idea of “my tribe,” “my country,” or “humanity.” And then an injury to that collective triggers sympathy and resistance even when you personally weren’t harmed.

So the justice impulse—the urge to punish—looks like ordinary retaliation or vengeance that has been widened by intellect and sympathy to cover injuries that harm us through society, or in common with society.

By itself, that retaliatory impulse isn’t moral at all. What makes it moral is the way we discipline it—subordinating it to social sympathy, so it only acts when the general good calls for it. Our raw nature wants to lash out at whatever feels unpleasant or insulting. But when that impulse is “moralized,” it becomes selective:

  • a just person resents harms that society has a shared interest in preventing, even when the harm isn’t personally painful to them,
  • and they don’t treat every private injury—however painful—as something that automatically deserves punishment, unless it’s the sort of wrong society must repress for everyone’s sake.

“But I’m thinking about this one case, not society”

You might object: when people feel injustice, they aren’t consciously thinking about “society’s interest.” They’re thinking about this particular wrong, right here.

True—and also not a real problem for the explanation.

Lots of people get angry simply because they’ve been hurt. That’s common enough, even if it isn’t admirable. But someone whose resentment is genuinely moral—someone who checks whether an act is blameworthy before indulging the anger—still experiences their reaction as a defense of a rule meant to protect others as well as themselves. They might not say, in words, “I’m defending society,” but they feel they’re standing up for something that everyone has reason to want.

If they don’t feel that—if they’re reacting only to how the act affects them personally—then they aren’t acting from a conscious sense of justice at all. They’re just angry.

Even anti-utilitarian moralists effectively admit this. Consider Kant’s famous test: act only on a rule you could will as a law for all rational beings. That principle only makes sense if the agent, when deciding conscientiously, keeps in mind the interests of humanity collectively—or at least the interests of people generally, without special pleading for themselves. Otherwise the words are empty. After all, you can’t plausibly argue that a rule of pure selfishness is literally impossible for rational beings to adopt; it’s adopted all the time. To make Kant’s point meaningful, the idea has to be: choose a rule that all rational beings could adopt with benefit to their shared interests.

The logic of justice, step by step

Here’s the structure.

Justice contains:

  • a rule of conduct—assumed to apply to everyone and meant for everyone’s good, and
  • a sanctioning sentiment—the desire that violators should suffer punishment.

And then justice adds something more:

  • the idea of a definite person (or persons) who is harmed by the violation—whose “rights” are violated.

The justice sentiment begins as the animal urge to defend yourself and retaliate against harm, extended outward by:

  • expanded sympathy (we can care about everyone, not just “ours”), and
  • intelligent self-interest (we understand that our personal security depends on social security).

Those later elements supply the morality. The earlier, animal element supplies the heat—the insistence, the intensity, the feeling that this isn’t merely “unfortunate,” but intolerable.

What “a right” really means

All along, I’ve treated the “right” of the injured person not as a third ingredient separate from harm and punishment, but as the form those two ingredients take when we talk about justice.

If you look closely at your own thinking, “a violated right” mostly means this:

  • there’s an identifiable person harmed, and
  • society should respond with protection and sanction.

When we say something is a person’s right, we mean they have a valid claim on society to protect their possession of it—through law, or through education and public opinion (or both). If we think their claim is sufficient, whatever the reason, we say they have a right.

And if we want to show something is not theirs by right, we usually think we’ve done enough once we agree that society shouldn’t take special measures to secure it for them—it should be left to chance or to their own efforts.

For example:

  • A person has a right to pursue what they can earn through fair professional competition, because society shouldn’t let others block them from trying.
  • But they don’t have a right to earn a specific income—say, three hundred a year—just because they happen to be earning it; society isn’t obligated to guarantee that outcome.
  • On the other hand, if someone owns a financial asset that entitles them to three hundred a year, then they do have a right to that income—because society has taken on the obligation to secure the system that pays it.

So, to have a right is to have something that society ought to defend you in possessing.

If you ask why society ought to do that, I have no answer other than general utility.

If “utility” sounds too thin—if it doesn’t seem to match the punchy, almost absolute feeling we associate with rights—that’s because the sentiment isn’t purely rational. It includes an animal component: the thirst for retaliation. And that thirst draws both its intensity and its moral legitimacy from the particular kind of utility involved here: security.

Security is the most vital interest people have. Nearly every other good is optional in some sense: one person needs it, another doesn’t; many can be given up or swapped for something else. But no one can do without security. Everything we value beyond the immediate moment depends on it. If someone stronger could strip you of everything at any instant, then nothing would be worth much except whatever pleasure you could grab right now.

And because security is that fundamental, you only get it if the machinery that produces it—rules, enforcement, mutual restraint—keeps running continuously.

That’s why our sense of what we can demand from others, when it comes to the basic conditions of safety, gathers feelings far more intense than ordinary “this would be useful” thinking. A difference in degree becomes, psychologically, a difference in kind. The claim starts to feel absolute, infinite, incomparable with competing considerations. That’s what separates the moral feeling of right and wrong from the softer judgments of ordinary expediency and inexpediency.

The feelings are so strong, and we expect others to share them (because everyone is equally invested), that “ought” and “should” harden into “must.” What begins as recognized indispensability becomes moral necessity—analogous to physical necessity, and often nearly as binding.

Why justice seems “self-evident” but isn’t

If this analysis is wrong—if justice has nothing to do with utility, and is instead a standalone standard the mind can read off from pure introspection—then it’s hard to explain two things:

  • why this supposed inner oracle is so unclear, and
  • why so many issues can look just or unjust depending on how you frame them.

People constantly warn us that “utility” is a shaky standard because different people interpret it differently. They say the only safe guide is the fixed, unmistakable, self-authenticating voice of justice—supposedly independent of shifting opinions and social change.

You might think that once we define justice, arguments about it would basically disappear—that applying the rule would feel as certain as doing a math proof. In real life, it’s the opposite. People fight about what’s just with the same intensity they fight about what’s useful.

That’s true across nations and eras, but it’s also true inside a single person’s head. “Justice” isn’t one tidy principle. It’s a bundle of principles that don’t always line up. And when they clash, people usually pick between them using something outside “justice” itself—social custom, some other moral yardstick, or plain personal preference.

Look at punishment. Three views can all sound convincing:

  • “Punishment as an example is unjust.” On this view, punishment is only just if it’s meant to benefit the offender—reform them, help them improve.
  • “Punishment for the offender’s own good is unjust.” On this view, forcing an adult to suffer “for their own benefit” is basically tyranny. But punishing to protect others—like self-defense—is legitimate.
  • “Punishment is always unjust.” Robert Owen argues that criminals don’t manufacture their own character from scratch. Education and circumstances shape them. If someone couldn’t help becoming what they are, how can it be just to punish them?

If you argue this as “justice, period,” without digging into what gives justice its authority, it’s hard to see how to refute any of them. Each position leans on a genuine rule of justice:

  • The first appeals to the injustice of making one person a sacrifice for other people’s benefit without consent.
  • The second appeals to the justice of self-defense and the injustice of forcing your idea of someone’s good onto them.
  • The third appeals to the principle that it’s unjust to punish someone for what they couldn’t help.

Each speaker wins as long as they only have to consider the one rule they’ve chosen. But the moment you put the rules side by side, they collide. None of these views can be fully carried out without trampling another rule that also feels binding.

People have long felt this problem, and they’ve invented ways to dodge it rather than solve it. To escape the last view (“no one deserves punishment because they didn’t make themselves”), people invented free will in a particular sense: the idea that you can only justify punishment if the person’s character became what it is without being shaped by prior causes.

To escape the other conflicts, another popular trick was the social contract story: at some unknown time, everyone “agreed” to obey the laws and accepted punishment for disobedience, supposedly granting lawmakers the right to punish—either for the person’s own good or for society’s good—when otherwise they’d have no such right. This is then backed by an old legal maxim: volenti non fit injuria—what’s done with someone’s consent isn’t an injury, and therefore isn’t unjust.

But even if that consent weren’t mostly fictional, this “consent makes it just” rule isn’t more authoritative than the other rules it’s supposed to override. If anything, it’s a good example of how messy so-called “principles of justice” often are. This maxim clearly grew up to serve the rough practical needs of courts, which sometimes have to rely on shaky presumptions because trying to be too precise could create bigger harms. And even courts can’t stick to it consistently: they void “voluntary” agreements for fraud, and sometimes even for honest mistake or misinformation.

Now suppose we accept that punishment can be legitimate. A new tangle appears: how should punishment be measured? The most primitive, instinctive rule people reach for is lex talionis—“an eye for an eye.” Europe has mostly abandoned it in practice, but many people still feel an unspoken attraction to it. When someone happens to suffer a harm in exactly the form they inflicted, the wave of public satisfaction shows how natural this impulse is.

Beyond that, people split again:

  • Some say justice requires punishment to be proportioned to the offense, meaning matched to the offender’s moral guilt (however they measure guilt). For them, deterrence is irrelevant to justice.
  • Others think deterrence is the whole point: it’s unjust to inflict any suffering beyond the minimum necessary to stop the offender from repeating the act and to discourage imitation by others.

Or take pay and merit. In a cooperative workplace, is it just for greater talent or skill to earn higher pay?

Those who say no argue that:

  • If someone does the best they can, they deserve no less respect than anyone else.
  • It’s unjust to put a person at a disadvantage for something that isn’t their fault.
  • The more gifted already enjoy plenty of rewards—admiration, influence, and the inner satisfaction of excellence—without also claiming a larger share of material goods.
  • Justice should push society to compensate the less fortunate for unearned disadvantages, not deepen the gap.

Those who say yes respond that:

  • Society gains more from the more effective worker.
  • If someone’s work is more useful, society owes them a larger return.
  • A bigger share of the joint product is, in a real sense, their contribution; denying that claim is a kind of robbery.
  • If they’re paid the same as everyone else, you can only justly demand the same output from them—which means they may fairly give less time and effort, proportioned to their higher efficiency.

So who decides? Here justice has two faces that don’t fit neatly together. One side focuses on what it’s just for the individual to receive. The other focuses on what it’s just for the community to give. Each side, from its own angle, is hard to answer. If you try to pick between them using “justice” alone, the choice becomes arbitrary. Only social utility can settle the preference.

Taxation shows the same pattern. People appeal to multiple standards of justice, and they don’t reconcile easily:

  • One view says taxation should be in simple proportion to wealth.
  • Another insists on graduated taxation, taking a higher percentage from those who have more.
  • And a third standard—plausible as “natural justice”—could argue for ignoring means entirely and charging everyone the same absolute amount, like a club fee: everyone receives the same protection from law and government (so the story goes), so everyone should pay the same price. In ordinary commerce, we don’t call it unjust when a seller charges all customers the same price for the same item rather than adjusting the price to each buyer’s ability to pay.

Almost no one defends this flat-fee idea for taxes, because it clashes with human sympathy and with obvious social practicalities. But the principle of justice it invokes is no less real than the principles people invoke against it. That’s why it quietly shapes the arguments people make for other tax systems. For example, people often feel they must claim that the state does more for the rich than for the poor, to justify taking more from the rich. But that’s not really true. Without law and government, the rich would typically be better able to protect themselves than the poor—and would likely end up dominating them, perhaps even reducing them to slavery.

Others compromise by saying: everyone should pay an equal head tax for protection of their persons (since each person’s life matters equally), but unequal taxes for protection of property (since property holdings differ). And then someone replies: “But the whole of one person’s property matters to them just as much as the whole of another person’s property matters to them.” Again, the concepts of justice collide. The only way out is to appeal to utility.

So is the difference between “the just” and “the expedient” just an illusion? Have people been fooling themselves in thinking justice is more sacred than policy, and that you only consult practicality after justice has been satisfied?

Not at all. The account we’ve given of where the feeling of justice comes from still recognizes a real distinction—and I value that distinction as much as anyone who claims to scorn consequences in morality. What I reject is any theory that invents a standard of justice floating free of utility. The justice that is grounded on utility, I take to be the central part—by far the most sacred and binding part—of morality.

What “justice” names, in practice, is a special class of moral rules. These rules guard the essentials of human well-being more directly than other moral advice, and that’s why they feel more absolute in their authority. And the feature we found at the core of justice—a right belonging to some individual—expresses exactly this: a stronger, more binding kind of obligation.

The moral rules that forbid people from harming one another—especially including wrongful interference with freedom—matter more to human well-being than any rules, however important, that merely tell us how to manage this or that area of life.

They also have a special social role: they largely determine the emotional climate of human society. Only their general observance preserves peace. If obedience weren’t the norm—and disobedience the exception—everyone would see everyone else as a likely enemy and would live in constant guard.

And notice something else. These are the rules people have the strongest, most immediate incentive to teach and enforce in each other. You might not gain anything from giving someone prudent advice, and even encouragement toward kindness is a less urgent interest—you may or may not need other people’s help. But you always need other people not to harm you.

So the moral codes that protect each person from being hurt by others—either through direct injury or by blocking their freedom to pursue their own good—are the rules individuals care about most and the rules they have the strongest reason to broadcast and enforce.

In fact, a person’s standing as someone fit to live among others is judged mainly by whether they follow these rules. That’s what determines whether they’re a nuisance—or worse—to the people around them.

These, first and foremost, are the obligations we call justice. The clearest cases of injustice—the ones that set the emotional tone of outrage—are:

  • Wrongful aggression against someone.
  • Wrongful exercise of power over someone.
  • Wrongfully withholding what someone is due.

In each case, you harm the person: either by causing direct suffering, or by depriving them of some good they had reasonable grounds, physical or social, to count on.

The same powerful motives that push us to obey these basic rules also push us to punish those who violate them. Self-defense, defense of others, and even vengeance all rise up against such offenders. That’s why retribution—evil for evil—gets tightly fused with the feeling of justice and is widely treated as part of the idea itself.

But justice also contains another demand: good for good. Its usefulness to society is obvious, and it comes with a natural human pull. Yet it doesn’t seem, at first glance, to connect to injury in the way the most basic cases do—and that’s where the intensity of the justice-feeling originally comes from.

Still, the connection is real. Someone who accepts benefits and then refuses to return them when needed causes genuine harm: they shatter a natural, reasonable expectation—an expectation they at least quietly invited, since otherwise the benefit likely wouldn’t have been given in the first place. The seriousness of disappointed expectation shows up in how we judge two especially ugly wrongs: betraying a friendship and breaking a promise. Few blows hurt more than finding, in your hour of need, that what you relied on with full confidence fails you. And few wrongs provoke more resentment—both in the victim and in sympathetic observers—than this withholding of good.

That’s why the principle of giving each person what they deserve—good for good as well as evil for evil—belongs inside justice as we’ve defined it, and it deserves the powerful emotional weight that makes “the just” feel, to most people, higher than “the merely expedient.”

Most of the everyday “rules of justice” that people cite in real disputes are really tools for applying these deeper principles. Rules like:

  • A person is responsible only for what they did voluntarily, or could have avoided voluntarily.
  • It’s unjust to condemn someone without hearing them.
  • Punishment should be proportioned to the offense.

These rules are designed to stop the “evil for evil” principle from being twisted into cruelty—into inflicting evil without proper justification.

Many of these maxims grew out of court practice. Courts had strong reasons to spell them out more fully than ordinary moral reflection would. Courts must do two things: punish when punishment is due, and secure each person’s right. To do that reliably, they needed a more detailed machinery of rules.

That’s why the first judicial virtue—impartiality—is an obligation of justice: it’s a necessary condition for fulfilling the other duties of justice. But impartiality and equality don’t rank so high only for that reason. In one respect, they can be seen as natural consequences of the principles we’ve already laid down.

If it really is our duty to treat people according to what they deserve—to return good for good and to restrain wrongdoing with appropriate penalties—then a big consequence follows: when two people have earned the same regard from us, and no stronger obligation overrides it, we ought to treat them equally well. And on the larger scale, society should treat equally well all who have equally earned society’s respect—meaning, who have deserved equally well in general, not just relative to one group’s preferences.

That’s the highest, most abstract standard of social and distributive justice: equal treatment for equal desert. It’s the target that institutions—and the efforts of decent citizens—should try, as far as possible, to move toward.

But this duty isn’t a little add-on, derived from some secondary rule. It runs deeper than that. It’s a direct expression of the first principle of morality itself: utility, or the Greatest Happiness Principle.

Because without equality, “the greatest happiness” is just a slogan. It has no rational meaning unless one person’s happiness—when equal in amount (making reasonable allowance for kind of pleasure)—counts for exactly as much as another person’s. Once you accept that, Bentham’s famous line fits perfectly as a plain-English annotation to utilitarianism:

  • Everybody counts for one. Nobody counts for more than one.

From the point of view of both the moral thinker and the lawmaker, an equal claim to happiness implies an equal claim to the means of happiness—except where the unavoidable facts of human life, and the overall interest of society (which includes everyone’s interest), impose limits. And when we set such limits, we should interpret them strictly, not use them as a convenient excuse for favoritism.

Of course, like every other maxim of justice, this one is not applied everywhere, all the time. People bend it to match their own ideas of what “society needs.” Still, whenever they admit that the maxim applies in a case at all, they treat it as the voice of justice. The default assumption becomes:

  • People have a right to equal treatment
  • unless some widely recognized social benefit clearly requires unequal treatment instead

That’s why social inequalities have a special way of changing character over time. As long as an inequality is believed to be “useful,” critics may call it unwise or harsh, but many won’t call it unjust. The moment it stops looking useful, though, it doesn’t merely become inconvenient—it starts to look like injustice. And not just mild injustice, but something tyrannical and shocking. People then stare at the past and ask, “How could anyone ever have accepted that?”

They forget something important: they themselves are probably tolerating other inequalities right now under an equally mistaken belief about expediency. If that belief were corrected, what they currently defend might one day look as monstrous as what they’ve finally learned to condemn.

In fact, the history of social progress is largely a history of these moral reclassifications. One custom or institution after another has moved through the same stages:

  • first treated as a basic necessity for society
  • later recognized as a mere convenience (or not even that)
  • finally condemned as a widely acknowledged injustice and form of oppression

That’s happened with the divisions between slaves and free people, nobles and serfs, patricians and plebeians. And the same pattern will continue—and in some places has already begun—with hierarchies built on color, race, and sex.

Justice, then, is utility at the highest stakes. It names a set of moral requirements which—taken together—rank extremely high on the scale of social usefulness, and therefore carry a stronger, more urgent obligation than most other moral considerations. Still, there can be particular situations where another duty matters so much that it overrides one of the usual rules of justice.

Sometimes the “right” action, judged by the whole situation, looks like a violation of ordinary justice. For example: to save a life, it might not only be permissible but actually required to do things that usually count as serious wrongs—steal food or medicine, take it by force, or even abduct the only qualified doctor and compel them to treat the patient.

When that happens, we usually protect the idea that justice is always a virtue by adjusting our language. We don’t normally say, “Justice must give way.” Instead we say something like: what would ordinarily be just isn’t just in this special case, because another overriding moral reason changes what justice requires here. This verbal habit is useful: it preserves the sense that justice is something you can’t rightly set aside at will, and it spares us from having to defend the odd idea that there can be such a thing as “praiseworthy injustice.”

With that in place, the central difficulty people raise against utilitarianism loses its force. Everyone has always seen that cases of justice are also cases of expediency—they concern what promotes human well-being. The difference is the distinctive feeling that clings to justice in a way it doesn’t to ordinary calculations of convenience.

If we can explain that feeling—without inventing some mysterious special origin for it—then justice stops looking like a problem for utilitarian ethics. And we can explain it. The emotional core is the natural human impulse of resentment at injury and wrongdoing, “moralized” by extending it so that it matches the demands of the common good. When that feeling is tied to social welfare in this way, it not only does arise in the kinds of situations we label “justice,” it ought to arise there.

So justice remains what it has always been: the name we give to certain social utilities that are vastly more important—and therefore more commanding—than most others as a general class (even though, in specific circumstances, another duty can outweigh a particular rule of justice). And because these utilities are so vital, they are—and should be—protected by a sentiment that isn’t merely stronger than ordinary concern for comfort or convenience, but different in kind: marked by sharper, more definite demands, and backed by harsher, more serious sanctions.

Mr. Herbert Spencer has argued that the utilitarian commitment to perfect impartiality—treating equal amounts of happiness as equally valuable no matter who feels them—shows that utility can’t be the ultimate guide to right and wrong. He says utilitarianism must secretly rely on an earlier principle: that everyone has an equal right to happiness.

But it’s more accurate to put it this way: utilitarianism holds that equal quantities of happiness are equally desirable, whether experienced by the same person or by different people. And this is not some extra premise propping up utilitarianism from the outside. It is utilitarianism. What else is the principle of utility, if not the claim that “happiness” and “desirable” are, in the relevant moral sense, the same idea?

If any “prior” principle is implied at all, it’s only this: that we can apply the truths of arithmetic to happiness the way we apply them to any other measurable quantity—that adding and comparing quantities makes sense here, too.

Spencer has also clarified, in private correspondence, that he isn’t an enemy of utilitarianism. He agrees that happiness is the ultimate end of morality. His disagreement is methodological: he thinks you can reach moral truth only partially by generalizing from observed results of conduct, and fully only by deducing, from the laws of life and the conditions of existence, what kinds of actions necessarily tend to produce happiness and what kinds necessarily tend to produce unhappiness.

Apart from his use of the word “necessarily,” there’s little to quarrel with in that. Remove that one word, and I don’t know of any modern utilitarian who would reject the approach. Bentham, in particular, was anything but unwilling to deduce the effects of actions on happiness from human nature and the general conditions of life. If anything, the common criticism of Bentham is the opposite: that he leaned too heavily on deduction and gave too little weight to generalizations drawn from specific experience.

The sound position, in my view—and, I gather, in Spencer’s as well—is that ethics, like every other serious science, needs both methods. We need conclusions reached by careful observation, and we need deductions from broader laws and conditions. And we need them to converge—each correcting, corroborating, and verifying the other—before a moral principle earns the kind and level of evidence that counts as genuine scientific proof.

I — To What Extent Forms of Government Are a Matter of Choice

When people argue about the “best” form of government, they’re usually talking past each other because they’re starting from two very different pictures of what government even is.

Two competing ways to think about government

1) Government as a tool you can design.
Some thinkers treat government like engineering. Decide what you want it to accomplish, pick the design that seems to achieve the most good with the least harm, and then convince everyone to adopt it. In this mindset, political institutions are basically human inventions—no different in spirit (scale aside) from designing a new piece of machinery. The sequence is:

  • Define the goals government should serve.
  • Figure out which system best serves those goals.
  • Persuade the public to agree.
  • Mobilize them to demand it.

2) Government as something that grows.
Other thinkers say that’s fantasy. They see a government less like a machine and more like an organism—something that develops out of a people’s history, habits, and character. On this view, you don’t “choose” institutions the way you choose a tool. You mostly inherit them. People respond to immediate pressures with improvised fixes; if those fixes fit national feelings and ways of life, they stick, pile up over time, and eventually become a coherent political system. Try to impose a constitution on a society that hasn’t grown into it, and you’re likely to fail—not because the constitution is abstractly bad, but because the society’s circumstances never generated it.

The truth is: both extremes are wrong—and both contain something important

Taken as absolute claims, either doctrine becomes ridiculous. In real life, hardly anyone believes either one all the way.

Even if you think governments are “designed,” you still know you can’t just hand any people any constitution and expect it to work. It’s like buying the “best” machine without asking whether you have the skills, materials, and trained operators to run it.

And even if you talk as though institutions “grow,” you don’t actually believe humans have no choice. You don’t deny that consequences matter, or that some systems are better than others, or that people can sometimes change what they live under.

So the sensible approach is to dig down to what each side is really getting at—and keep the truth, not the exaggeration.

First: political institutions are made by people

However often people forget it, governments don’t just appear. Nobody wakes up to find a constitution sprouted overnight like a plant. At every stage, political institutions are shaped by human actions and decisions. That means:

  • They can be built well or badly.
  • Skill and judgment can improve them—or the lack of those can ruin them.

And if a society didn’t get to build institutions through the slow, practical method of fixing one abuse after another—maybe because of outside pressure or weakness—that delay is a serious disadvantage. But it does not prove that a system that worked well elsewhere couldn’t have worked for them, too, or couldn’t work later if they decide to adopt it.

Second: political “machinery” only works if people can and will run it

Even if a constitution is brilliantly designed, it doesn’t operate by itself. People have to work it—ordinary people, not saints or geniuses. That means any form of government has to fit the real capacities and character of the population available.

In practice, that requires three conditions:

  1. Acceptance: The people must be willing to accept the system—or at least not so hostile that they make it impossible to establish.
  2. Maintenance: They must be willing and able to do what’s needed to keep it in existence.
  3. Operation: They must be willing and able to do what the system demands so it can achieve its aims.

And “do” here includes both action and restraint: not only taking part when needed, but also holding back when self-control is required.

If any one of these conditions fails, then no matter how promising the system looks on paper, it doesn’t fit that particular society—at least for the time being.

Why these conditions fail (with real-world examples)

1) People can simply reject a form of government.
This is so common it hardly needs explaining. A community that deeply hates a particular political order won’t accept it without coercion.

  • A tribal society used to loose social control won’t submit to the discipline of a regular, “civilized” state unless forced from outside.
  • The peoples who swept into the Roman world needed centuries, plus changed circumstances, to develop steady obedience even to their own leaders.
  • Some nations accept only rulers from particular families, because tradition grants those families the right to lead.
  • Some societies won’t tolerate monarchy; others refuse a republic.

Sometimes the resistance is so strong that the system isn’t just undesirable—it’s effectively impossible.

2) People may want a free government but can’t sustain it.
A society might genuinely prefer liberty and still be unfit to keep it. If citizens won’t make the effort freedom requires—if they won’t defend it when attacked, if they’re easily tricked into giving it away, or if fear and panic can stampede them into surrendering their rights—then freedom won’t last. The same is true if a burst of admiration for a single “great man” leads them to hand him powers that let him dismantle the very institutions meant to protect them. In those conditions, a free government might still have been beneficial for a time, but it probably won’t endure.

3) People may be unable (or unwilling) to live by the restraints civilized government requires.
Some communities understand the benefits of order and prosperity but can’t yet practice the everyday self-restraint that makes those benefits possible. If pride and passion are so strong that people can’t stop settling disputes through private violence, then a truly “civilized” government—one that secures peace and predictable rules—may need to be significantly despotic at first: a system the people don’t control, one that forcibly restrains them until social habits change.

A related problem shows up when citizens won’t cooperate with law enforcement and public authorities. If people:

  • hide criminals instead of helping catch them,
  • lie under oath to protect someone who harmed them (as among the Hindus),
  • refuse to intervene even when a stabbing happens in the street because “that’s the police’s job,”
  • feel horror at an execution but shrug at an assassination,

then government needs stronger coercive powers to secure the bare minimum of civilized life. These attitudes often trace back to earlier bad government—people learn to see the law as serving someone else’s interests, and officials as enemies. Even if that’s not their fault, and even if better rule can eventually change these habits, a society with them cannot be governed with the same light touch as a society that sympathizes with the law and actively helps enforce it.

4) Elections can fail morally, not just technically.
Representative institutions can become useless—or even a tool of oppression and manipulation—when most voters aren’t engaged enough to vote, or when they vote for private reasons rather than public ones. If people sell votes for money, or vote on command because someone controls them, or vote to curry favor, then elections don’t prevent misrule. They just become one more moving part in the misrule’s machinery.

5) Sometimes the barrier is mechanical: communication and administration.
Even where people have the right attitudes, the physical and logistical conditions can block certain systems.

In the ancient world, popular government couldn’t really function beyond a single city because there was no way to form and spread public opinion across distances—except among people who could gather in one place, such as an agora, to debate. The representative system is supposed to solve that, but it took the rise of the press—and especially newspapers—to play something like the role that the Pnyx and the Forum played for mass political discussion.

Likewise, there were times when even monarchy couldn’t hold together a large territory. Authority would fracture into small principalities or loose feudal arrangements because rulers lacked the administrative “reach” to enforce orders far away. They depended on personal loyalty even for their armies, and they couldn’t reliably raise enough taxes to sustain the force needed to compel obedience across a wide realm.

And in all these cases, the obstacle can come in degrees: it might make a system work badly without making it impossible, and the system might still be better than any available alternative. But deciding that requires a question we haven’t reached yet: how different systems tend to promote Progress.

What the “institutions grow” school gets right—and where it goes too far

At this point we can restate the core lesson: those three conditions—acceptance, maintenance, and operation—are the real foundation for whether a government can take root and last.

If the so‑called naturalistic theorists mean only this—that no government lasts unless the people accept it, can keep it standing, and can at least to a significant extent make it work—then the point is undeniable.

But anything beyond that starts to wobble. Much of what people say about needing an “historical basis,” or perfect “harmony” with national character and custom, often collapses into either:

  • these same three conditions said in fancier language, or
  • sentimental slogans that don’t actually guide action.

Practically speaking, the “fit” between an institution and a people’s habits and character is valuable mainly because it makes the three conditions easier to meet. If a system aligns with existing opinions and routines, people accept it faster, learn it sooner, and cooperate more readily in both preserving it and making it deliver its best results. A wise reformer will absolutely use these advantages when they exist.

But it’s a mistake to treat these conveniences as iron necessities. People do learn new practices. Familiarity helps—but familiarity can also be created. Repetition can make a once-strange idea feel normal. Entire nations have sometimes wanted bold, untested changes.

A society’s ability to adapt—its capacity to learn new political habits—is itself part of what must be assessed. Different nations, and different stages of civilization, vary enormously on that score. You can’t settle the question with a universal rule. You need knowledge of the particular people, plus practical judgment.

And there’s one more crucial point: sometimes people aren’t ready for good institutions, but building a desire for them is part of how they become ready. Advocating a form of government, explaining its benefits, and making the case for it as persuasively as possible can be one of the main ways—sometimes the only available way—to educate a population not just to accept or demand an institution, but to operate it.

That, for example, is what Italian patriots in recent generations had to do if they wanted to prepare Italy for a unified freedom: they had to stir people to want it first.

Those who take on this kind of political reform need to keep two things in view at the same time: not just the advantages of the constitution they recommend, but the human capacities required to make it actually work—moral habits, education, and the willingness and ability to act. Otherwise, they risk lighting a fire of expectation that the society doesn’t yet have the tools to control.

So here’s the takeaway: within the boundaries set by the three conditions discussed earlier, institutions and forms of government really are, in a meaningful sense, a matter of choice. Asking what the best form of government is “in the abstract” isn’t a fantasy. It’s one of the most practical jobs a serious mind can take on. And trying to introduce—into any particular country—the best institutions that its current state can support (even imperfectly) is one of the most rational goals practical people can pursue.

Yes, there are limits. But that’s not a special embarrassment for politics. Everything anyone says to mock the power of human purpose in government could be said about human purpose anywhere else.

In every field, human power hits hard walls. We can’t act by magic. We can only act by harnessing forces that already exist in nature and society—and those forces will behave according to their own laws. We can’t make a river run backward. But we don’t conclude from that that watermills “aren’t made, they just grow.” The right lesson is simpler:

  • A design can be brilliant and still fail if it has no power source.
  • In mechanics, the engine needs fuel or a current or falling water.
  • In politics, an institution needs real social forces—motives, loyalties, habits, resources—that can keep it running.

If those forces aren’t available, or aren’t strong enough to push through the predictable obstacles, the institution will stall. That’s not a flaw unique to government. It’s just the same condition that governs every art and craft: you can’t build a working machine without something to drive it.


A common objection: “Fine, but the big political forces aren’t steerable.”
The claim goes like this: the government of a country is basically settled in advance by how social power is distributed. Whatever group is strongest in society will end up holding governing authority. And any constitutional change won’t last unless the underlying distribution of power changes first (or at least changes alongside it). So, on this view, a nation can’t really choose its government. It can fiddle with the details—procedures, offices, organization—but the core reality, the location of ultimate power, is fixed by social circumstances.

There’s some truth here. But to make it useful, we have to say it precisely and keep it within limits.

Start with the central phrase: “the strongest power in society.” What counts as power? It can’t mean brute muscle. If it did, then pure democracy would always win, because the majority usually has more physical force than the minority. Add two more factors—property and intelligence—and we’re closer to reality, but still not there.

Because history shows something counterintuitive over and over:

  • A larger group can be dominated by a smaller one.
  • Even when the larger group has more property overall, and even when many of its members are individually just as intelligent—or more intelligent—it can still be kept down.
  • Sometimes the minority controls through force; sometimes through law, habit, fear, coordination, or the ability to act as one.

The missing piece is organization. Raw resources don’t automatically become political influence. To matter politically, the elements of power have to be coordinated—turned into disciplined action. And the advantage in organization usually belongs to the group that already holds the machinery of government. Once the state’s tools—offices, patronage, police, courts, revenue, military command—are placed on one side of the scale, a party that is weaker in almost every other respect can still outweigh its rivals.

This kind of dominance can last a long time on that basis alone. But it often comes with a warning label: it resembles what engineers call unstable equilibrium—like balancing an object on its narrow end. It can remain upright for a while, but once it’s knocked, it doesn’t naturally right itself; it tends to fall farther and faster away from its previous position.


An even deeper problem with the “social forces determine everything” slogan: it usually counts the wrong things.

The kind of social power that turns into political power isn’t idle power—wealth that never moves, numbers that never organize, intelligence that never speaks. It’s active power: power that gets used. And only a small portion of the power that exists in society is ever actually brought into action.

Here’s the key point: a huge part of political power is will.

So how can anyone claim to “compute” the elements of political power while leaving out everything that shapes people’s will? If you argue that it’s pointless to influence government by changing opinion—because “those with power will end up ruling anyway”—you’ve forgotten something obvious and enormous: opinion itself is one of the greatest active social forces.

A single person with genuine belief can be as potent, socially speaking, as ninety-nine people who have only self-interest. If you can create a widespread conviction that one form of government—or any major social arrangement—deserves to be preferred, you’ve already taken an almost decisive step toward lining up the rest of society’s energies on its side.

History makes this plain. When the first Christian martyr was stoned in Jerusalem, and the man who would later become the great missionary to non-Jews stood by approving, who would have guessed that the followers of the executed man were, in any meaningful sense, “the strongest power” in society? And yet the outcome shows they were—because their belief was the strongest active force in the long run.

The same dynamic made a lone monk from Wittenberg, facing the Diet of Worms, a greater social force than Emperor Charles V and the assembled princes. You might say: “Sure, but religion is special. Religious conviction has unusual strength.” Fine—then look at politics where religion was, if anything, mostly on the losing side.

Consider the era when Europe was crowded with “enlightened” rulers and reformers—liberal kings, reforming emperors, and even, improbably, liberal popes. Think of the age of figures like Frederick the Great, Catherine II, Joseph II, Peter Leopold, Benedict XIV, Ganganelli, Pombal, and Aranda; when even the Bourbons of Naples styled themselves as reformers, and the energetic minds of the French nobility were filled with ideas that would soon cost them dearly. That period is hard to explain if you imagine that social power is nothing but physical force and economic interest. What changed wasn’t the basic distribution of material interests. What spread was a new climate of thought—a set of moral and political convictions that reshaped what elites believed was legitimate and necessary.

The same pattern shows up in major reforms closer to the ground. The end of Black slavery in the British Empire and elsewhere wasn’t driven primarily by some sudden reshuffling of material interests; it was driven by the spread of moral conviction. The emancipation of Russian serfs, even if it wasn’t rooted in pure duty, reflected a growing belief—among influential minds—that the policy better served the state’s true interests.

In other words: what people think determines how they act.

Yes, the average person’s opinions are often shaped more by their personal situation than by careful reasoning. But those opinions are still influenced—sometimes powerfully—by people in different situations, and by the combined authority of the educated and informed. When educated opinion broadly comes to label one arrangement as good and another as bad—one as desirable and another as condemnable—something major has happened. Social force begins to flow toward the former and away from the latter, and that shift can be enough to let an institution endure or to make it crumble.

So the maxim “a country’s government is whatever existing social forces compel it to be” is true only in a sense that should encourage, not discourage, deliberate choice. It means this: among the forms of government that are genuinely practicable in a society’s current condition, rational selection matters—and can change outcomes.

II — The Criterion Of A Good Form Of Government

When a society really does have options about what kind of government it can adopt (within some real-world limits), the next question is straightforward to ask and surprisingly hard to answer: by what standard should we choose? What marks out the kind of political system that best serves a particular society’s interests?

You might think we should start by defining “the proper functions of government.” After all, government is a tool, and you judge tools by whether they fit the job. But that framing doesn’t get us as far as it seems, and it actually leaves part of the problem out.

For one thing, what government ought to do isn’t fixed. It changes with the condition of society. In a less developed or less stable society, government may need to do far more; in a more advanced society, it can often do less because other institutions and habits carry more of the load.

And there’s a second, even more important point: you can’t evaluate a government just by checking whether it stays inside its “legitimate” lane. A government’s goodness is limited by what it can rightly do—but its badness isn’t. Any kind of harm humans can suffer, a government can inflict. And any good that social life could produce—security, prosperity, knowledge, character, cooperation—can only be achieved to the extent that the political system makes room for it rather than smothering it. Even if you ignore the indirect ripple effects, the direct reach of public authority has no natural boundary except human life itself. So if we want to judge government honestly, we have to judge it against something as broad as the whole range of human interests.

That puts us in a bind. “The aggregate interests of society” is a complicated target. Naturally, we’d love a clean checklist: break social well-being into a few definite categories, figure out what conditions each category requires, and then say, “The best government is the one that combines those conditions most completely.” In that ideal world, political theory would be built up the way a math proof is built: one clear result after another, each tied to a distinct element of a good society.

But making a usable inventory of social well-being—one detailed enough to yield tidy “theorems”—isn’t easy. Modern political thinkers have felt this need, but as far as I know the progress so far mostly stops at one simple move: dividing society’s needs into two buckets, Order and Progress (or, in another vocabulary, Permanence and Progression). It’s tempting because it looks like a crisp opposition and it speaks to two very different emotional instincts. Still, as a scientific way to describe what good government requires, I think this contrast is misleading.

Start with the easy half: “Progress” seems to mean improvement. That’s at least a recognizable idea. But what does “Order” mean? Depending on who’s talking, it can mean a little or a lot—but almost never “everything society needs besides improvement.”

In its narrowest sense, Order = obedience. A government “keeps order” if people do what it says. But obedience comes in degrees, and not every degree is admirable. Only outright despotism demands that every person obey every command without question. At minimum, we have to mean obedience to general rules laid down as law, not to arbitrary personal orders. Understood this way, order—obedience to law—is certainly necessary. If a government can’t get its laws followed, it isn’t really governing.

But notice what that shows: obedience is a condition of government, not its goal. Getting people to obey is valuable because it enables government to do something else. So we still need to ask: what is that “something else” that every government ought to accomplish, whether society is changing quickly or hardly changing at all?

Widen the definition and Order becomes peaceful social life—the end of private violence. Order exists when people, as a general rule, stop settling disputes by force and instead bring their conflicts and injuries to public authorities for resolution. That’s a genuine step up from mere obedience. But even here, “order” still names a prerequisite more than a purpose. A population might reliably submit to government and bring everything to court or the administration—and yet the way the government decides those disputes, and the way it uses its power in general, can still range from excellent to atrocious.

So if we want Order to cover everything society expects from government other than Progress, we’re pushed toward the broadest meaning: Order = preserving the good we already have, while Progress = increasing it. Between them, that does seem to cover the whole field.

The trouble is that this broad contrast still doesn’t help us design or judge institutions, because the two sets of requirements aren’t opposites. They’re largely the same. The forces that maintain existing social goods are the very forces that expand them—only in different degrees. Progress typically demands more intensity, more consistency, more reach, but not a fundamentally different kind of cause.

Take the everyday moral and practical virtues that keep a society functioning well. What personal qualities help maintain a community’s existing stock of good behavior, good management, prosperity, and general success?

  • Industry
  • Integrity
  • Justice
  • Prudence

But those are also exactly the qualities that drive improvement. In fact, any widespread growth in these virtues is itself among the greatest improvements a society can make. So whatever features of government encourage industry, integrity, justice, and prudence support both “Permanence” and “Progress”—the difference is mainly how much encouragement is needed to shift from merely holding steady to clearly advancing.

Or consider traits that seem, at first glance, more specifically tied to progress: mental activity, enterprise, and courage. Aren’t those the engines of change? Yes—but they’re also essential to keeping what you already have. If there’s one reliable lesson in human affairs, it’s that you keep valuable gains only by sustaining the same energies that produced them. Leave things unattended and they decay. People who relax their habits of care, thoughtfulness, and willingness to face discomfort after they’ve “made it” rarely keep their good fortune at its peak for long.

The quality that looks most exclusively “progressive” is originality—invention, creative problem-solving. Yet even that is no less necessary for preservation. The world doesn’t stand still. New dangers and inconveniences constantly appear, and if you want life to continue going even as well as before, you need new tools, new plans, new adaptations. So whatever in government encourages activity, energy, courage, and originality is required for stability as well as improvement—again, typically in a lower dose for the former than for the latter.

Move from inner qualities to outward institutions and policies, and you find the same pattern. It’s hard to point to any political arrangement that supports Order only, or Progress only. What genuinely strengthens either one strengthens both.

Consider something as ordinary as policing. It sounds like the most “order”-focused institution imaginable. But if effective policing actually reduces crime and makes people feel their lives and property are secure, what could be more helpful to progress?

  • Security of property encourages investment and production—the most familiar kind of material progress.
  • Reduced crime doesn’t just prevent harm; it gradually weakens the habits and dispositions that lead to crime, which is moral progress of a higher kind.
  • Less fear and anxiety frees people’s time and attention for creative work and improvement.
  • Greater trust and social attachment makes people less likely to see others as enemies and more likely to develop kindness, cooperation, and interest in the common good—core ingredients of social advancement.

Or take taxation and public finance, which many people file under “orderly administration.” But the same features that make a financial system good for stability make it good for improvement.

  • Economy preserves existing wealth and helps create more.
  • Fair burden-sharing publicly models moral seriousness and trains the community’s conscience—strengthening moral sentiment and sharpening moral judgment.
  • Tax collection that doesn’t choke industry or unnecessarily restrict liberty supports growing wealth and encourages people to use their abilities more actively.

And the reverse holds too: financial mistakes that merely hinder moral and economic improvement will, if serious enough, actively destroy existing wealth and character as well.

So, in general, if you use “Order” and “Permanence” in the widest possible sense—stable possession of present advantages—then the requirements of Progress are mostly just the requirements of Order in a stronger form, and the requirements of Permanence are mostly the requirements of Progress in a weaker form.

At this point someone might object: “But progress in one area can come at the cost of order in another. We can grow richer while becoming less virtuous.” Fair enough—but what does that really show? Not that Progress is a fundamentally different thing from Permanence, but that wealth is different from virtue. Progress means keeping what you have in the same dimension and adding more. The fact that progress in one respect doesn’t guarantee stability in every respect is no more surprising than the fact that progress in one respect doesn’t guarantee progress in every respect.

And whenever “Permanence” is sacrificed to a particular kind of “Progress,” you’re sacrificing other progress as well—because you’re losing good somewhere. If that trade wasn’t worth making, then it wasn’t only the interest of stability that was ignored; the true interest of progress was misunderstood.

If we insist on using these contrasted terms at all, a more accurate approach would be to drop “Order” and say: the best government is the one that most promotes Progress. That’s because, properly understood, progress already contains order. Order does not contain progress. Progress is simply a greater degree of what order, at best, expresses in a lesser degree. And “order,” in any other meaning, names only a piece of what good government requires, not its essence.

In fact, order belongs more naturally among the conditions of progress. If you want to increase your stock of good, one of your first rules is to protect what you already possess. If you aim to become richer, you must begin by not wasting your current resources. Seen this way, order isn’t a competing goal that must be “balanced against” progress; it’s one of the main ways progress is achieved. If you gain in one place by suffering a greater loss in the same or another place, that isn’t progress. So “conducive to progress,” in this broad and careful sense, would cover the whole excellence of government.

Still, even if that definition is philosophically defensible, it’s not a very fitting one, because it gestures at only part of what we mean. The word progress suggests forward motion. But what we need from government is just as much not sliding backward. The same social forces—beliefs, feelings, institutions, and habits—are required to prevent decline as to produce advance. Even if no improvement were possible, life would still be an unending struggle against deterioration; in many ways, it already is.

In fact, this is how the ancients largely thought about politics. They believed people and their institutions naturally tend to degenerate, and that only good institutions, well administered, could hold that decay at bay for a long time. We may not fully share that outlook today. Many modern people believe the overall drift of history is toward improvement. But we shouldn’t forget that there is always a steady current pulling the other way—made up of human folly, vice, neglect, laziness, and complacency. That current is held back only by the efforts some people make continuously, and others make in bursts, toward worthwhile ends.

It badly understates the value of those efforts to imagine they matter mainly because of how much visible “improvement” they produce—and that if they stopped, we’d simply stay where we are. Even a small drop in that exertion would not merely halt improvement; it would reverse the direction. Once decline begins, it tends to accelerate and becomes harder and harder to arrest, until society reaches a condition familiar from history—and still visible in many places—where people sink so low that it can seem as if only something close to superhuman power could turn the tide and start the climb again.

For reasons like these, Progress is no better than Order or Permanence as the foundation for a clean, scientific classification of what a good form of government requires.

The sharp opposition those phrases point to isn’t really about the issues themselves. It’s about the kinds of people who respond to them.

Some people are wired for caution: they feel the risk of losing what they already have more strongly than the promise of gaining something better. Others lean toward boldness: they’re more excited by future improvements than protective of today’s comforts. Both types want a good society, and the path to that goal is basically the same—but they tend to drift off it in opposite directions.

That matters a lot when you’re building any political body. You want both temperaments in the room, so each checks the other when it starts to go too far. Usually you don’t need special rules to force that balance. A natural mix of:

  • older and younger people
  • those with established reputations and those still building them

does the job—so long as you don’t wreck it with clever but clumsy “balancing” schemes.


What Good Government Mostly Depends On

The common way people classify “what society needs” doesn’t actually help much here, so we need a better organizing idea.

Start with the big question: what does good government—at every level, from basic competence to true excellence—really depend on?

More than anything else, it depends on the qualities of the people who make up the society being governed.

Take justice, for example. Courts are one area where procedures and institutional design matter enormously—rules, formats, checks, the whole apparatus. And yet, even here, the system matters less than the human beings working inside it.

Ask yourself:

  • What good are perfect legal procedures if witnesses routinely lie, and judges (and court staff) take bribes?
  • How can you get competent local government if decent people won’t serve—because they don’t care, or they see no reason to—and the jobs get filled by people chasing private advantage?
  • What’s the point of the most broadly democratic voting system if voters don’t try to elect the best candidate, but pick whoever spends the most money to win?
  • How can a legislature do good work if its members can be bought—or if they’re so undisciplined that they can’t deliberate calmly, breaking instead into fistfights on the floor, or even shooting at each other?
  • How can any joint undertaking run well when envy is so intense that the moment someone looks likely to succeed, others quietly coordinate to make sure they fail?

Whenever people are generally disposed to focus only on their selfish interests, and don’t think about (or care about) their share in the common interest, good government becomes impossible.

And the damage caused by lack of intelligence hardly needs an example. Government is just human beings doing work—making choices, appointing officials, supervising them, criticizing them, and correcting them. If the officials, the voters, the watchdogs, and the wider public bring mostly ignorance mixed with stubborn prejudice to that work, then nearly every act of government will go wrong. As people rise above that—becoming more informed, more thoughtful, more fair-minded—government improves, up toward an ideal that’s imaginable but nowhere fully achieved: public officials of real ability and integrity, surrounded and constrained by a virtuous, enlightened public opinion.


The First Test: Does Government Improve the People?

If the first ingredient of good government is the virtue and intelligence of the community, then the most valuable trait any political system can have is the ability to raise the virtue and intelligence of the people themselves.

So the first question to ask of any political institutions is this: Do they cultivate good qualities in citizens? Not just morally and intellectually, but also in the ability to act effectively—to do things in the world, not merely think or feel.

A government that best develops these qualities has the strongest claim to be best overall, because everything else government tries to do depends on these qualities being present among the public.

That gives us one clear criterion for judging a government:

  • How much does it increase the sum of good qualities in the people—both as a group and as individuals?

After all, the people’s well-being is the whole point of government, and their character is the engine that makes any governmental “machine” run.


The Second Test: Does the Machinery Use What’s Good in Society?

But there’s a second ingredient: the quality of the machinery—the institutional design itself. Specifically:

  • How well is the system built to use whatever virtue and intelligence exists, and direct it toward good public outcomes?

Go back to courts. Suppose you hold the judicial system fixed. The quality of justice you get depends on two things working together:

  • the quality of the judges
  • the quality of the public opinion that shapes, restrains, or supports them

So what separates a good judicial system from a bad one? It’s the set of practical devices that bring the society’s moral and intellectual resources to bear on what courts do.

That includes:

  • how judges are chosen so you get the highest average integrity and competence
  • procedures that help courts reach correct outcomes
  • publicity that allows observation and criticism when something is wrong
  • free discussion and criticism through the press
  • rules of evidence well designed to draw out the truth
  • how easy it is for people to access the courts
  • the arrangements for detecting crimes and catching offenders

These are not the “power” itself. They’re the machinery that brings power into contact with the problem. Machinery can’t move by itself—but without it, even abundant moral energy and intelligence in the community will be wasted.

The same basic distinction applies to the executive departments—the everyday administration of government. Administrative machinery is good when it sets up, for example:

  • clear tests for who is qualified for office
  • sensible rules for promotion
  • a convenient division of labor
  • an orderly method for conducting business
  • accurate, understandable records
  • clear responsibility, so each person knows what they answer for and others know who to hold accountable
  • well-designed checks against negligence, favoritism, and corrupt “deals”

But checks don’t enforce themselves any more than a bridle steers a horse without a rider. If the overseers are as corrupt or lazy as the people they’re meant to oversee—and if the public, the mainspring of the whole checking system, is too ignorant, passive, or inattentive—then even the best-designed apparatus will deliver little.

Still, good machinery is always better than bad machinery. It lets whatever limited “driving” or “checking” force exists work at maximum advantage; without it, even a great deal of force may fail to accomplish anything. Publicity, for instance, won’t stop wrongdoing or encourage good behavior if nobody pays attention—but without publicity, how could the public judge what it isn’t allowed to see?

The ideal design for any public office is one where the official’s self-interest fully aligns with duty. No set of rules can magically guarantee that—but achieving it is far less likely without a system thoughtfully built to push incentives in that direction.


Two Ways to Judge a Government

What’s true of detailed administration is even more obviously true of a government’s overall constitution. Any government that aims to be good is, at bottom, an organization that tries to gather and coordinate some part of the community’s better qualities to manage shared affairs.

A representative system, in particular, is a way to bring:

  • the community’s general level of intelligence and honesty, and
  • the exceptional intellect and virtue of its wisest members

to bear more directly on government than most other arrangements do. Under any system, whatever influence good people and sound judgment have is the source of all that is good in government—and the barrier against much of what would otherwise be evil. The more successfully institutions organize these good qualities, and the better the method of organizing them, the better the government will be.

So we end up with a useful two-part standard for judging political institutions. Their merit consists partly in:

  1. how far they promote the community’s overall development—growth in intellect, virtue, and practical effectiveness; and
  2. how well they organize the moral, intellectual, and active worth already present, so it can act with maximum effect on public affairs.

In plain terms, judge a government by two kinds of impact:

  • its impact on people: what it makes citizens become
  • its impact on things: what it accomplishes for them, and through them

Government is both a major force shaping the human mind and a set of organized arrangements for handling public business. In the first role, its benefits are often indirect—but no less essential. Its harms, on the other hand, can be painfully direct.


How These Two Roles Connect (Without Collapsing Into One)

These two roles aren’t just “more or less of the same thing.” They’re different in kind. Still, they’re closely connected.

Institutions that manage public affairs as well as possible, given the society’s current level of development, tend—by that very success—to help raise that level further. A people with the best laws it can use, a pure and efficient judiciary, an enlightened administration, and a fair, not overly burdensome financial system (all appropriate to its current moral and intellectual state) is well positioned to move quickly to a higher stage.

In fact, there may be no more effective way for institutions to improve a people than by simply doing their direct job well. And the reverse is also true: when governmental machinery is badly built and public business is badly done, the damage spreads everywhere—lowering moral standards, dulling intelligence, and draining energy and initiative.

But the distinction still matters, because competent administration is only one pathway by which institutions shape the mind and character of a people. The broader ways governments educate—or deform—the public remain a separate and much larger subject.


What Changes Across Societies, and What Doesn’t

There are two broad ways a government affects a society’s welfare:

  • as a kind of national education (shaping character and capacity), and
  • as a set of arrangements for running collective affairs at the level of education the people already have

The second—the day-to-day conduct of public business—varies less across countries and stages of civilization than the first. It also depends less on a government’s fundamental constitution.

Many of the best practices for practical administration under a free constitution would also be best under an absolute monarchy. The difference is that an absolute monarchy is simply less likely to follow them.

Consider areas like:

  • property law
  • rules of evidence and court procedure
  • taxation and financial administration

These don’t have to differ radically just because the government’s overall form differs. Each of these topics has its own principles and deserves its own study. General jurisprudence, civil and criminal lawmaking, and financial and commercial policy are each sciences in themselves—or, more accurately, branches of the larger art of government.

The most enlightened doctrines in these fields won’t be equally likely to be understood or applied under every political form. But when they are understood and applied, they tend to be beneficial under almost any form.

Of course, you can’t apply them unchanged everywhere. Different societies and different states of mind require adjustments. Still, for a society advanced enough to have rulers capable of understanding these doctrines, the required changes are mostly matters of detail, not fundamental reversals.

A government for which these principles would be wholly unsuitable would have to be one so bad in its structure, or so hated by public feeling, that it couldn’t even keep itself alive without dishonest methods.


Education Is Different: Institutions Must Fit the Stage

Everything changes when we shift to the part of public interest that concerns the training of the people themselves—their character, habits, and capacities.

For that purpose, institutions often need to be radically different, depending on the stage of development already reached. Recognizing this—often by experience more than philosophy—is one of the main ways modern political thinking improves on the thinking of the previous age. Earlier theorists would argue for representative democracy in England or France with reasons that, if taken seriously, would “prove” it was equally suitable for Bedouins or Malays.

But human communities span an enormous range of development. At the lower end, some live in conditions only slightly above those of the highest animals. The upper range is also wide, and what humans might become in the future is wider still.

A community can move from a lower stage to a higher one only through many influences working together—and one of the most powerful influences is the government it lives under. In every stage of human improvement we’ve yet seen, the nature and degree of authority over individuals, the distribution of power, and the conditions of command and obedience have been among the strongest forces—second only to religious belief—in shaping what people are, and what they can become.

A people’s progress can be stalled at any point if its government isn’t adapted to the stage it’s in. And there is one indispensable merit that can excuse almost any other flaw, so long as progress remains possible: the government’s effect on the people must be favorable—or at least not hostile—to the next step they need to take to rise to a higher level.

So, for example, imagine a people living in what you might call “savage independence”—each person for himself, mostly free from external control except in occasional bursts. Such a people can’t make real progress in civilization until it learns a first, basic lesson: obedience.

That means the essential virtue of any government that establishes itself over such a people is simple: it must be able to make itself obeyed. And to do that, its constitution must be nearly—or entirely—despotic. A government that is even partly popular, relying on voluntary restraint by individuals, would fail to enforce the first lesson required at that stage.

Accordingly, when a society like this becomes “civilized” without simply copying a neighboring, already-civilized culture, it’s almost always because a single powerful ruler drags the community forward—using authority grounded in religion, military success, and often outright foreign conquest.

Here’s the uncomfortable part: many “uncivilized” peoples—especially the bravest, most energetic ones—tend to resist steady, repetitive work that doesn’t feel exciting. But real civilization has a price tag, and that price is continuous, disciplined labor. Without it:

  • people don’t develop the habits of self-control and long-term planning that civilized life depends on, and
  • the material world—fields, roads, tools, storage, trade—doesn’t get built into something civilization can actually run on.

To get a whole population to accept that kind of work takes an unusually lucky combination of circumstances, and usually a very long time—unless they’re compelled for a while. That’s why, in the earliest stages of social development, even personal slavery could (in the author’s view) sometimes speed the transition away from a “freedom” defined mainly by fighting and raiding. By forcing large numbers of people into regular productive work, slavery could create the first rough draft of an industrial life.

But that “defense” of slavery applies only—if it applies at all—to the most primitive stage of society. Once you’re dealing with a civilized people, there are other ways to spread civilization to those under their influence. And slavery clashes so violently with government by law—the bedrock of modern life—and corrodes the ruling class so thoroughly once they’ve been exposed to civilized standards, that bringing slavery into any modern society is not just immoral but a collapse into something worse than barbarism.

Still, historically, almost every people we now call civilized once lived in a world where most of the population were slaves. And a society built on slavery needs a different kind of government than a society of independent “savages.”

In some unusual cases, improvement might be as simple as emancipation. If enslaved people are naturally energetic, and if the society also contains a hardworking middle group that is neither slave nor slave-owner (as in parts of ancient Greece), then simply making the enslaved free might be enough. Once freed, they may quickly become capable of full citizenship—like many Roman freedmen. But when slavery is in its “normal” form, that isn’t what you see. In fact, when emancipation works that smoothly, it usually means slavery is already dying out.

In the harshest, most typical form, a “proper” slave is someone who hasn’t learned how to act for himself. He’s a step above a savage in one crucial respect: he has learned to obey. But what he has learned is obedience to a person’s command, not obedience to a rule.

That’s the key distinction. People shaped by slavery, the author claims, often can’t reliably guide their conduct by general principles—by law. They can follow orders, and they can do it fast, but only under direct supervision. If someone they fear is standing over them, they work; if that person turns away, the work stops. The motives that move them aren’t long-term interests or judgment, but reflexes—immediate hope of reward or immediate terror of punishment.

So while a despotism might “tame” a savage, despotism—precisely because it is despotism—tends to freeze enslaved people into the very incapacity that slavery produced. At the same time, handing them self-rule too soon wouldn’t work either; a government fully under their control would be, in the author’s view, unmanageable at that stage.

Their improvement, then, can’t start from within. It has to be imposed from outside. The decisive step—the only path forward—is moving them from a government of will (the ruler’s personal choice, command by command) to a government of law (stable rules that apply generally). What they need at first is not raw force, but guidance: training in self-government, which initially means learning to act on general instructions instead of waiting for a shouted order.

But because they’re not yet able to follow guidance from anyone except those they associate with power, the government best suited to them is one that possesses force but rarely uses it. Think of a parental despotism or a guiding aristocracy—something like certain technocratic-utopian visions: it supervises society broadly, keeps the sense that real power stands behind the rules, and can compel obedience when necessary. Yet it can’t micromanage every detail of work and life, and that limitation is actually a feature: it forces people to do many things on their own, building the habits they lack.

This kind of “training wheels” government—what the author calls government by leading-strings—may move such a people fastest through the next stages of social development. He points to the Incas of Peru and the Jesuits’ communities in Paraguay as examples of this general idea. But the warning is obvious: training wheels are only justified if they’re used to teach people to ride, not to keep them permanently dependent.

At this point, the author steps back. Fully mapping which form of government fits every known social condition would require writing a general treatise on political science, not a discussion aimed at representative government. For the narrower purpose here, we only need a few broad principles.

To decide what kind of government suits a particular people, you have to look at that people’s defects and limitations and identify the ones that most directly block progress—the specific thing that “stops the road.” The best government for them is the one that most effectively supplies what they’re missing, the thing without which they can’t advance at all—or can advance only in a crippled, lopsided way.

But there’s an essential caveat whenever your goal is improvement: in trying to add what’s missing, you must avoid damaging what already exists, or at least minimize the damage. You might need to teach a “savage” population obedience, but you must not teach it in a way that turns them into slaves. More generally, a government can be very effective at pushing a people through the next stage, yet still be a terrible choice if it does so in a way that blocks the stage after that—if it trains them into permanent dependence, conformity, or fear.

History is full of these tragedies. The Egyptian priesthood and the paternal despotism of China were, in the author’s view, well-designed tools for bringing those societies up to the level of civilization they reached. But once there, they stalled. Why? Because the next step required mental liberty and individuality—and the very institutions that had made earlier progress possible also made acquiring those qualities nearly impossible. And since those institutions didn’t collapse or give way to new ones, further improvement stopped.

To see the opposite pattern, the author points to a smaller and often underestimated Eastern people: the Jews. They too had an absolute monarchy and a powerful religious hierarchy, and their institutions were as clearly priest-made as those of India. Like other Eastern systems, these institutions disciplined the people into industry and order and gave them a durable national life.

But there was a crucial difference: neither kings nor priests ever gained exclusive control over shaping the nation’s character. Their religion made room for inspired individuals—people of strong moral seriousness and genius—to be recognized as speaking with divine authority. That produced an immensely valuable, informal institution: the Prophets.

Protected (though not always effectively) by their sacred status, the Prophets became a national power—often strong enough to confront kings and priests—and kept alive an internal clash of forces. And for the author, that kind of ongoing tension is the only real safeguard of continuous progress. In many places, religion becomes the stamp of approval on whatever already exists: it sanctifies the status quo and blocks reform. In this case, religion also created an engine of critique.

One observer described the Prophets as, in both church and state, the equivalent of the modern free press. That comparison is fair, the author says, but still doesn’t capture the full scale of what they did. Because “inspiration” was never treated as a closed, completed canon, the most gifted and morally intense individuals could publicly condemn what they believed deserved condemnation—claiming divine authority—and could also offer higher, better interpretations of the national religion that then became part of the religion itself.

That’s why, if you stop reading the Bible as though it were one single uniform book, you can’t miss the enormous gap between the morality and religious outlook of the Pentateuch (and even the historical books, which the author attributes to conservative priestly writers) and the morality and religion of the prophetic writings. The distance is as great, he says, as the distance between the Prophets and the Gospels. With conditions like that—critique built into the culture—progress had unusually fertile ground. And accordingly, the Jews, rather than remaining stationary like many other Asian societies, were—next to the Greeks—the most progressive people of antiquity. Together with the Greeks, they became the starting point and main driving force behind what we now call modern culture.

All of this leads to a bigger conclusion: you can’t understand how forms of government fit different stages of society if you focus only on the very next step. You have to consider the whole sequence of steps a society still has ahead of it—both what we can predict and the much larger unknown future.

So, if we want to judge governments properly, we have to build an ideal model: the form of government that is best in itself—meaning, the one that would do more than any other to encourage and sustain progress in every direction, at every level, if the right conditions existed for it to work.

Once we have that ideal in view, we can ask two practical questions:

  • What mental and moral capacities does a people need for that government to deliver its benefits?
  • What specific deficiencies make a people unable to profit from it?

From there, we could work out when it makes sense to introduce the best form; and, for cases where it doesn’t yet, which “lesser” forms are most likely to carry a community through the intermediate stages until it becomes ready for the best.

The second problem—choosing the best stopgap forms—isn’t the focus here. But the first problem is central. And the author says we can already state, without overconfidence, a guiding thesis that will be defended in the pages ahead: the ideally best form of government will turn out to be some version of the representative system.

III — That The Ideally Best Form Of Government Is Representative Government

For a long time—maybe for as long as Britain has thought of itself as free—people have repeated a tempting line: If only you could guarantee a good despot, then despotism would be the best government. I think that idea gets the whole point of “good government” backward. And until we shake it, it poisons almost everything we try to reason out about politics.

Here’s what the slogan is really assuming. Give absolute power to one exceptional person and, like magic:

  • good laws will be made and enforced
  • bad laws will be repaired
  • the most capable people will be placed in positions of trust
  • courts will be fair
  • taxes will be light and sensibly arranged
  • every department will be run cleanly and intelligently, as well as the nation’s circumstances allow

Fine. For the sake of argument, let’s grant all of that. But notice what we just smuggled in. “A good despot” isn’t merely a morally decent ruler. To pull off this fantasy, he’d have to be something close to all-seeing.

He would need accurate, detailed information about how every branch of government is functioning in every district of the country. And he’d need to supervise this sprawling system in real time—using the same twenty-four hours a day available to a king as to the poorest laborer. Or, if he can’t personally watch everything, he must at least have the rare ability to do two extraordinary things at once:

  • find a large supply of honest, competent people to run every part of the administration under his eye
  • identify the much smaller set of truly exceptional people who can be trusted to operate with little supervision and to supervise others well

That’s an almost impossible workload, requiring such unusual energy and judgment that you can barely imagine anyone volunteering for it—except as an emergency measure, a stopgap to escape some unbearable crisis, and a bridge to something better.

But we don’t even need to fight over that huge assumption. Let’s pretend the difficulty is solved. What do we end up with?

One person, working at superhuman mental speed, running the entire life of a people who are mentally passive.

That passivity isn’t an accident; it’s built into the very idea of absolute power. Under despotism, the nation—and every individual in it—has no real say over its own fate. People don’t exercise a will about their shared interests. Decisions come from a will that isn’t theirs, and disobedience isn’t just risky—it’s a crime.

So ask the obvious question: What kind of human beings does that produce? What happens to people’s ability to think, to initiate, to act?

On abstract topics, maybe they’re allowed to speculate—as long as they stay far from politics, or at least far from anything that touches real political practice. On practical matters, at most they can “suggest.” And even under the mildest despot, only those with an already recognized reputation for superiority can expect their suggestions to be heard at all, much less taken seriously by the people who actually run things.

Most people don’t do hard thinking for fun. Someone needs an unusual love of pure mental exercise to do serious work in their head when it can’t possibly make a difference in the world—especially if they’re training themselves for roles they will never be permitted to hold. For almost everyone, the strongest incentive to sustained intellectual effort is the hope that it will be used.

This doesn’t mean a despotic nation will have no intelligence whatsoever. Ordinary life still forces each household to plan, choose, and solve problems, and that draws out practical ability—but only within a narrow band of concerns. You might also see:

  • a small class of scholars who pursue science for its practical payoff, or simply for the pleasure of discovery
  • a bureaucracy (and people preparing for it) trained in the “how-to” rules of administration—often as rote know-how rather than deep understanding
  • a deliberate effort to concentrate the country’s sharpest minds in one favored direction, usually the military, to increase the ruler’s power and prestige

But the public as a whole remains uninformed and uninterested about the big questions of collective life. Or, if they know something, it’s often the shallow, spectator kind of knowledge—like the way someone can talk about engineering without ever having built anything.

And the damage isn’t only intellectual. It’s moral, too.

When you artificially shrink what people are allowed to do, you also shrink what they are capable of feeling. Our emotional life grows through action. Even family affection is kept alive by freely chosen acts of care. Give someone nothing to do for their country, and they won’t care much about it. People have long said that under despotism there is, at most, only one patriot: the despot himself. Harsh as that sounds, it captures a real effect of being completely subjected—even to a wise and benevolent master.

“What about religion?” someone might ask. Surely that can lift people’s minds above the dust of daily life.

But even if religion escapes being twisted into a tool of despotism, it still tends—under these conditions—to stop being a shared public concern and collapse into a private transaction between an individual and God, where the main stake is personal salvation. And religion in that reduced form can sit comfortably alongside narrow selfishness; it doesn’t necessarily bind a person emotionally to the rest of humanity any more than raw sensuality does.

So what does good despotism really mean, in human terms? It means a government where—so far as the ruler can manage—officials don’t openly oppress. But it also means:

  • the people’s collective interests are managed for them
  • the thinking connected to those interests is done for them
  • people’s minds are shaped into accepting, even consenting to, this surrender of their own energies

Leaving everything to “the Government,” like leaving everything to “Providence,” becomes another way of saying you don’t really care about public affairs. If outcomes turn out badly, you treat them like natural disasters—unavoidable “acts of nature,” not the results of human choices you could challenge or change.

The predictable result is that, aside from a small group of scholars who love speculation for its own sake, a whole people’s intelligence and feeling get narrowed down to private, material concerns—and, once those are met, to private pleasures and decorations.

And if history teaches anything, it teaches this: that is the beginning of national decline—assuming the nation ever achieved anything high enough to decline from. If it never rose above what Mill’s contemporaries called the “Oriental” condition, it will simply stagnate there. But if it once reached something higher—think of Greece or Rome, lifted by civic energy, patriotism, and a broadness of mind that grow only in freedom—then it tends to slide back within a few generations into that stagnant state.

And “stagnant” doesn’t mean safe. It doesn’t mean calm security against getting worse. It often means being overrun, conquered, and reduced to household slavery—either by a stronger despot, or by neighboring “barbarous” peoples who, despite their roughness, still possess the energetic habits that freedom produces.

These aren’t just common tendencies of despotism. They are, in large part, its built-in necessities—unless the despotism agrees, in practice, not to be despotism. That is: unless the ruler chooses not to use his power, keeping it in reserve while allowing the everyday business of government to proceed as if the people were governing themselves.

We can imagine, however unlikely, a despot who voluntarily follows many of the rules of constitutional government. He might allow enough press freedom and open discussion for real public opinion to form and express itself about national affairs. He might let local matters be handled by local people without constant interference. He might even surround himself with councils freely chosen by the whole nation (or a substantial part of it), while still keeping in his own hands taxation, supreme lawmaking power, and ultimate executive authority.

If he did all that—if he “abdicated,” to that extent, as a despot—he would remove a large share of the distinctive evils of despotism. Political capacity wouldn’t be blocked from developing throughout the population. Public opinion would no longer be a mere echo of the government.

But notice what happens next. The moment public opinion becomes independent, it will sometimes support the ruler—and sometimes oppose him. Every government displeases many people. Once citizens have regular ways to organize and speak, criticism will be voiced, and often loudly. Now the crucial problem arrives:

What does the monarch do when the majority disagrees with him?

  • Does he change course and defer to the nation? Then he is no longer a despot. He becomes a constitutional king—essentially a permanent chief minister of the people.
  • Does he refuse? Then he must either crush opposition with force, or endure a lasting standoff between an entire people and a single man—a conflict that can end only one way.

No doctrine of passive obedience, no “divine right,” can hold back the long-run consequences of that situation. Sooner or later the monarch must yield and accept constitutional limits—or be replaced by someone who will.

And even in the meantime, a despotism that is only “despotic” in name has few of the supposed advantages of absolute monarchy, while it achieves only a partial version of the benefits of free government. Citizens might enjoy a great deal of day-to-day liberty, but they can never forget the core fact: their liberty is a permission, not a right. It is a concession that could be withdrawn. In law, they remain slaves—just slaves of a cautious or indulgent master.

Given all that, it’s not surprising that impatient reformers sometimes dream of a strong hand. When you’re trying to push urgent public improvements, the obstacles can feel endless: ignorance, indifference, stubbornness, and, worst of all, coalitions of selfish interests that can use the very tools of free institutions to block change. In those moments, it’s easy to sigh for someone who can sweep the obstacles aside and force a resistant people to be well governed.

But that hope leaves out the main ingredient in good government: the improvement of the people themselves.

One great benefit of freedom is that rulers cannot improve a people’s condition while bypassing their minds. Under free institutions, you don’t just “fix things” for people; you draw people into the habits and capacities that make a better public life possible.

If it were possible to govern people well in spite of themselves, that good government would be as short-lived as the freedom of a country that has been “liberated” by a foreign army without its own active participation. It’s true that a despot can educate the people—and if despotism ever has an excuse, that’s the best one. But education that truly forms human beings, rather than training machines, eventually teaches them to claim control over their own actions.

History makes that clear. The leading French philosophers of the eighteenth century were educated by Jesuits; even that education was real enough to awaken their appetite for freedom. Anything that strengthens people’s faculties, even a little, creates a stronger desire to use those faculties without a leash. Popular education fails if it prepares people for any political condition other than the one it will almost certainly lead them to want—and very likely to demand.

None of this means that absolute power is never justified. In extreme emergencies, I’m not condemning the temporary assumption of absolute authority in the form of a dictatorship. Free nations have sometimes granted such power by choice, treating it as a harsh medicine for diseases in the body politic that couldn’t be cured gently.

But even then, accepting dictatorship—even for a strictly limited time—can be excused only if the dictator uses the whole power he takes to remove what blocks the nation from freedom, as figures like Solon or Pittacus were said to do.

As a standing political ideal, “good despotism” is a mirage. In practice—except as a temporary tool for a specific purpose—it becomes one of the most foolish and dangerous fantasies people can cling to. In fact, in a country that is at all advanced in civilization, a good despotism can be worse than a bad one: it softens and drains the people more thoroughly. Augustus’s mild despotism prepared the Romans for Tiberius. If nearly two generations of gentle slavery hadn’t first crushed their spirit, they might have had enough energy left to resist the harsher rule that followed.

All of this clears the ground for the real claim:

The best government, in principle, is the one in which sovereignty—the ultimate controlling power, the authority that decides in the last resort—belongs to the whole community. That means every citizen has a voice in how that ultimate authority is used, and, at least from time to time, is called on to take a real part in governing by personally carrying out some public function, local or national.

To test that claim, we have to look at it through the two lenses laid out in the previous chapter—two questions that together capture what it means for a government to be “good”:

  • How well does it manage society’s affairs using the existing moral, intellectual, and practical abilities of its members?
  • What does it do to those abilities over time—does it improve them, or damage them?

When I say “the ideally best form of government,” I do not mean a system that is practical or desirable at every stage of civilization. I mean: where it is practical and desirable, which form produces the greatest benefits, both now and in the future?

On that standard, only a fully popular government can make a serious claim to being the ideal. It excels in both departments. It tends to produce better government in the present, and it tends to build a stronger, higher national character than any alternative.

Its superiority, as far as immediate well-being goes, rests on two principles that are about as broadly true as anything we can say about human affairs:

  1. The rights and interests of any person are secure only when that person is able to stand up for them—and in the habit of doing so.
  2. Prosperity rises higher and spreads wider as more kinds of personal energy are recruited to create it.

Put more concretely: people are protected from harm by others only to the extent that they have the power—and the practiced disposition—to protect themselves. And people succeed in the long struggle with nature in proportion as they are self-reliant: depending on what they can do, individually or together, rather than depending on what others will do for them.

The first of these claims—that each person is the safest guardian of their own rights and interests—is one of those basic rules of prudence that everyone who can manage their own life follows instinctively whenever their own stake is on the line.

Many people recoil from the idea that politics should be built around self-interest, as if it were a creed of pure selfishness. But notice what that complaint quietly assumes: that human beings, in general, do put themselves first, and they care more about people close to them than strangers far away. If that stopped being true—if people reliably cared about everyone else as much as themselves—then communism wouldn’t just be workable; it would be the only morally defensible way to organize society, and it would almost certainly happen.

For my part, I don’t buy the caricature of “universal selfishness.” I can easily imagine communism working right now among the best and most disciplined people, and I can imagine it becoming workable for many more over time. Interestingly, though, the loudest critics of the “self-interest dominates” idea are often the same defenders of the status quo—and since that idea is inconvenient for them, their outrage makes me suspect they actually believe it’s true: that most people, most of the time, put themselves first.

But we don’t even need that claim to justify a more democratic principle: everyone has a right to a share of sovereign power. You don’t have to assume that an exclusive ruling class will consciously and deliberately exploit everyone else. A far simpler risk is enough:

  • When a group has no seat at the table, its interests are easy to forget.
  • And even when those interests are noticed, they tend to be judged through the lens of people whose lives aren’t directly affected.

Take Britain as an example. The “working classes,” as people call them, are effectively shut out of direct political influence. I don’t believe the classes who govern generally want to harm working people. In fact, their attitude has flipped from earlier times. There was a period when the intention to keep labor down was explicit—think of the long efforts to suppress wages by law. But today, many lawmakers are willing to make real sacrifices, including financial ones, to help the poor. Few ruling groups in history, I suspect, have been more sincerely determined to do their duty toward the less well-off.

And yet—here’s the problem—does Parliament, or almost any individual member of it, ever truly see an issue through a working person’s eyes?

When a question comes up where laborers have a direct stake, is it treated from their point of view—or almost entirely from the point of view of employers?

I’m not claiming that workers are always right. Often they aren’t. But sometimes they’re just as close to the truth as their employers, and in every case their perspective deserves serious attention. Yet it isn’t merely dismissed; it’s treated as if it doesn’t exist.

Look at strikes. I doubt there’s even one leading figure in either House who doesn’t feel sure—absolutely sure—that the whole “reason of the matter” is on the masters’ side and that the workers’ case is basically ridiculous. Anyone who has actually studied the issue knows how wrong that is—and how much less superficial the debate would become if the striking classes could really make themselves heard in Parliament.


Here’s a hard rule of political life: good intentions don’t substitute for power.

No matter how sincere someone is about protecting other people’s interests, it is never “safe” to arrange things so that those people must depend on someone else’s judgment and generosity. And there’s an even more obvious truth: lasting improvements in people’s lives are secured mainly by their own efforts, not by others acting for them.

Together, those two principles explain a striking historical pattern: free communities have tended to be

  • less soaked in social injustice and crime, and
  • more prosperous and energetic,

than societies ruled by monarchs or oligarchs—or than those same communities became after they lost their freedom.

To see this, compare like with like: governments that existed at the same time.

  • The free Greek city-states versus the Persian satrapies.
  • The Italian republics, and the free towns of Flanders and Germany, versus feudal monarchies.
  • Switzerland, Holland, and England, versus Austria or pre-revolutionary France.

Their economic and civic advantages were so visible they were hard to deny. And their superiority wasn’t just about wealth; it showed up in the texture of daily life and government. Even if you grant every exaggeration about the noise and disorder of public life in the free states, it doesn’t begin to match what was routine in many monarchies: the casual, contemptuous trampling of the majority, the petty tyrannies inflicted by officials, the relentless predation hidden inside “fiscal arrangements,” and the terrifying secrecy of courts that called themselves justice.


At the same time, I have to admit something important: the “freedom” that has existed in history was usually partial freedom. The benefits of liberty were won by extending political privilege to some people, not to everyone. A government that extends those benefits impartially to all is still an unrealized ideal.

Still, every step toward that ideal matters. In many historical situations, the general level of education and public spirit made “full equality” impossible to achieve all at once. But the guiding picture remains clear: the most complete idea of a free government is one in which everyone participates in the protections and advantages of freedom.

Because whenever anyone—no matter who—is excluded:

  • their interests lose the ordinary guarantees that protect everyone else,
  • and they have less room and encouragement to develop their abilities in ways that help both themselves and the wider community.

And since a society’s prosperity is always tied to how much human energy it can unlock and aim well, exclusion doesn’t just hurt the excluded. It drags down the whole.


So far, the argument has been about how well a society is managed right now—the practical welfare of the living generation. But if we shift the focus from outcomes to character, the case for popular government becomes even stronger.

Everything turns on a deeper question: which kind of character do we want to be most common, for humanity’s sake?

  • The active type or the passive type?
  • The kind that fights evils or the kind that merely endures them?
  • The kind that bends to circumstances or the kind that tries to bend circumstances to itself?

Most moral clichés—and most people’s instincts—favor the passive type. We say we admire energy, but we often prefer submissiveness in the people around us. A compliant neighbor feels safer. Their quiet makes it easier for us to get our own way. A passive person, if we don’t need their initiative, can feel like one less obstacle in our path. Contentment doesn’t threaten us; it doesn’t compete.

But here is something close to a law of social progress: nearly all improvement comes from the discontented—from people who refuse to accept what’s wrong and try to change it. And there’s another asymmetry: it’s easier for an active person to learn patience than for a passive person to learn genuine energy.

If we break “mental excellence” into three kinds—intellectual, practical, and moral—the advantage of the active type is obvious for the first two.

Intellectual excellence is built from effort. Clear thinking, discovery, and understanding don’t appear by accident; they’re produced by sustained work. The desire to act, to experiment, to attempt new things—whether for oneself or others—creates not only practical talent but even speculative insight. By contrast, the kind of “culture” compatible with the passive type tends to be thin: a mind that stops at entertainment or mere contemplation.

The test of real thinking isn’t dreamy abstraction; it’s whether thought can be made precise and used. Thinking that never aims at application often dissolves into fog—mystical systems and wordy metaphysics, like the more obscure traditions associated with the Pythagoreans or the Vedas.

Practical improvement makes the contrast even sharper. The character that improves life is the one that wrestles with nature and habit, not the one that simply yields. The traits that help individuals—initiative, persistence, the willingness to struggle—are tied to the traits that, in the long run, raise the whole community.


The harder case, at first glance, is moral excellence. People often associate passivity with goodness: with harmlessness, with virtue, with the famous moral charm of “contentment.” I set religion aside here, though it’s worth noting that many religions have historically praised an inactive temperament as closer to the ideal of submission, and that Christianity, at its best, can correct such distortions rather than be trapped by them.

Even without religion, it’s tempting to think: a passive person may not be useful, but at least they won’t hurt anyone.

That’s a mistake. First, contentment doesn’t naturally follow from passivity. And when passivity exists without true contentment, the moral consequences can be ugly. If someone wants advantages they don’t have, but doesn’t cultivate the energy required to pursue them, it’s easy for desire to curdle into bitterness—hatred toward those who do have what they want.

The person who strives, who sees a path to improve their condition, usually feels goodwill toward others on the same path—even those who’ve succeeded. In a society where many people are trying, failure tends to be interpreted in the language of effort, opportunity, or bad luck.

But those who desire what others possess and yet don’t seriously try to obtain it fall into two common moods:

  • endless complaint that “fortune” isn’t doing for them what they won’t attempt for themselves, or
  • envy and ill-will toward anyone who has what they want.

And the more a society believes that success comes from sheer accident—fate, luck, favoritism—rather than exertion, the more envy becomes a national habit.

Envy, I’d claim, is especially intense in what Europeans loosely call “the East”: in Eastern moral literature, the envious man is a stock character; and in daily life, envy is treated as dangerous, almost supernatural. The fear that an envious glance can harm you—the superstition of the evil eye—spreads precisely because envy is assumed to have power.

Next come parts of southern Europe. Spain has hounded its great men with envy, soured their lives, and often cut short their successes.

France is a more complex case. The French are essentially a southern people whose temperament is impulsive and energetic, yet whose history—shaped by despotism and Catholicism—has trained habits of submission and endurance, so that “patience” became their admired ideal. If envy and hostility to superiority aren’t worse there than they are, it’s because French character contains strong counterweights—especially a remarkable individual energy that emerges whenever institutions allow it, even if that energy is less steady and more intermittent than among the self-reliant, struggling Anglo-Saxons.


Of course, there are genuinely contented people everywhere—people who don’t just refrain from seeking what they lack but truly don’t want it. Those people don’t resent others for having more.

But most “contentment” is counterfeit: it’s real dissatisfaction mixed with laziness or self-indulgence. People in that state don’t try to rise by legitimate effort; instead, they take pleasure in pulling others down to their own level.

Even when contentment is innocent, we only admire it under specific conditions. We admire it when the indifference is only toward external advancement—money, status, comforts—while there is still an active drive toward inner growth, moral improvement, or a sincere, unselfish commitment to help other people.

A person—or a family—who feels no ambition to make anyone else happier, to serve their neighborhood or country, or to improve themselves morally doesn’t earn admiration. We naturally interpret that kind of “content” as a lack of spirit.

The contentment worth praising is something else entirely:

  • the ability to do cheerfully without what can’t be had,
  • a clear sense of what different desires are worth,
  • and a willing choice of the greater good over the lesser when they conflict.

And here’s the twist: these virtues tend to arise most naturally in people who are actively engaged in improving their own lives or someone else’s. When you regularly test your strength against real obstacles, you learn which barriers you truly can’t move and which ones you could move but aren’t worth the cost. When your mind is occupied with useful, achievable projects, you’re less likely to brood over glittering prizes that aren’t worth pursuing—or that don’t actually fit your life.

So the active, self-helping character is not only better in itself; it’s also the type most likely to absorb whatever is genuinely excellent in the passive type.


Turn now to England and the United States. Their restless “go-ahead” spirit can be criticized mainly because it often aims at trivial goals—comfort, money, display. But that energy, in itself, is one of humanity’s best resources.

Someone once observed that when things go wrong the French instinct is to say, “We need patience,” while the English instinct is to say, “What a shame.” The people who react to failure with “shame”—who immediately assume the problem could and should have been prevented—are the ones who, over time, do the most to improve the world.

Even if that energy is misdirected toward low aims, it still expands human control over material conditions. And that expansion isn’t nothing: it builds tools, systems, and capacities that later make bigger intellectual and social achievements possible. As long as the energy exists, some people will turn it—and will keep turning it more and more—toward improving not just outward circumstances but human character itself.

Because the most deadly obstacle to progress isn’t even misused energy. It’s no energy at all: inactivity, lack of aspiration, absence of desire. When that becomes widespread, it doesn’t just stall improvement; it’s the only condition under which a small energetic minority can badly misdirect an entire society. That, I believe, is a central reason why so much of humanity remains in a savage or semi-savage state.


All of this ties back to government. The conclusion is blunt: rule by one or a few tends to cultivate the passive type of character, while rule by the many tends to cultivate the active, self-reliant type.

Irresponsible rulers need their subjects to be quiet more than they need them to be energetic—except for the kinds of activity rulers can command and harness. For people excluded from political participation, every government teaches the same lesson: treat the orders of men as if they were laws of nature. Yield. Accept. Obey.

And people don’t become mere tools in the hands of rulers if they have a strong will, a sense of self, a spring of inner activity in the rest of their lives. But when they show those traits, despots don’t encourage them; at best, they tolerate them grudgingly, as something that must be “forgiven.”

Even when rulers aren’t consciously afraid of their subjects’ mental activity, their very position suppresses it. Effort is discouraged not only by punishment or explicit repression, but by something more effective: the belief that effort is powerless—that no matter what you try, nothing will change.

Between being ruled by someone else’s will and building the habits of self-reliance and self-government, there’s an obvious clash. The tighter the control, the deeper the clash.

Different rulers meddle to different degrees. Some leave people a little room to act; others run their lives for them. But the difference is mostly about how far they go, not what kind of rule it is. In fact, the “best” despots—those who genuinely mean well—often go the farthest in tying up people’s freedom. A bad despot may be content, once he’s comfortable, to stop interfering. A good despot can’t resist “improving” the public—by forcing people to conduct their own affairs the way he thinks they should. The famous regulations that locked major French industries into fixed, state-approved methods were created under Colbert.

Now compare that with what happens to a person’s mind and character when the only outside limits he feels are:

  • the hard facts of nature, and
  • social rules he has a real hand in making—rules he’s free to criticize in public and work to change if he thinks they’re wrong.

Even in a partly popular government, people who don’t enjoy full citizenship can sometimes act with a surprising amount of freedom. Still, there’s a special boost that comes from starting on equal footing—from not having to think, “My future depends on how successfully I flatter or impress a ruling body that doesn’t count me as one of them.”

Being excluded from the constitution doesn’t just sting; it drains energy. It’s discouraging for an individual, and even more crushing for an entire class, to be forced to plead from outside the door—to have their fate decided about them, not with them. Freedom has its strongest effect on character only when a person either is a full citizen already or can realistically expect to become one, with the same standing as anyone else.

But the biggest advantage isn’t even the emotional one. It’s the practical training that comes from occasionally being required, in your turn, to carry some public responsibility.

Most people’s daily lives don’t naturally stretch their minds. Work tends to be routine. It’s usually done for the plainest kind of self-interest—meeting everyday needs—not because the work itself inspires larger thoughts. The task and the way it’s done rarely push the mind beyond private concerns. Even if good books are available, there’s often no strong reason to read them. And many people never spend time with anyone significantly more educated than themselves.

Give someone real public duties, though, and you start filling those gaps. If circumstances let the share of public work be substantial, it can function as an education.

That’s one reason Athens produced such unusually capable citizens. Whatever we may say about the flaws of ancient society and ancient moral ideas, the regular civic life of Athens—their courts and their popular assembly—raised the intellectual level of the average Athenian citizen far above what we see in any other large population, ancient or modern. You can feel the evidence everywhere in the best histories of Greece. And you don’t even need to look that far: think of the level of argument Athenian orators believed was necessary to persuade ordinary citizens. They didn’t talk down to them. They couldn’t.

A similar kind of benefit exists in England, though on a smaller scale. Many people in the lower middle class may be called to serve on juries or take on local parish offices. It doesn’t happen to everyone, it isn’t continuous, and it doesn’t expose people to as wide a range of big public questions as Athenian democracy did. Still, it makes those who serve meaningfully different—in the breadth of their ideas and the development of their abilities—from people whose whole lives have been spent only at a desk copying documents or behind a counter selling goods.

Even more valuable is the moral education that comes from taking part—however rarely—in public functions. In that role, a private citizen has to practice habits that ordinary life doesn’t demand:

  • weighing interests that aren’t his own,
  • judging conflicts by rules other than personal preference, and
  • applying principles that exist for one reason: the common good.

And he rarely does this alone. He usually works alongside people more practiced in public-minded thinking, people who can supply reasons that clarify his judgment and awaken his concern for the general interest. Little by little, he learns to experience himself as part of a larger “we.” What benefits the public starts to feel, emotionally and practically, like it benefits him.

Where this “school of public spirit” doesn’t exist, people barely imagine that ordinary private citizens owe society anything beyond obeying laws and submitting to government. There’s no unselfish identification with the public. Interest and duty collapse inward—into the individual and the family.

In that condition, a person doesn’t think about shared aims pursued together. If he thinks of collective matters at all, it’s mainly as contests—goals to be won against others, often at their expense. Since he never works with neighbors on common projects for mutual benefit, a neighbor isn’t an ally or a partner. A neighbor is a rival. And when that becomes the default attitude, even private morality weakens while public morality dies outright. If this were the universal and only possible arrangement, the highest hope of any legislator or moral teacher would shrink to something like this: turning the population into a harmless flock, grazing side by side without biting.

Put all of this together and the conclusion is hard to escape:

  • The only government that can meet the full demands of social life is one in which the whole people participate.
  • Any participation—even in the smallest public task—does real good.
  • Participation should be as wide as the community’s general level of development can responsibly sustain.
  • Ultimately, nothing less than admitting everyone to a share of the state’s sovereign power can fully satisfy what a good society ought to aim for.

But there’s a practical limit. In any community larger than a small town, it’s impossible for everyone to take part personally in more than a tiny fraction of public business. And once you accept that fact, the “ideal” form of a perfect government points in one direction: it must be representative.

IV — Under What Social Conditions Representative Government Is Inapplicable


We’ve treated representative government as the best political arrangement humans have come up with—at least for societies that are ready for it. In general, the more a population has developed its education, prosperity, public spirit, and institutions, the more naturally this kind of government fits. As you move down that ladder, representative government usually becomes less workable. Not always, though. Whether a people can sustain representative government depends less on their “rank” in some grand scale of humanity than on whether they have certain specific prerequisites. Those prerequisites tend to travel with overall development, so big mismatches are possible but uncommon.

So where, exactly, does representative government stop being a realistic option—either because it can’t function at all, or because something else would work better? Let’s walk down the scale and find the breaking point.

When Representative Government Can’t Last

Representative government, like any government, is a bad fit wherever it can’t survive over time—that is, wherever it can’t meet the three basic conditions named earlier:

  1. The people must want it.
  2. The people must be willing and able to protect it.
  3. The people must be willing and able to do the work it requires of them.

1) When people won’t accept it

This question becomes practical mainly in one situation: when someone with power—an unusually enlightened ruler, or a foreign power that has taken control—offers representative government as a “gift.” For individual reformers, the objection “the public doesn’t support it” is almost beside the point, because changing public opinion is exactly what reformers are trying to do.

And even when public opinion resists, it usually isn’t because people have carefully weighed representative government and rejected it. More often, they resist because they don’t like change. There are exceptions: sometimes religion has encouraged loyalty to a particular ruling dynasty and made any limit on that dynasty’s power feel like a sacrilege. But generally, the old doctrine of “passive obedience” didn’t mean devotion to monarchy as such; it meant obedience to whoever happened to hold power—king or crowd.

In most cases where representative government is even on the table, the real obstacles aren’t active hatred. They’re indifference and incomprehension—people don’t care much about representative institutions, and they don’t understand how they work or what they demand. And that’s just as deadly. It’s often easier to redirect a strong feeling than to create commitment where there was only apathy.

Here’s why that matters: if people don’t value a representative constitution and feel attached to it, they have almost no chance of keeping it.

The executive branch—whatever form it takes—is the part of government that holds immediate power and deals with the public face-to-face. People aim their hopes and fears at it. It’s where favors come from, where punishments come from, and where the spectacle and prestige of authority are concentrated. If the bodies meant to restrain the executive aren’t supported by a strong public mood—an “effective opinion and feeling”—the executive will always have ways to shove them aside or make them obedient.

In the end, representative institutions last only if people are prepared to defend them when they’re threatened. If they aren’t valued enough for that, they usually never take root. And if they do, they’re very likely to be toppled the moment a leader at the top—or any ambitious faction leader who can gather force for a quick strike—decides it’s worth taking a small risk to grab absolute power.

2) When people won’t (or can’t) do their part

Even if representative government can be introduced and doesn’t immediately collapse, it fails in another way when the public lacks either the will or the ability to play the role the system assigns them.

Representative government depends on enough people caring about national affairs to form something like a public opinion. When that interest is missing—when only a small fraction pays attention—elections don’t become a tool for governing well. They become a tool for:

  • private advantage,
  • local favoritism, or
  • loyalty to patrons (people you depend on or belong to).

In that climate, the small group who manages to dominate the representative body usually uses it as a career ladder—a way to get rich, get offices, get deals. What happens next depends on the strength of the executive:

  • If the executive is weak, the country dissolves into endless fights for jobs and influence.
  • If the executive is strong, it becomes despotic cheaply—by buying off representatives (or at least the troublemakers) with a share of the spoils.

The only “product” of national representation then is this: you get the same real rulers as before, plus an assembly living off the public—and any abuse that benefits part of that assembly is unlikely to be fixed.

Sometimes, even this sorry version of representation is still worth the cost, because it tends to bring publicity and debate—not guaranteed, but a natural companion of even nominal representation. Modern Greece is an example. If its assembly is largely made up of office-seekers who do little to govern well and don’t greatly limit executive arbitrariness, it still helps keep alive the idea of popular rights and supports the genuine press freedom that exists there.

But that benefit, in such a case, depends on having a hereditary king alongside the representative body. Why? Because the factions compete for the king’s favors instead of fighting for the top office itself. If those same selfish groups were battling to seize the chief power directly, they would—like the factions in Spanish America—keep the country in a permanent fever of coups, revolutions, and civil war. You wouldn’t get stable constitutional liberty. You’d get alternating tyrannies: not even lawful despotism, but rule by raw violence, exercised by one political adventurer after another. The “name and forms” of representation would mainly achieve one thing: preventing despotism from ever becoming stable—and stability is the one condition under which despotism’s harms can be softened or its few possible advantages (order, predictability) can show up at all.

When Representative Government Might Exist—but Shouldn’t (Yet)

So far, we’ve covered cases where representative government can’t endure. There’s a second category: situations where it could possibly exist, but where another regime would do better—especially when the society needs to learn something essential for civilization, and representative politics would get in the way of learning it.

The first lesson: learning obedience

The most obvious case is one already discussed: when a people still need the earliest lesson of civilized life—obedience to a common authority.

Imagine a group trained by harsh conditions to be brave, energetic, and fiercely independent—fighting nature, rival groups, and scarcity—but not yet accustomed to steady submission to any shared superior. That group is unlikely to develop the habit under a government run by itself. A representative assembly drawn from them would mostly mirror their own turbulence. It would resist any policy that feels like restraint—exactly the kinds of restraints that might improve them by turning raw independence into social order.

Historically, such tribes usually accept the first basic conditions of civilization through warfare—because war makes military command necessary, and military command requires despotic authority. They’ll submit to a war leader when they won’t submit to anyone else, except sometimes to a prophet believed to be inspired, or a sorcerer credited with miracles. Those figures can gain temporary dominance, but because their influence is personal rather than institutional, it rarely reshapes a people’s general habits—unless the prophet is also a military leader who spreads a new religion by force, or unless military chiefs harness the prophet’s influence to support their own rule.

The opposite problem: a people too submissive

A people can also be unready for representative government for the opposite reason: not wildness, but deep passivity—a readiness to accept tyranny.

If such a population somehow received representative institutions, they would predictably elect their oppressors as “representatives,” and the mechanism that looks, at first glance, like it should lighten the burden would actually make it heavier.

Many peoples have climbed out of this condition in another way: through a central authority that first competes with local tyrants and eventually masters them—above all, because the central authority is single. French history, from Hugh Capet through Richelieu and Louis XIV, is one long illustration. Even when the king was not much stronger than some major feudal lords, French historians have noted his advantage: there was only one king, while there were many local oppressors.

That made the king a focal point for hope. People abused in one region couldn’t appeal to every neighboring lord, but they could look to the crown. Across the kingdom, victims of local domination sought refuge and protection from the king—first against one lord, then another. The king’s rise was slow, but it was steady, because the opportunities that presented themselves belonged uniquely to him. And as he succeeded, the oppressed gradually unlearned the habit of accepting oppression as natural.

The king’s interest aligned with these partial liberations. He benefited when serfs loosened themselves from their lords and came into direct dependence on the crown. Under royal protection, many communities formed that recognized no superior but the king. Compared with the nearby castle lord, obedience to a distant monarch can feel like freedom itself. And for a long time, the king’s position forced him to act less like a master over the newly freed classes and more like their ally, because he needed them.

In this way, a central power—despotic in theory but often limited in practice—helped carry a society through a necessary stage of improvement that a truly functioning representative government would probably have blocked. In Russia, nothing short of despotism—or a general massacre—could have achieved the emancipation of the serfs.

Breaking the grip of extreme localism

The same stretch of history highlights another problem: one of the strongest barriers to progress, up to a fairly advanced stage, is an entrenched spirit of locality.

A people can be capable of freedom in many respects and still be unable to merge into even a small nation. Jealousies and rivalries may make voluntary union impossible. And even if you could draw a boundary around them and declare them one country, they might still lack the habits and feelings that make unity real. They may resemble the citizens of ancient city-states or the residents of traditional Asian villages: skilled at running local affairs, sometimes even with effective popular government at that small scale, but with thin sympathy for anything larger—and no practice handling interests shared across many towns.

History offers no clear example of many such “political atoms” fusing into one people—developing a shared national identity—without first being subjected to a common central authority. By repeatedly deferring to that authority, learning its plans, and serving its purposes, people begin to grasp the idea of large, shared interests across a broad territory.

Those large interests are, by necessity, central in the ruler’s mind. And through the connections the ruler gradually builds with localities, the wider perspective becomes familiar to everyone else.

The most favorable situation for this stage of development would be one where you create representative institutions without full representative government: local representatives exist and advise, but mostly serve as allies and instruments of the central power rather than trying to block or control it. People are “taken into council,” so they learn politics beyond the village, but they don’t hold supreme power.

This arrangement can deliver political education more effectively than a purely centralized system, because it reaches local leaders and the population through their own representatives. At the same time, it keeps alive the tradition that government ought to rest on general consent, or at least prevents the opposite tradition—government without consent—from becoming sanctified by habit. Once that darker tradition hardens into custom, it often turns promising beginnings into grim endings and becomes a common reason why improvement stalls so early: one era structures power in a way that blocks the essential work of later generations.

Meanwhile, there’s a hard political truth here: an irresponsible monarchy, more than representative government, is what can weld a mass of tiny political units into a people—creating shared cohesion, enough collective strength to resist conquest or foreign aggression, and a public life large and complex enough to occupy and expand the society’s political intelligence.

Why Early Societies Often Begin with Kings

For these reasons, kingly government—not controlled by representative bodies (though it may be supported by them)—is often the best form of polity in the earliest stages of a community. That includes even small city-communities like those of ancient Greece. Historically, kings came first: real public opinion existed, but without any formal, constitutional mechanism to restrain the monarch. Only after an unknown (and probably very long) period did free institutions appear, and even then they often emerged gradually through oligarchies of a few families.

Other Weaknesses—and When Foreign Rule Can Accelerate Progress

You could list dozens of other flaws that make a people less able to use representative government well. But in many of those cases, it’s not obvious that rule by One or a Few would actually cure the problem.

Consider:

  • strong prejudices,
  • stubborn attachment to old habits,
  • deep defects in national character,
  • widespread ignorance and lack of education.

If these are common in a population, their representative assembly will usually reflect them faithfully. And if the executive—the day-to-day administrators—happens to be made up of people who are relatively free of these defects, they can often do more good when they aren’t constantly forced to win the voluntary assent of a representative body shaped by those same weaknesses.

Still, unlike the earlier cases, simply being “the ruler” doesn’t automatically create motives that push in the right direction. The One and his advisers, or the Few, are usually not immune to the society’s general limitations—unless they are foreigners from a more advanced civilization.

In that case, rulers can be far more developed than the population they govern. And although foreign domination carries unavoidable evils, it can sometimes be a major advantage—moving a people quickly through several stages of progress and clearing obstacles that might have persisted indefinitely if the population had been left to its own internal tendencies and accidents.

If the rulers aren’t foreigners, the only comparable source of improvement is rare: the accident of a monarch of extraordinary genius.

There have been a few leaders in history who—luckily for everyone else—stayed in power long enough to make their reforms stick, because an entire generation grew up inside the new system and could keep it going after the leader was gone. Charlemagne is one example. Peter the Great is another.

But cases like that are rare enough that they belong in the category of “fortunate accidents”—the kind of timing that can decide, at one crucial moment, whether a whole society makes a leap forward or slides backward toward barbarism. Think of how much hung on having someone like Themistocles during the Persian invasion, or a William of Orange at the right moment.

It would be ridiculous to design political institutions around the hope of getting lucky like that. And in any case, people with that level of talent don’t usually need absolute power to make an outsized difference; the last examples show that plainly.

The situation institutions actually have to grapple with is much more common: a society where a small but leading group—because of race, origin, or some other circumstance—stands noticeably above the rest in education, social habits, and general civic character.

In a country like that, rule by the majority’s representatives risks cutting the whole population off from what the more advanced minority could contribute: better administration, broader knowledge, higher standards of public conduct. But rule by the minority’s representatives creates the opposite disaster: it can lock the majority into permanent humiliation, until their only hope of being treated decently is to throw off—or drive out—one of the most valuable sources of future improvement.

So what arrangement offers the best chance of progress in such a divided society? Paradoxically, it’s often this: a chief ruler from the dominant class whose authority is constitutionally unlimited, or at least practically decisive. Why? Because he alone has an incentive to raise the condition of the mass. He isn’t threatened by them; he is threatened by his peers. Strengthening the broader population gives him a counterweight against the rival elites who might otherwise hem him in.

And if good fortune adds one more ingredient—a representative body drawn from the superior caste, placed beside him not as a true controller but as a subordinate partner—things can improve further. Even limited, such a body can matter:

  • It can question and object, forcing reasons into the open.
  • It can have occasional outbursts of independence, keeping alive the habit of collective resistance.
  • Over time, it can be expanded gradually into something genuinely national.

That slow expansion is, in substance, what happened with the English Parliament. Under these conditions, a community built on sharp internal inequality may have about the best prospects for improvement it can reasonably hope for.


Not every obstacle to representative government makes it impossible. Some traits don’t make a people unfit for it, but they do make it much harder for representative institutions to deliver their full benefits. One tendency in particular deserves special attention.

There are two impulses that are completely different in spirit, but often push people and nations in the same political direction:

  1. The desire to exercise power over others.
  2. The reluctance to have power exercised over oneself.

How these two motives balance, from one society to another, is one of the most important forces shaping history.

In some nations, the hunger to rule others is so much stronger than the love of personal independence that people will trade away real freedom for the mere appearance of collective dominance. The psychology is a lot like the private soldier in a victorious army: he hands over his own freedom of action to the general, as long as the army wins and he can tell himself he’s part of a conquering force. The comforting idea that he personally shares in the domination over the conquered is mostly an illusion—but the feeling is enough.

A people like this won’t enjoy a government that stays in its lane—one that is strictly limited, avoids interference, and lets most of life run without official supervision. They don’t want a restrained state. To them, authorities can hardly do too much, as long as positions of authority are open to competition.

For an average person in this type of society, the distant chance of getting a slice of power over fellow citizens is more attractive than the sure, everyday benefit of living under a government that refuses to meddle unnecessarily. That mindset produces a politics dominated by office-seeking—what the older writers called “place-hunting.” In such a country:

  • Politics becomes a scramble for posts and patronage.
  • People care about equality of access, but not much about liberty.
  • Party conflict turns into a fight over which group gets to run the all-controlling machine.
  • “Democracy” gets reduced to one thin idea: opening government jobs to everyone, instead of reserving them for a few.
  • And the more “popular” the institutions become, the more offices get created—until you end up with over-government everywhere: everyone supervising everyone, and the executive supervising all.

It would be unfair and ungenerous to claim this is a perfectly accurate portrait of the French people. Still, to the extent that this trait has been present, it helps explain two failures: first, why representative government restricted to a limited class collapsed into extreme corruption; and second, why the attempt at representative government by the whole adult male population ended by handing one man the power to send any number of others—without trial—to penal colonies like Lambessa or Cayenne, so long as he left everyone believing they weren’t permanently shut out from the possibility of receiving his favors.

By contrast, the national character that more than anything else suits the people of this country for representative government is almost the reverse. They are intensely sensitive to any attempt to exercise power over them that isn’t backed by long-established practice and by their own sense of what’s right. But they generally don’t care much about exercising power over others.

They have no real sympathy with the lust for governing. And because they’re only too familiar with the private interests that often drive people to seek office, they’d rather have public authority handled by those who acquire it without chasing it—through social position and established standing.

If foreigners understood this, it would make several “contradictions” in English political life much easier to read. Englishmen are often perfectly willing to be governed by higher classes, yet they show remarkably little personal submissiveness to them. No people are quicker to resist authority when it pushes past the accepted limits, or more determined to make rulers remember a basic condition: they will be governed only in the way they themselves consider proper.

That’s why office-hunting, as a national passion, is relatively rare in England. Apart from the few families and circles for whom government employment is the obvious career path, most people’s idea of “getting ahead” points somewhere else—success in business or in a profession. They have a strong dislike for political life as a mere contest for jobs, whether fought by parties or by individuals. And few things bother them more than the multiplication of public posts.

On the Continent, by contrast—especially in countries burdened by bureaucratic habits—expanding government employment is often popular. People would rather pay higher taxes than reduce, even slightly, their own or their relatives’ chance of securing a position. And when they call for “retrenchment,” they often don’t mean eliminating offices at all; they mean cutting the salaries of the offices that are too well paid for the ordinary citizen to imagine ever being appointed to them.

V — Of The Proper Functions Of Representative Bodies

When you talk about representative government, you have to separate two things:

  • the core idea—what “representative government” means in principle, and
  • the particular arrangements different countries ended up with, thanks to history, accidents, and whatever political fashion dominated at the time.

At its heart, representative government means this: the people—all of them, or at least a large and meaningful portion—hold the ultimate power to control the state, but they exercise that power through deputies they choose in regular elections. However the constitution is designed, that final controlling authority has to live somewhere. In a representative system, it lives—practically speaking—in the people acting through their representatives.

And that final authority has to be real, not symbolic. The people (through their representatives) must be able, whenever they choose, to become the “master switch” for the whole government—to steer it, stop it, or replace the people running it. A written constitution doesn’t have to spell this out explicitly. Britain’s constitution, for example, doesn’t hand Parliament a neat clause saying, “You are supreme.” Yet in practice it delivers the same result.

Here’s the key point: even in a government that advertises itself as mixed or balanced, the power that ultimately decides conflicts cannot be permanently divided. In the long run, final control is singular—just as it is in a straightforward monarchy or a straightforward democracy. That’s the real insight behind an old claim, revived by respected modern thinkers, that a “perfectly balanced” constitution can’t exist.

You can have something like balance most of the time. But the scales never hang perfectly still.

And you often can’t tell, just by reading the formal rules, which side will win when there’s a serious clash. Britain is a good example. Formally, each of the three parts of sovereignty—the Crown, the House of Lords, and the House of Commons—has powers that, if pushed to the limit, could jam the entire machine. On paper, each can block the others. And if any one of them could improve its position by using that blocking power, ordinary human nature gives you no reason to expect restraint. If one branch were attacked by the others, there’s no doubt it would use every legal tool it had to defend itself.

So why doesn’t the system constantly collapse into mutual sabotage?

Because Britain runs on more than written law. It runs on unwritten constitutional rules—what you might call the country’s political morality: shared expectations about what you may legally do versus what you should do if you want the system to keep working. If you want to know who is truly supreme in a constitution like this, you don’t just read statutes. You look at the living code of political behavior that actually governs how officials use their legal powers.

Take the Crown. By strict constitutional law, the monarch can refuse assent to an Act of Parliament. The Crown can also appoint and keep a minister in office even if Parliament protests. But Britain’s constitutional morality makes those powers functionally unusable. They exist on paper, yet the country’s political habits and expectations prevent them from being exercised. And because those unwritten rules require that the head of the government be, in substance, chosen by the House of Commons, the Commons becomes the real sovereign power.

But notice what’s doing the heavy lifting: those unwritten limits survive only as long as they match the real distribution of strength in society. Every constitution has a strongest force—the one that would win if the usual compromises broke down and the players actually tested one another. Unwritten maxims stay in place, and actually guide behavior, when they give practical dominance to the institution that already has the greatest power “out of doors”—in the country at large, in public opinion, in organized social force.

In England, that strongest force is popular power. So if Britain’s formal laws plus its unwritten conventions did not end up giving the popular element real supremacy—supremacy across every part of government, proportional to its actual strength—then Britain wouldn’t have the stability it’s known for. Something would have to give. Either the written laws would change, or the unwritten conventions would. A durable constitution can’t permanently contradict the reality of who holds power in the society beneath it.

In that sense, Britain qualifies as representative government in the strict, correct meaning of the term. The powers left in hands not directly answerable to the people are best understood as safety measures—guardrails the ruling power tolerates to protect itself against its own mistakes. Well-designed democracies have always had these. Athens had many such checks. So does the United States.

Still, even if representative government requires that the representatives of the people hold practical supremacy, it doesn’t settle a different question: what, exactly, should the representative body do with its hands? What work should it perform directly, and what work should it merely supervise?

There’s a fundamental difference between:

  • controlling the business of government, and
  • actually doing the business of government.

A person—or a body—can be positioned to control everything, yet be totally incapable of doing everything. In fact, in many cases, control becomes stronger when the controlling body does less of the hands-on work itself. Think of an army commander. If he’s down in the trenches swinging a bayonet, he can’t effectively direct a campaign. The same logic applies to groups. Some tasks can only be done well by a collective body; other tasks are made worse the moment a crowd tries to perform them.

So we have two separate questions:

  1. What should a popular assembly have the power to control?
  2. What should it personally do?

It should control all the operations of government in the last resort. But to decide how that control should work day to day—and which tasks the assembly should keep in its own hands—you have to ask a practical question: What kinds of work can a large body actually do well?

The assembly should personally take on only what it is competent to do well. For everything else, its proper job is not to perform the work, but to make sure the work is done well by others.

Consider the duty most people associate with an elected assembly: voting taxes. Even so, no country expects the representative body to draft the detailed budget estimates by itself, whether directly or through officers it commands. In Britain, while the House of Commons alone can vote “supply,” and its approval is required to allocate revenue to specific categories of spending, the settled constitutional rule is that money can be granted only on the proposal of the Crown—that is, on the initiative of the executive government.

Why would a representative system adopt a rule that seems, at first glance, to tie Parliament’s hands?

Because it reflects a realistic view of responsibility. You’re more likely to get moderation in totals and careful judgment in details when the executive—the machinery that will actually spend the money—is made answerable for the plans, assumptions, and calculations behind that spending. As a result, Parliament is not expected—and is not even allowed—to originate taxation or expenditure proposals directly. It is asked for consent. Its one decisive power in this domain is the power to say no.

If you take the principles behind that arrangement seriously, they point toward broader rules about what representative assemblies should and should not do. The first is simple and widely accepted wherever the representative system is understood in practice:

Large representative bodies should not administer.

That isn’t just a political theory preference; it’s basic business sense. Action—real, coordinated action—requires organization and command. A group of people, unless structured under leadership, is not fit for execution in the strict meaning of the word. Even a small committee of specialists is usually a worse instrument than a single capable person chosen from among them, made chief, and backed by subordinates. What groups do better than individuals isn’t execution. It’s deliberation.

When you need conflicting views to be heard and weighed—when it matters that many perspectives get real consideration—a deliberative body is invaluable. That’s why councils and boards can be useful even in administrative matters. But typically they’re most useful as advisers, because administration is usually best done under the clear responsibility of one person.

Even a joint-stock company, whatever its formal theory, tends in practice to revolve around a managing director. The quality of management depends mainly on one person’s competence. Other directors, when they add value, do so by offering suggestions, watching the manager, and restraining or removing him if he misbehaves. Pretending they are equal partners in day-to-day management isn’t a benefit. It’s a cost. It weakens the sense—both in the leader’s mind and in the public’s—that there is one identifiable person who stands plainly responsible.

A popular assembly is even less suited to administration—or to issuing detailed orders to administrators. Even when it acts in good faith, its interference almost always does harm.

Public administration is skilled work. Each department has its own principles and its own stock of hard-earned rules, and many of those rules are not truly understood—at least not in a usable way—except by people who have actually done the work. That doesn’t mean government is mystical, reserved for some priesthood. The principles of administration are understandable to any sensible person if they have an accurate picture of the real conditions they’re dealing with. But that picture doesn’t appear by intuition. You get it through experience, study, and exposure to the concrete constraints of the job.

Every field—public or private—contains rules that matter enormously, yet a newcomer doesn’t even know exist. They were invented to prevent dangers or nuisances that wouldn’t occur to someone seeing the system from the outside. I’ve watched talented public men—ministers with more than average intelligence—walk into a department for the first time and confidently announce, as if they had discovered a neglected truth, something that was probably the first thought of everyone who ever considered the subject… and which everyone abandoned once they made it to thought number two. Their subordinates laughed, not because the minister was stupid, but because the minister was new.

Yes, a great statesman knows when to break with tradition and when to follow it. But it’s a serious mistake to think he’ll do that better by being ignorant of tradition. You can’t judge when an exception is necessary unless you understand what ordinary practice is and why experience has endorsed it.

The stakes in administrative decisions—who is affected, what consequences flow from one method rather than another—often require a specialized kind of judgment. It’s rare in people who haven’t been trained in the work, just as it’s rare to find real law-reforming skill in people who have never studied law professionally. A representative assembly that tries to decide particular administrative acts is almost guaranteed to ignore these realities. At best, it becomes inexperience judging experience, ignorance judging knowledge—and ignorance that doesn’t even realize what it doesn’t know tends to be both careless and contemptuous. It dismisses, and sometimes resents, the idea that anyone else might have a judgment more worth hearing.

That’s the picture even when nobody has a selfish motive.

When self-interest enters, things get uglier. Then you get jobbery—favor-trading and opportunism—more brazen than the worst corruption that can occur inside a public office operating under the glare of publicity. And you don’t need a corrupt majority for this to happen. In many cases, it’s enough that two or three members have a strong personal or factional interest. Those few will have far more incentive to mislead the rest than the rest will have incentive, or ability, to correct them. Most members may keep their hands clean, yet they cannot keep their minds alert and their judgments sharp in technical matters they don’t understand. And a lazy majority—like a lazy individual—belongs to whoever works hardest to steer it.

Parliament can sometimes restrain a minister’s bad policies or bad appointments because ministers have to defend themselves and opponents are eager to attack them. That creates something like balance in argument. But the harder question remains: who watches the watchers? Who checks Parliament itself?

A minister, or a department head, feels at least some responsibility. An assembly, in detailed administrative matters, feels almost none. When did an MP lose his seat because of a vote on some specific administrative detail? A minister also has to worry about how his actions will look later, after consequences have unfolded. An assembly, by contrast, often treats the applause of the moment as a full acquittal—especially if the outcry was sudden, easily stirred up, or deliberately manufactured. And an assembly doesn’t personally feel the inconvenience of its bad decisions until they swell into national disasters. Ministers and administrators, meanwhile, see the trouble coming and have to endure the daily grind of trying to prevent it.

So what is a representative assembly supposed to do about administration?

Not to decide administrative details by its own votes, but to ensure that the people who decide them are the right people.

Even then, the assembly shouldn’t generally do this by naming individuals itself. Few actions demand a stronger sense of personal responsibility than appointing someone to a job. And experience shows that the average conscience is strangely dull here. People are often least sensitive precisely where they should be most careful. They pay little attention to qualifications—partly because they don’t know the differences, and partly because they don’t care enough to learn them.

When a minister makes what is intended as an honest appointment—meaning he isn’t openly handing the post to a relative or a party crony—an outsider might assume he will pick the most qualified person. Usually, he won’t. The typical minister considers himself extraordinarily virtuous if he chooses someone of “merit” or someone who has some public claim, even if that merit or claim has nothing to do with the work. It’s the old story: they needed an accountant, so they hired a dancer—and the minister feels not merely innocent but admirable, provided the dancer dances well.

And there’s a deeper reason for this failure: recognizing the qualifications needed for specific tasks requires knowledge of the individuals, or at least a deliberate practice of evaluating people based on their work and reliable testimony from those positioned to judge. If even great officials—who can, in principle, be held responsible—treat these obligations lightly, how much worse will it be for assemblies that cannot be made responsible in any focused way?

Even now, some of the worst appointments happen because they’re meant to buy support or neutralize opposition inside the representative body. Imagine how much worse it would be if the body itself made the appointments.

Large groups, as a rule, don’t seriously weigh special qualifications at all. Unless someone is plainly unfit in the most extreme sense, he is treated as being about as suitable as anyone else for almost any post he wants. And when appointments by a public body aren’t decided by party ties or private favoritism (as they almost always are), they tend to go to someone with a general reputation (often undeserved) for “ability,” or, just as often, for no better reason than personal popularity.

For that reason, it has never been considered wise for Parliament to nominate even the members of a Cabinet. It’s enough that Parliament, in practice, decides who will be prime minister—or, at most, narrows the field to the two or three people from whom the prime minister will be chosen.

In a system like this, a vote isn’t really the legislature “choosing” a specific person. It’s acknowledging something simpler: this person is the candidate of the party whose overall program the assembly is prepared to back. In practice, Parliament decides only one thing—which party (or, at most, which small set of parties) will supply the executive government. Once that decision is made, the party itself—through its internal judgment and politics—decides which of its members is most fit to lead.

Under the working habits of the British Constitution, this arrangement is about as sound as it can be. Parliament doesn’t formally appoint ministers. Instead:

  • The Crown appoints the head of the administration in line with the general wishes Parliament has clearly shown.
  • The rest of the ministers are appointed on the recommendation of that chief.
  • And each minister carries the full moral responsibility to appoint capable people to the non-permanent offices under their control.

In a republic you’d need different machinery. But the closer a republic can come, in real day-to-day practice, to this English pattern, the better it’s likely to run. There are basically two workable routes:

  • As in the United States, the chief executive is elected through a process independent of the representative body.
  • Or the representative body limits itself to naming a prime minister, and then holds that person responsible for choosing colleagues and subordinates.

In theory, most people will agree with this. In practice, representative bodies have a strong pull in the opposite direction: they keep trying to meddle more and more in administrative details. The reason is predictable—the more power you have, the more you’re tempted to overuse it. And that temptation is one of the real, future dangers facing representative government.

But there’s another truth that’s only recently—slowly—been admitted: a large assembly is just as poorly suited for the direct work of making laws as it is for running administration.

Law-making is one of the most demanding kinds of intellectual labor. It isn’t just about having opinions. It requires experience, practice, and long, disciplined study. That alone is enough to show why laws can rarely be written well except by a small committee.

There’s a second reason, just as decisive: every clause in a law needs to be drafted with a clear, far-seeing understanding of how it interacts with every other clause. And the finished law must fit coherently into the entire body of existing law. You cannot meet those conditions when a mixed crowd votes on legislation clause by clause, improvising as it goes.

This would be obvious to everyone if our laws weren’t already, in structure and drafting, such a mess that people assume they can’t be made much worse. But even with that low bar, the system’s unfitness becomes more visible every year.

Start with time. Simply getting bills through the process consumes so many hours that Parliament becomes less and less able to pass anything except small, disconnected measures. And if someone drafts a bill that tries to deal with an entire subject—something you must do if you want to legislate properly on any part of it—it often drags from session to session because there just isn’t time to finish it.

It doesn’t even matter if the bill is excellent. It might be carefully prepared by the person everyone agrees is most qualified, or by a select commission of experts who spent years studying and digesting the measure. Still, it may not pass—because the House of Commons won’t give up what it treats as a precious privilege: to tinker with the bill using its own clumsy hands.

Recently, Parliament has sometimes tried a partial fix. After a bill’s principle is affirmed on the second reading, it’s referred in detail to a Select Committee. But this rarely saves much time later, because the ideas the expert committee rejected are almost always pressed again before the larger, less expert House. In fact, this committee practice has been adopted more readily by the House of Lords (whose members are generally less busy, less eager to meddle, and less jealous of the importance of each individual voice) than by the elected House.

And when a long, complex bill does get discussed in detail, the result is often a wreck. You can almost predict the damage:

  • Essential clauses get dropped, breaking how the remaining parts are supposed to work.
  • Odd, mismatched clauses get inserted to appease a private interest—or to buy off some eccentric member threatening to delay the bill.
  • Provisions get shoved in by someone with only a surface-level grasp of the subject, triggering consequences no one anticipated at the time.
  • Then, almost inevitably, the next session needs an amending act to clean up the mess.

One especially harmful feature of this whole method is that the job of explaining and defending a bill—its overall logic and its specific provisions—is rarely done by the person who actually conceived and drafted it, who often doesn’t even sit in the House. Instead, the defense falls to a minister or MP who didn’t write it and has to “cram” for most of the arguments. That person doesn’t know the strongest version of the case, doesn’t know the best reasons to offer, and is often helpless when objections arise that weren’t anticipated.

For government bills, this problem can be fixed—and in some representative constitutions it has been—by allowing the government to be represented in either House by trusted people who have the right to speak, even if they don’t have the right to vote.

Now imagine something else: suppose the still-large majority in the House of Commons (the members who never want to propose amendments or make speeches) stopped leaving the whole management of business to the minority who do. Suppose they reminded themselves that skill in legislation does not equal a smooth speaking voice, or the talent of winning an election. If they did, it would quickly become obvious that, in legislation as in administration, the only task a representative assembly can reliably perform is not to do the work itself, but to make sure the work gets done well.

That means a representative assembly should focus on three responsibilities:

  • Decide who (or what kind of people) should do the drafting work.
  • Give or withhold national sanction once the work is done.
  • Hold the process accountable without trying to do the technical labor itself.

Any government suited to a highly civilized society, in my view, would treat as a fundamental element a small body—no larger than a Cabinet—serving as a Legislative Commission whose appointed job is to draft laws.

And if our laws are revised—as they surely will be—and reorganized into a connected, coherent system, then the Commission of Codification that performs that work should not disappear afterward. It should remain permanently to guard the system against decay and to propose improvements as often as needed.

No one should want this body to enact laws on its own. The Commission would supply the element of intelligence in construction; Parliament would supply the element of will. Nothing would become law without Parliament’s explicit approval. Parliament (or either House) would be able not only to reject a bill, but also to send it back to the Commission for reconsideration or improvement. Either House could also take the initiative by referring a subject to the Commission with instructions to draft a law.

Of course, the Commission would not be free to refuse to help with legislation the country wants. If both Houses agreed on instructions to draft a bill to accomplish a particular purpose, those instructions would be binding—unless the Commissioners chose to resign.

But once the Commission produced a draft, Parliament should not be able to rewrite it clause-by-clause. Parliament’s power should be limited to pass or reject—or, if it dislikes parts, send it back for revision.

As for appointments: the Crown would appoint the Commissioners, but they would serve for a fixed term—say five years—unless removed by an address from both Houses, based either on personal misconduct (as with judges) or on refusing to draft a bill in obedience to Parliament’s demands. At the end of the term, a Commissioner would step down unless reappointed, creating an easy way to retire those who proved unequal to the work and to bring in younger, more capable minds.

This kind of arrangement isn’t just a modern idea. Even the Athenian democracy felt the need for something like it. At its height, the popular assembly (the Ecclesia) could pass psephisms, mostly decrees on particular matters of policy. But “laws,” properly speaking, could be made or changed only by a different and smaller body, renewed each year, called the Nomothetae. Their job included revising the whole body of laws and keeping the parts consistent with one another.

In England, it’s hard to introduce something that is completely new in both form and substance. But it’s often possible to reach new goals by adapting existing institutions and traditions. That’s why I think we could enrich the Constitution with this improvement by using the machinery of the House of Lords.

A commission to prepare bills would be no more alien to our Constitution than the Board administering the Poor Laws or the Inclosure Commission. And if, in recognition of the dignity of the trust, it became a rule that every person appointed to the Legislative Commission—unless removed by address from Parliament—would be made a peer for life, then a familiar pattern would likely repeat itself. Just as good sense has left the judicial functions of the peerage largely to the law lords, so it would likely leave the craft of legislation—except where major political principles or interests are at stake—to professional law-drafters.

Under such a system:

  • Bills originating in the Upper House would always be drafted by the Commission.
  • The government would naturally hand over the framing of all its bills to them.
  • Private members of the House of Commons would gradually find it easier to advance their measures by obtaining leave to introduce them and refer them to the Commission—rather than dropping a draft straight into the House and forcing it through the current chaos.

And nothing would prevent the Commons from referring not just a topic, but a specific proposal—or even a full draft bill—if a member believed they could produce one worth passing. The House would almost certainly refer such drafts to the Commission anyway, if only as raw material and as a source of useful suggestions. For the same reason, written amendments or objections proposed after a bill left the Commission’s hands could be referred back for evaluation.

In time, the practice of rewriting bills in Committee of the whole House would fade away—not because it was formally abolished, but because it would fall into disuse. The right wouldn’t be surrendered; it would be stored, like other old weapons of constitutional conflict—the royal veto, the right to withhold supplies, and similar instruments. Nobody wants to use them, but nobody wants to throw them away either, in case an extraordinary emergency ever makes them necessary.

With arrangements like these, legislation would take its proper shape: skilled labor, grounded in special study and experience. And the nation’s most important liberty—being governed only by laws assented to by its elected representatives—would be fully preserved. In fact, it would be strengthened, because it would no longer be tangled up with the serious (and entirely avoidable) drawback we currently accept: ignorant and ill-considered lawmaking.

So if a representative assembly shouldn’t try to govern directly—and shouldn’t try to draft laws directly—what should it do?

Its proper office is to watch and control the government:

  • Shine the harsh light of publicity on what the government does.
  • Demand a full explanation and justification of any act that someone thinks is questionable.
  • Censure actions that deserve condemnation.
  • And if the people in power abuse their trust, or govern in a way that clashes with the nation’s settled judgment, remove them from office and—explicitly or implicitly—install their successors.

That is an immense power. And it’s enough to protect a nation’s liberty.

But Parliament has another function that’s at least as important: it should be, at the same time, the nation’s Committee of Grievances and its Congress of Opinions. It is the arena where the country’s general opinion—and the opinions of every section of it, and as many of its outstanding individuals as possible—can present itself openly and be tested in debate.

In a good representative chamber:

  • Every person in the country can expect that someone will voice their view, as well as—or better than—they could themselves.
  • Those views are expressed not only to friends and allies, but directly in front of opponents, where they must endure cross-examination.
  • People who lose a vote can still feel that their position was genuinely heard, and set aside not by raw power but by what the majority considers better reasons.
  • Every party can take a clear measure of its true strength—and be cured of fantasies about how many followers it has.
  • The prevailing national opinion becomes visible as prevailing, organizing itself in the government’s presence.
  • The government is therefore enabled—and compelled—to yield to that opinion as soon as its strength is clearly shown, without needing the country to “use force” in any literal sense.
  • Statesmen can read, more reliably than by almost any other signal, which currents of opinion and power are rising and which are fading, and shape policy with an eye not only to today’s emergencies but to longer trends already unfolding.

Enemies of representative assemblies love to mock them as places of mere talk. That ridicule is badly misplaced. What better use can a representative body make of itself than talk—when the talk is about the nation’s public interests, and when every sentence expresses either the view of some significant portion of the country or of a trusted individual speaking on its behalf?

A chamber where every interest and shade of opinion can make its case—even passionately—in the face of the government and of every rival interest; a chamber that can force the government to listen, and either comply or explain clearly why it won’t—this, even if it served no other purpose, would still be among the most important political institutions imaginable, and one of the greatest benefits of free government.

This kind of “talk” would never be despised if it weren’t allowed to crowd out “doing.” And it wouldn’t—if assemblies understood and admitted that discussion is their job, while execution—turning decisions into workable administration and coherent law—is the job of trained individuals.

The assembly’s role is to make sure those individuals are chosen honestly and intelligently, and then to hold them to account—not by micromanaging their work, but by giving them wide room for suggestion and criticism, and by granting or withholding the final seal of national assent.

When assemblies lack this disciplined restraint, they try to do what they cannot do well: govern and legislate directly. And because they build no machinery beyond themselves for much of that work, every hour spent in debate becomes, inevitably, an hour pulled away from the actual business.

What makes a representative assembly bad at writing laws is exactly what can make it excellent at its other job.

A legislature made up of elected representatives usually isn’t a handpicked team of the country’s sharpest political thinkers. And even if it were, you still couldn’t reliably read “the nation’s mind” from what a small set of brilliant people believe. When the system is set up well, a representative body is something else: a miniature portrait of the public—a mix of abilities, experiences, and ways of thinking, drawn from everyone who legitimately deserves a say in public affairs.

That’s why its core purpose isn’t to micromanage policy details or draft technical statutes. Its proper functions are these:

  • Name what people need. Bring problems to the surface and keep them from being ignored.
  • Speak public demands out loud. Act as the channel through which the country presses its claims on government.
  • Host real disagreement in public. Provide a place where every view about public matters—big or small—gets argued, challenged, and tested.
  • Hold the executives to account. Criticize the officials who actually run the machinery of government, and, when necessary, withdraw support from them (or from those they appoint).

Only by keeping representative bodies within these rational limits can a society get the best of popular control without sacrificing something just as essential—competent lawmaking and administration. And that second requirement only grows more urgent as society becomes larger, more interconnected, and more complicated.

There’s no magic trick that lets you fully combine both benefits inside the same institution. The workable solution is structural: separate the functions.

  • Put oversight, criticism, and the power to withhold support in the hands of representatives—the voice of the Many.
  • Entrust the actual conduct of affairs to a comparatively small number of people—the Few—who have the training, experience, and practiced judgment the job demands, while keeping them under strict responsibility to the nation.

So far, we’ve been talking about what should belong to the nation’s central representative assembly—the body that, at the highest level, controls lawmaking and the administration of national affairs. The next step would be to ask what functions should belong to smaller representative bodies that deal only with local matters. That question is crucial to this book, and we’ll return to it—but it has to wait until we’ve first worked out what the main representative body should look like, and how it should be composed.

VI — Of The Infirmities And Dangers To Which Representative Government Is Liable

Every system of government can fail in two basic ways: by coming up short (negative defects) or by doing active harm (positive evils).

A government is negatively defective when either:

  • It doesn’t put enough real power in the hands of officials to do the basic jobs of governing—keeping order, protecting rights, and making steady progress possible.
  • It doesn’t do enough to develop its citizens—by giving people practice using their moral judgment, their intelligence, and their capacity to act together as a society.

At this point, we don’t need to spend much time on either. Still, it helps to see where the weak points usually come from.

Negative Defect #1: Not Enough Governing Power

When a government can’t maintain order or support progress, that problem usually has less to do with which constitution you picked and more to do with what kind of society you have. In a rough, unsettled society, people often cling to a fierce, “nobody tells me what to do” independence. If they can’t tolerate the amount of authority they actually need for their own good, the society simply isn’t ready for representative government yet.

But when a society is ready, the main representative body—the sovereign assembly—will inevitably have enough power available for the essential purposes of government. If the executive branch seems weak, it’s usually because the assembly refuses to entrust it with power. And that kind of refusal typically comes from jealousy or suspicion—especially in countries where the assembly’s constitutional ability to remove ministers hasn’t yet become solid and routine.

Once the assembly’s right to dismiss ministers is fully recognized and genuinely effective, the assembly has little reason to fear giving its own ministers whatever authority is truly needed. In fact, the danger flips: the assembly may hand over power too easily, and in terms that are too vague or too broad. Why? Because the minister’s power is ultimately the power of the very body that appoints him and can keep him in office.

Still, one of the classic risks of a “controlling” assembly is a particular kind of inconsistency:

  • It grants authority in big, sweeping chunks.
  • Then it interferes constantly in how that authority is used.
  • It gives power wholesale, then claws it back piece by piece through endless one-off interventions in daily administration.

When a legislature starts trying to govern directly instead of doing its proper job—criticizing, checking, and holding the executive accountable—the result is exactly the set of harms already discussed in the previous chapter. There’s no mechanical constitutional trick that can fully prevent this. The only real safeguard is a strong, widespread public understanding that this kind of meddling is damaging.

Negative Defect #2: Failing to Develop Citizens

The other negative defect—failing to exercise and strengthen the people’s moral, intellectual, and practical faculties—is the signature weakness of despotism. But if we compare one kind of popular government with another, the advantage goes to the system that most broadly spreads real civic participation.

That happens in two ways:

  • Broad suffrage: exclude as few people as possible from voting.
  • Broad participation in public work: as far as it can be done without sacrificing other equally important goals, open up the details of public business to ordinary citizens across classes—through things like jury service, local and municipal offices, and, above all, maximum publicity and freedom of discussion.

With open discussion, something important happens: not just a rotating set of leaders, but the public as a whole becomes, to a degree, a participant in government—and gains the education and mental training that comes from that participation. The full benefits of this, and the limits that must be respected, will be easier to explain later when we get to administration in detail.


Now for the positive dangers—the harms representative government can actively produce. These fall into two main categories:

  1. Insufficient competence in the controlling body—not necessarily stupidity, but a shortage of the mental qualities needed to judge public questions well.
  2. Misaligned interests—the controlling body may be driven by interests that don’t match the general welfare.

Many people assume that popular government is especially vulnerable to low political capacity. The story goes like this: a single monarch brings energy, an aristocracy brings steadiness and prudence, while democracy brings flip-flopping and short-term thinking.

But that contrast isn’t as solid as it looks.

Representative Government vs. Monarchy

Compared with a simple monarchy, representative government has no disadvantage here. Outside of a crude “rough-and-ready” era, hereditary monarchy—when it truly is monarchy and not aristocracy wearing a crown—often produces the very kinds of incapacity people like to blame on democracy, and to a greater extent.

Why the exception for rough, unsettled ages? Because in a genuinely turbulent society, the monarch is forced into capability. He constantly runs into resistance from stubborn subjects and powerful rivals. The society doesn’t offer him many chances to rot in luxury. His main sources of excitement are mental and physical activity: politics, war, leadership. And among violent chiefs and lawless followers, he won’t keep his authority, or even his throne, unless he has real personal daring, skill, and energy.

That’s why the “average talent level” among certain famous English kings looks high: the weak ones didn’t merely govern badly; they often ended tragically, and their failures triggered civil wars and upheaval. Likewise, the Reformation era produced several remarkable hereditary monarchs. But many of them were shaped by hardship—raised in adversity, coming to power unexpectedly, or having to fight through enormous early challenges.

Once European life became more settled, something changed. Hereditary kings who rose above mediocrity became rare, and the average often sank even below it, both in talent and in strength of character.

In modern times, a constitutionally absolute monarchy usually survives not because the king is brilliant, but because the state is effectively run by a permanent bureaucracy. Russia and Austria, and even France in its “normal” condition, function largely as oligarchies of officials, with the head of state doing little more than choosing the top administrators—though, of course, the ruler’s will still shapes many particular decisions.

Representative Government vs. Aristocracy

The governments with the most consistently strong talent and drive in managing public affairs have usually been aristocracies. But look closely at which aristocracies those were: they were, in practice, aristocracies of public officials.

The ruling class was so small that each influential member could treat public business as a serious profession—and did. The two standout examples across generations are Rome and Venice.

  • In Venice, even though the privileged order was large, actual power was rigidly concentrated in a small inner oligarchy. Those people devoted their lives to studying and directing state affairs.
  • In Rome, the system was more open, but the real governing body—the Senate—was mainly made up of people who had already held public office or were preparing for high office, with severe consequences for incompetence or failure. Once in the Senate, a man’s life was effectively pledged to public business. He couldn’t even leave Italy except on public duty. And unless expelled for disgraceful conduct, he kept his authority and responsibility for life.

In a structure like that, each member’s sense of personal importance becomes tied to the dignity and reputation of the state he helps run—and to his own performance in its councils.

Here’s the catch: that “dignity and reputation” is not the same thing as the prosperity or happiness of ordinary citizens. It can even conflict with it. But it is tightly linked to the state’s external success and expansion. So it makes sense that Roman and Venetian aristocracies, for all their faults, developed a consistently shrewd collective policy and produced individuals with formidable governing skill—especially when the goal was aggrandizement and power.

So Where Does Sustained Governing Skill Usually Come From? Bureaucracy

If we strip away the romance, the pattern is clear: the only non-representative governments that regularly display high political ability—whether monarchies or aristocracies—have essentially been bureaucracies.

That means: the work of governing is done by people who govern as a profession. Whether they’re professionals because they were trained for the job, or trained because the job is already theirs, matters a great deal in many ways—but it doesn’t change what the system is.

By contrast, aristocracies like England’s—where the ruling class had power mainly because of social status, not because it was specially trained or wholly devoted to public work—have been, in intellectual terms, much like democracies. They show high governing ability in any large measure only in those periods when a single great and popular talent rises to the top.

The great democratic leaders—Themistocles, Pericles, Washington, Jefferson—were exceptions. But the great ministers of Britain’s representative aristocracy, like Chatham and Peel, were also exceptions. Even in an aristocratic monarchy like France, standout ministers were rare. A great minister, in modern European aristocratic governments, is almost as unusual as a great king.

So, when we compare intellectual strength, the meaningful comparison is not “democracy versus monarchy” or “democracy versus aristocracy.” It’s representative democracy versus bureaucracy. Everything else is mostly noise.

What Bureaucracy Does Better—and What It Ruins

A bureaucratic government has real advantages:

  • It accumulates experience.
  • It develops tested traditions and workable rules of thumb.
  • It ensures that people managing public business have appropriate practical knowledge.

But it has a major weakness: it doesn’t nourish independent mental energy. Its chronic disease—often the one it eventually dies from—is routine.

Rules harden into dogma. Methods become fixed. Worse, the system turns mechanical: when work becomes pure routine, it loses its living intelligence. It keeps spinning like a machine even as the real work it exists to do goes undone.

A bureaucracy also tends to slide into a kind of pedant rule—a government of rule-followers. When the bureaucracy truly runs the state, “the spirit of the corps” overwhelms individual excellence. Even highly capable people can be pressured into conformity, as happens in tightly organized bodies.

In government, as in other professions, most people default to “what they were taught.” That means it often takes popular government to let an original, inventive mind break through the obstructive force of trained mediocrity. Without that outside democratic pressure, the system can smother innovation.

The example is telling: only with the backing of popular government could a reformer like Sir Rowland Hill prevail over the Post Office’s resistance. Popular government put him there, and made the institution, against its own instincts, follow the lead of someone who combined expert knowledge with energy and originality.

Rome’s aristocracy avoided bureaucracy’s deadliest routine largely because it contained a popular element. The offices that led to Senate membership, and the offices senators sought, were awarded by popular election. That external pressure helped keep the system from sealing itself off completely.

Modern Russia illustrates both sides of bureaucracy vividly:

  • Its fixed principles, pursued with almost Roman persistence across generations.
  • Its skill in pursuing those principles.
  • Its horrifying internal corruption.
  • Its organized, long-term resistance to improvements from outside—so strong that even a vigorous autocrat rarely defeats it.
  • The long-run victory of patient institutional obstruction over the bursts of energy of a single ruler.

China, as far as outsiders understand it, offers another example: a mandarin bureaucracy showing the same mixture of strengths and defects.

Why You Need Opposing Forces

In human affairs, you need conflicting forces to keep each other alive and effective—even for their own purposes. When a system chases one good goal in isolation, it doesn’t just overshoot; it tends to decay and lose even the thing it cared about.

A government of trained officials can’t do for a country what a free government can do. You might think, though, that it could at least do its own specialized job better than freedom can. But even here, the evidence points the other way: bureaucracy needs an outside element of freedom to do its work well and to keep doing it over time.

And freedom has its own vulnerability. It can’t reliably produce its best effects—and it often collapses—unless it finds a way to combine popular control with trained and skilled administration.

If a people are at all ready for representative government, there’s no real contest between it and even the most perfectly designed bureaucracy. Representative government is better.

But the real goal of political design is more ambitious than choosing one or the other. It is to combine as much of the best of both as compatibility allows:

  • Skilled administration by people trained for it, treating governing as a serious intellectual profession.
  • Real general control exercised by institutions that genuinely represent the entire people.

A major step toward that combination is drawing a clear line—already discussed in the previous chapter—between:

  • The work of governing (which requires specialized cultivation to do well), and
  • The work of selecting, watching, and controlling the governors (which properly belongs to those for whose benefit the governing is done).

No progress toward a “skilled democracy” is possible unless democracy accepts a basic truth: work that requires skill must be done by people who actually have that skill.

Democracy already has plenty to do in mastering its own essential task—supervision and restraint. It must cultivate enough collective competence to judge, monitor, and correct those who administer public affairs.

What Happens When the Representative Body Isn’t Competent Enough

How to obtain and secure that level of competence is one of the key questions when designing a representative assembly. When an assembly’s composition fails to provide enough mental fitness for its supervisory role, predictable failures follow.

The assembly will start trespassing into executive work through special, piecemeal interventions. It will:

  • Drive out good ministries or elevate and protect bad ones.
  • Wink at abuses of trust—or fail to notice them.
  • Fall for false pretenses—or refuse support to ministers who are honestly trying to do their duty.
  • Endorse or impose policies that are selfish, impulsive, shortsighted, ignorant, and prejudiced—at home and abroad.
  • Repeal good laws or pass bad ones; open the door to new evils or cling stubbornly to old ones.
  • And, under misleading impulses—temporary or lasting, originating in the assembly or in its constituents—perhaps even tolerate or connive at actions that set aside the law entirely in cases where equal justice would offend popular feeling.

Such are some of the risks baked into representative government when the system doesn’t reliably put enough knowledge and intelligence in the legislature.

Now comes a different set of problems: the damage caused when a representative body develops habits of action driven by what Bentham memorably called sinister interests—interests that clash, in one way or another, with the public good.

We already recognize this pattern in monarchies and aristocracies. A huge share of their worst behavior flows from exactly this: the ruler’s interest, or the ruling class’s interest (either as a group or as individuals), lines up—at least in their own minds—with doing things the community as a whole would not choose.

Here’s what that looks like in practice:

  • The government’s interest is to tax heavily; the public’s interest is to be taxed as little as good government can allow.
  • A king or aristocracy’s interest is to hold unlimited power and demand everyone conform to the rulers’ will; the people’s interest is to live under as little control as is compatible with legitimate government.
  • A ruler’s (real or imagined) interest is to block criticism, especially criticism that could threaten power or constrain freedom of action; the people’s interest is to have full freedom to criticize every public official and every public act.
  • A ruling class’s interest is to pile up unfair privileges—sometimes to grab money from the public, sometimes simply to elevate themselves by pushing others down.

And if the public is restless—which under that kind of regime is likely—those in power often see it as “smart” to keep people less educated, to stir up divisions, and even to keep them from becoming too comfortable, because prosperity can make people harder to manage. That’s the ugly logic behind the old Richelieu-style maxim: don’t let the people “grow fat and kick.”

From a narrowly selfish viewpoint, all of this “makes sense” for kings and aristocracies unless a strong counter-pressure exists—mainly, fear of resistance. And historically, where rulers have been strong enough to float above public opinion, these sinister interests have produced exactly these results. Given that position, it would be irrational to expect consistently better behavior.

So far, none of this is controversial. What’s more questionable is the easy assumption some people make next: that democracy, by contrast, is somehow immune.

If you think of democracy in the common way—as rule by the numerical majority—then it can absolutely be captured by sectional or class interests that point toward policies very different from what an impartial concern for everyone would recommend.

Run a few simple thought experiments:

  • If the majority is white and the minority is Black (or the reverse), is the majority likely to deliver equal justice to the minority?
  • If the majority is Catholic and the minority Protestant (or the reverse), doesn’t the same risk show up?
  • If the majority is English and the minority Irish (or the reverse), isn’t similar injustice plausible?

And beyond identity and religion, every society contains an even more basic split: a majority who are poor and a minority who are rich. On many questions, these groups have what looks like a direct conflict of interest.

Suppose, for the sake of argument, that the poor majority is intelligent enough to understand one important truth: undermining property rights ultimately hurts them, and arbitrary seizure would weaken the security of property. Even granting that, the danger doesn’t disappear. There is still a serious risk that they would:

  • shift an unfair share—or even all—of taxation onto people with “realized property” and larger incomes,
  • then raise that load further without hesitation,
  • and spend the revenue in ways believed to benefit the laboring class.

Or take another divide: a minority of skilled workers and a majority of unskilled workers. Experience with many trade unions (assuming the reports aren’t wildly unfair) supports a worry here too: the majority might try to impose equality of earnings as a rule and stamp out the arrangements that let higher skill and effort earn higher pay—piecework, hourly pay, and anything else that lets superior ability translate into superior reward.

From the same class-interest perspective, you can easily imagine political pushes for:

  • laws meant to raise wages,
  • limits on competition in the labor market,
  • taxes on or restrictions against machinery and improvements that reduce demand for existing labor,
  • perhaps even protectionism to shield domestic producers from foreign competition.

I’m not claiming these outcomes are inevitable. I’m saying they are natural expressions of class feeling when the governing majority consists of manual laborers.

At this point someone will object: “But those policies aren’t really in the long-term interest of the largest class.” Fair. But notice what that objection quietly assumes—that people in power will reliably act on their real, ultimate interest, not on what seems beneficial right now.

If that were how human beings behaved, monarchy and oligarchy wouldn’t be as terrible as they often are. After all, it’s not hard to argue that a king or a ruling senate would enjoy the best life when they govern justly and vigilantly over a people who are active, prosperous, educated, and high-minded. In that scenario, the rulers get stability, admiration, and the comforts of a thriving country.

And yet, kings only occasionally, and oligarchies essentially never, have consistently taken that elevated view of self-interest. So why should we expect something more noble—more clear-sighted and self-denying—simply because the people in power are now the laboring classes?

The crucial point isn’t what someone’s interest “really” is. It’s what they think it is. Any theory of government collapses if it assumes that the numerical majority will routinely do what almost no holder of power is expected to do except in rare cases: act against their immediate, obvious interest in the name of their ultimate interest.

In fact, many of the harmful measures listed above—and many others just as bad—could plausibly serve the immediate interest of unskilled laborers as a group. They might even benefit the selfish interest of the current generation of that class. The long-run damage—less industry and initiative, less saving, weaker incentives—might not bite hard within a single lifetime. Some of the most disastrous political transformations have been immediately comforting on the surface.

A striking example is Rome. The rise of the Caesars’ despotism brought real short-term relief to the generation that lived through it. It ended civil war. It cut down on the corruption and cruelty of provincial governors. It encouraged refinements of culture and intellectual work—so long as that work wasn’t political. And it produced dazzling monuments of literature that can mislead shallow readers into thinking the regime must have been a triumph, forgetting a key fact: many of the writers and thinkers whose work made Augustus’s age “brilliant” were formed in the freer generation that came before. The same is true, in their own contexts, of Lorenzo de’ Medici and Louis XIV.

For a time, the empire also lived off stored-up capital: the wealth, energy, and mental momentum built across centuries of freedom didn’t vanish overnight. The first generation of the newly powerless still benefited from the old accumulation.

But that momentary glow marked the beginning of a long decline. Over time, the civilization Rome had gained faded almost unnoticed, until an empire that had once held the world in its grip became so militarily ineffective that invaders—people a few legions used to intimidate—could overrun and occupy enormous territories. Christianity’s arrival provided a new moral and intellectual impulse just in time to keep arts and letters from dying out, and to keep humanity from slipping back into what might have been a very long darkness.

This leads to a deeper lesson about “interest” itself. When we say a person or a group acts out of self-interest, it’s a mistake to focus mainly on what an unbiased observer would calculate as best for them. That’s often the least important part.

As Coleridge puts it: the man makes the motive, not the motive the man. What counts as “interest,” in practice, depends less on external circumstances than on what kind of person someone is—on the settled habits of their feelings and thoughts. To know what someone will actually treat as their interest, you need to know what they habitually care about.

Everyone carries at least two sets of “interests”:

  • interests they care about, and interests they don’t care about;
  • selfish interests and unselfish interests;
  • present interests and distant interests.

A selfish person has trained himself to care about the first kind and to ignore the second. An improvident person is someone whose mind clings to the present and can’t bring itself to care about the future, even if the future matters more on any correct calculation. If their habits fix their wishes on what is immediate, the larger, later benefits may as well not exist.

That’s why preaching “long-term happiness” so often fails. You can’t reason a violent, domineering man out of beating his wife or mistreating his children by telling him he’d be happier living in love and kindness. He would be happier, if he were the kind of person who could enjoy that life. But he isn’t, and it may be too late for him to become that kind of person. Given who he is, the thrill of domination and the release of a brutal temper feel, to him, like greater goods than any affection he doesn’t value. He takes no pleasure in his family’s happiness and doesn’t care for their love. If you try to convince him that a gentler neighbor is happier, you probably won’t reform him; you’ll just irritate him and sharpen his spite.

On average, someone who genuinely cares about other people—about their country, about humanity—is happier than someone who cares only about comfort or money. But what good is that sermon to a person who can’t feel anything beyond their own ease or pocket? They can’t care just because they’re told they should. It’s like lecturing a worm on the advantages of being an eagle.

Now connect that to politics. There are two especially dangerous tendencies:

  1. preferring one’s selfish interests over the interests one shares with others, and
  2. preferring one’s immediate, direct interests over interests that are indirect, distant, or long-term.

And these two tendencies are famously strengthened by one thing above all: power.

As soon as a person—or a class—finds power in their hands, their private interest, or their group’s separate interest, suddenly feels vastly more important. People treat them with reverence, and they begin revering themselves. They start to believe they count for a hundred times more than other human beings. At the same time, the ease with which they can do what they want—without feeling consequences right away—quietly erodes the mental habits that make people anticipate consequences even for themselves.

That’s what the old saying “power corrupts” really means. It’s not mystical. It’s a generalization grounded in everyday experience.

And it would be absurd to infer from how someone behaves in private life how they will behave as an unchecked ruler. Put a man on a throne, surround him with flatterers, and remove restraints, and the worst parts of his nature—previously kept down by circumstance and by other people—get encouraged and fed. The same logic applies to groups. It would be just as foolish to think that “the people” (the demos) will remain modest and reasonable once they become the strongest power, simply because they were modest and reasonable while someone stronger held them in check.

Government has to be designed for human beings as they are—or as they can become in the near future. And at any level of development humanity has yet reached, or is likely soon to reach, people who are thinking mainly in terms of self-interest will be guided almost entirely by interests that are obvious at first glance and tied to their present condition.

What reliably draws groups toward distant or less obvious interests is something different: a disinterested concern for others, especially for those who come after them—posterity, the country as a continuing project, humanity as a whole—whether that concern comes from sympathy or from conscience. But no system of government is rational if it requires these high-minded motives to be the dominant, controlling forces in the conduct of average people.

Yes, a community mature enough for representative government can be expected to contain a reasonable amount of conscience and public spirit. But it would be laughable to expect so much of it—plus such fine intellectual judgment—that it would be immune to every attractive argument that makes class advantage look like justice and the general good.

We all know how convincing sophistries can be when they promise the “benefit of the many.” Plenty of people who aren’t fools or villains have believed it was justifiable to repudiate a national debt. Plenty of able, influential people think it fair to load all taxation onto savings (rebranded as “realized property”), while letting those who, like their parents before them, spent everything they earned go untaxed, as a kind of reward for their “virtuous” wastefulness.

We also know how persuasive arguments can be—more dangerous precisely because they contain some truth—against inheritance, against the power to leave property by will, and against almost any advantage one person seems to have over another.

And we know something else: how easily people can be talked into dismissing whole domains of knowledge as pointless. To those who don’t have it, almost every branch of learning can be “proved” useless: language study, ancient literature, scholarship of any kind, logic, metaphysics, poetry, the fine arts, political economy—some capable minds have even called history useless or harmful. If people felt even slightly encouraged to disbelieve in these things, the only knowledge that would reliably keep its status is the kind of hands-on familiarity with nature that directly helps produce necessities or sensory pleasures.

So ask yourself: is it reasonable to expect that even minds more cultivated than the average majority will be—once they hold power—so scrupulously fair, so delicately conscientious, and so willing to see what cuts against their own apparent interest, that they will reject these and countless other fallacies pressing in from every direction? Fallacies designed to make them follow selfish impulses and short-sighted ideas of “their good,” even when that means injustice—at the expense of other classes and of the future?

This is why one of the greatest dangers of democracy—just like every other form of government—is the sinister interest of those who hold power. In democratic form, it shows up as class legislation: government aimed (whether it truly succeeds or not) at the immediate advantage of the dominant class, with lasting harm to society as a whole.

And that brings us to one of the most urgent design questions for any representative constitution: how do you build real, effective safeguards against this danger?

If you treat a “class,” politically, as any group of people who share the same sinister interest—meaning their obvious, immediate self-interest pushes them toward the same kind of harmful policies—then the goal is straightforward: no single class, and no likely coalition of classes, should be able to dominate the government.

In a modern society that isn’t split apart by intense hostility over race, language, or nationality, you can usually (with plenty of exceptions at the edges) divide the public into two big camps whose apparent interests tend to pull in opposite directions. Call them, in broad strokes:

  • Workers (people who earn their living by labor), and
  • Employers of labor (people who hire labor).

But those labels need unpacking.

On the “employer” side, you should include more than active business owners. You also need to count:

  • retired capitalists,
  • people living on inherited wealth, and
  • highly paid workers—like many professionals—whose education and lifestyle align them with the wealthy, and whose ambitions often point toward joining that class.

On the “worker” side, you should include not only manual laborers, but also many small-scale employers who, by their interests, habits, and educational influences, feel and think more like workers than like the rich—this includes a large share of small shopkeepers and petty traders.

Now imagine the representative system could be made ideally perfect, and—just as importantly—kept that way. In that case, the system should be arranged so these two broad groups balance each other: workers and their allies on one side, employers and their allies on the other, each shaping roughly an equal number of parliamentary votes.

Why? Because if you assume that, when these groups clash, most people in each group will mostly follow their class interest, then the real safeguard comes from the minority within each class who won’t. In every camp there are some who put reason, justice, and the common good above their immediate advantage. If the system is balanced, that principled minority on either side can join forces with the other camp and defeat demands from “their own” majority when those demands shouldn’t win.

This is also why, in any decently organized society, justice and the general interest often win out in the end. It’s not because people are saints. It’s because selfish interests rarely line up perfectly. Some people have a private stake in what’s wrong—but others, for their own reasons, have a private stake in what’s right. The people motivated by higher principles are usually too few to overpower everyone else directly, but after enough public debate and pressure, they can often become decisive by tipping the outcome toward the side whose private interests happen to match what’s just.

A well-designed representative system should preserve that dynamic. It shouldn’t let any one sectional interest grow so strong that it can steamroll truth and justice plus all the other interests combined. There should always be enough balance among competing interests that any faction, to win, must carry with it at least a substantial share of people who act from broader motives and longer-range views.

VII — Of True And False Democracy; Representation Of All, And Representation Of The Majority Only

Representative democracy runs into two big hazards.

  • First, the people who sit in the legislature (and the public opinion steering them) can sink to a low level of knowledge and judgment.
  • Second, a numerical majority that mostly comes from one social class can use the lawmaking power for class legislation—rules that quietly (or not so quietly) tilt benefits toward “us” at the expense of “them.”

The question is whether we can design a democracy that keeps what’s best about democratic government—public control, accountability, legitimacy—while shrinking those two dangers as much as human ingenuity can manage.

A familiar answer is: limit the vote. Give the franchise only to some portion of the public, and you’ve supposedly restrained the democratic “excess.” But there’s a crucial point people often miss, and it changes the whole debate: many of the worst effects we blame on “too much democracy” are actually made far worse because most modern democracies aren’t truly equal in the first place. They’re systematically unequal in favor of the dominant group.

Two Very Different Things We Call “Democracy”

People regularly mash together two ideas that should never be confused:

  • True democracy: government of the whole people by the whole people, with everyone equally represented.
  • Democracy as commonly practiced: government of the whole people by a bare majority of the people, with that majority exclusively represented.

The first is what equality means in politics. The second is a kind of privilege—not privilege for a king or an aristocracy, but privilege for the biggest bloc. Under the usual winner-take-all voting method, minorities are not just outvoted; they’re often erased. They don’t merely lose debates. They lose seats.

And that’s the core mistake: people assume there are only two options.

  1. Either the minority gets the same power as the majority (which feels unfair).
  2. Or the minority gets wiped out entirely (which we’ve gotten used to).

Habit makes the second option feel “natural,” even though it’s simply unnecessary.

Being Outvoted Isn’t the Same as Being Unrepresented

In any legislature that actually debates and decides, the minority will be overruled on many votes. That’s normal. In an equal democracy, the majority of voters will usually send a majority of representatives, and those representatives will outvote the minority’s representatives.

But none of that implies the minority should have no representatives at all.

Ask the question plainly: if the majority ought to win, does it follow that the majority must have all the seats and the minority none? Must the minority not even be heard?

A genuinely equal democracy represents groups proportionally:

  • A majority of voters gets a majority of seats.
  • A minority of voters gets a minority of seats.
  • Person for person, the minority’s vote counts as much as the majority’s.

If that isn’t true, you don’t have equal government. You have rule by one part of the people over the rest—a political privilege granted to whoever happens to be the largest group.

This Harms More Than Minorities—and It Can Even Betray “Majority Rule”

It’s obviously unjust to silence people just because there are fewer of them. Equal suffrage means every individual counts equally.

But there’s another twist: a democracy built on winner-take-all representation often fails even at its own supposed goal—“rule by the majority.”

Why? Because it frequently hands power to a majority of the majority, which can easily be a minority of the nation.

Here’s the extreme case that exposes the logic. Imagine:

  • every district election is closely contested,
  • each seat is won by a small local margin,
  • so the legislature, overall, represents only a thin majority of voters.

Now let that legislature pass major laws by a thin majority inside the legislature. What assurance do you have that those laws reflect what most citizens wanted?

Consider how the numbers can break:

  • Nearly half of voters in every district lost—and under winner-take-all, they got zero influence on who represents them.
  • Of the voters who backed winning candidates, a substantial portion might still oppose the specific measures those candidates support.
  • Meanwhile, many winners will vote against those measures, meaning the law passes on a knife edge of shifting coalitions.

It becomes entirely possible—often likely—that a policy “wins” even though most citizens dislike it. What prevailed was the preference of a ruling slice created by electoral machinery, not the genuine preference of the nation.
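
To see how thin that ruling slice can get, here’s a back-of-the-envelope sketch in Python. The 51 percent margins are invented purely for illustration; only the structure of the argument comes from the text.

```python
# Hypothetical worst case: every seat is won 51 to 49, and the bill itself
# passes the chamber with 51 percent of members. Both margins are invented.
voters_behind_winning_members = 0.51   # share of all voters who elected somebody
members_backing_the_bill = 0.51        # share of those members voting for the bill

# Upper bound on the share of all voters whose chosen member backed the bill:
print(voters_behind_winning_members * members_backing_the_bill)   # ~0.26
# ...and, as noted above, even some of that ~26% may personally oppose the measure.
```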

If you truly mean “the majority should prevail,” there’s only one way to make that reliable: every individual must count equally in the final tally of representation. Leave any sizable group out—by design or by the way the system works—and you don’t empower “the majority.” You empower some other minority.

“But Don’t Local Majorities Balance Out?”

A common reply is: “Different places think different things. If you’re a minority here, you’ll be a majority somewhere else. So, overall, every viewpoint gets represented.”

In today’s limited electorate, that’s roughly true—if it weren’t, the gap between Parliament and public opinion would become obvious quickly.

But the argument collapses if the electorate expands dramatically, and especially if it becomes universal. In that case, the majority in nearly every locality would be drawn from the same broad class—manual laborers—and when an issue pits that class against the rest of society, the other classes might fail to secure representation anywhere.

Even now, look at the daily injustice built into the system: in every Parliament, large numbers of voters who want representation simply have no member they voted for. Is it acceptable that entire districts are effectively represented by whoever local machines and patrons select? Large-town constituencies that contain many of the most educated and public-spirited citizens are, to a significant extent, unrepresented or misrepresented.

Winner-take-all also distorts representation within a party:

  • If you’re on the “wrong” side of the local majority, you’re not represented at all.
  • If you’re on the “right” side, you’re often forced to accept the party’s most broadly acceptable nominee—even if that person disagrees with you on everything except the party label.

In some ways, that can be worse than banning minorities from voting outright. If only the majority voted, the majority might at least choose its best, most capable champion. But when victory depends on never splitting the vote, everyone becomes terrified of variety. Voters rally behind the first candidate who wears their party colors, or the one promoted by local leaders. And even when those leaders aren’t acting from personal interest, they still face a ruthless constraint: to avoid internal fractures, they put forward someone who offends no faction strongly.

The result is predictable: the “safe” candidate is often a person with no clear convictions beyond the party password.

This dynamic shows up clearly in U.S. presidential politics. The dominant party often avoids nominating its most impressive or well-known figures, because anyone prominent has, by being visible, irritated some segment of the coalition. A less-known candidate is a “safer bet” for keeping every subgroup on board. So the nominee—even when chosen by the larger party—may reflect only the wishes of the narrow margin that makes the coalition possible.

In practice:

  • Any bloc whose support is needed to win gains a veto over the candidate.
  • The bloc that is most stubborn can force its choice on the rest.
  • And the most stubborn bloc is often the one defending its own narrow interest, not the public good.

So the minority’s voting rights, instead of securing fair influence, can end up serving a perverse function: they help pressure the majority into accepting the candidate preferred by the most timid, the most prejudiced, or the most narrowly self-interested segment of the majority itself.

These Evils Aren’t “The Price of Freedom”

It’s not shocking that many people, while recognizing these problems, treat them as the unavoidable cost of free government. For a long time, that was the default belief among friends of liberty.

But resignation becomes a habit. People get so used to a harm that they stop seeing it as a harm—and then they resist any proposed fix, as if naming the problem creates it.

Still, whether a cure is easy or hard, anyone who genuinely values liberty should feel the weight of these defects and welcome the discovery that they can be reduced or removed.

Most importantly: the near-erasure of minorities is not a necessary consequence of freedom. It isn’t even naturally connected to democracy. It is the opposite of democracy’s first principle: representation in proportion to numbers. If minorities are not adequately represented, you may have the appearance of democracy, but you do not have the real thing.

Partial Fixes: Limited Voting and Cumulative Voting

Once people grasp the problem, they often propose remedies that soften it—though they don’t fully solve it.

One approach is limited voting in multi-member districts:

  • If a district elects three members, allow each voter to cast only two votes.
  • Or go further: allow each voter to cast only one vote.

Either way, a minority that makes up roughly a third of the district could usually secure one of the seats.

Another approach is cumulative voting:

  • Keep the three votes, but allow a voter to place all three on a single candidate.

These ideas are far better than doing nothing, and it’s unfortunate none were widely adopted, because they at least acknowledge the right principle and would have made a fuller reform easier to accept later.
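
For concreteness, here is a minimal toy model of the arithmetic behind these two schemes. It assumes a perfectly cohesive minority backing a single candidate and a majority that splits its votes evenly across three candidates of its own; the district of 9,000 voters and the function name are invented for illustration. Real majorities rarely coordinate that well—which is why the text says a third “usually” suffices even where this worst case says no.

```python
def minority_takes_a_seat(minority, majority, seats=3, votes_each=1, plump=False):
    """Toy worst case: a cohesive minority backs ONE candidate, while the
    majority spreads its votes evenly over `seats` candidates of its own."""
    minority_votes = minority * (votes_each if plump else 1)
    strongest_majority_rival = majority * votes_each / seats
    return minority_votes > strongest_majority_rival

# A hypothetical three-seat district: 9,000 voters, a minority of 3,100.
print(minority_takes_a_seat(3_100, 5_900, votes_each=1))              # one vote each -> True
print(minority_takes_a_seat(3_100, 5_900, votes_each=3, plump=True))  # cumulative    -> True
print(minority_takes_a_seat(3_100, 5_900, votes_each=2))              # two of three  -> False here,
# though in practice the two-vote variant also tends to work, because real
# majorities rarely split their strength this evenly.
```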

But they remain workarounds. They fail in two important ways:

  • Local minorities smaller than about a third still get nothing.
  • Minorities spread across many districts—numerous in total, but not concentrated—remain unrepresented.

True equality in representation requires something stronger: any group of voters, equal in size to the average constituency, should be able to combine across geography and elect a representative together, no matter where those voters live.

Hare’s Plan: Proportional Representation at National Scale

This level of representation seemed impossible until Thomas Hare devised a workable system and even drafted it as a proposed law. Its rare virtue is that it realizes a great principle of government almost perfectly for its central purpose, while incidentally delivering other benefits nearly as important.

The key idea is simple: define a quota—the number of votes needed to win a seat.

  • Compute it by dividing the total number of voters by the number of seats in the House.
  • Any candidate who reaches that quota wins a seat, no matter where those votes come from.

Votes are still cast locally, but voters aren’t trapped inside local choices. Any voter may vote for any candidate standing anywhere in the country. So if none of the local candidates fits, the voter can support someone better—someone they genuinely want to represent them. That alone rescues the practical meaning of the vote for people who would otherwise be politically erased.
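
To put rough numbers on the quota, here’s a minimal sketch. The electorate and House sizes are placeholders invented for illustration, not figures from the text.

```python
total_voters = 1_200_000       # hypothetical national electorate
seats = 600                    # hypothetical size of the House

quota = total_voters // seats  # votes needed to claim one seat
print(quota)                   # 2000 -- gathered from anywhere in the country
```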

Hare’s plan goes further, because it isn’t only “non-local voters” who need help. It also needs to catch the votes of people who did vote locally but lost.

So the system uses a ranked voting paper:

  • You list your first choice, then additional names in the order you prefer them.
  • Your vote counts for only one candidate at a time.
  • If your top choice doesn’t reach the quota, your vote moves to your next choice who still needs it.
  • If your higher choices either can’t reach the quota or don’t need your vote because they already have enough, your vote can still be used to help elect someone further down your list.

To fill the House completely and to keep wildly popular candidates from swallowing nearly all votes, the plan also limits how many votes count for any one winner:

  • A candidate may receive far more votes than the quota, but only the quota counts to elect them.
  • The “extra” votes are not wasted; they transfer to those voters’ next preferred candidates who still need support to reach a quota.

There are several ways to decide exactly which of a popular candidate’s votes are treated as the quota and which are released for transfer. We don’t need the technical details here. The guiding idea is that the elected candidate should keep, above all, the votes of those who would otherwise end up unrepresented. For the remainder, a random draw—if no better method is available—would be an unobjectionable expedient.
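
Pulling the pieces together—the quota, the ranked papers, the transfer of both “defeated” and surplus votes, and the lot-drawing the text allows—here is a minimal sketch of a simplified count of this kind, written in Python. It is an illustration under stated assumptions, not Hare’s exact 1859 procedure: the candidate names and ballots are invented, surplus ballots are chosen by lot, and the weakest candidate is dropped whenever nobody reaches the quota.

```python
import random
from collections import defaultdict

def transferable_vote_count(ballots, seats, seed=0):
    """Simplified single-transferable-vote tally in the spirit of the plan
    described above (not Hare's exact rules).

    ballots: list of rankings, each a list of candidate names, first choice first
    seats:   number of seats to fill
    Returns the candidates elected, in the order they won their seats."""
    rng = random.Random(seed)
    quota = -(-len(ballots) // seats)            # ceiling of total votes / seats
    hopeful = {c for b in ballots for c in b}    # candidates still in the running
    elected = []
    active = [list(b) for b in ballots]          # ballots not yet "used up"

    while hopeful and len(elected) < seats:
        if len(hopeful) <= seats - len(elected): # too few left to exclude anyone
            elected.extend(sorted(hopeful))
            break

        # Place every active ballot on the pile of its highest remaining choice.
        piles = defaultdict(list)
        for b in active:
            prefs = [c for c in b if c in hopeful]
            if prefs:
                piles[prefs[0]].append(b)

        reached = [c for c in hopeful if len(piles[c]) >= quota]
        if reached:
            # Elect the largest pile; only `quota` ballots retire with the
            # winner, and the surplus (picked by lot here, as the text permits)
            # flows on to those voters' later preferences.
            winner = max(reached, key=lambda c: len(piles[c]))
            elected.append(winner)
            hopeful.discard(winner)
            rng.shuffle(piles[winner])
            retired = {id(b) for b in piles[winner][:quota]}
            active = [b for b in active if id(b) not in retired]
        else:
            # Nobody reaches the quota: drop the weakest candidate, and that
            # pile moves on to its next choices on the following pass.
            weakest = min(hopeful, key=lambda c: (len(piles[c]), rng.random()))
            hopeful.discard(weakest)

    return elected

# Ten invented ballots, two seats, so the quota is 5. A's two surplus ballots
# transfer to C, who then overtakes B for the second seat.
ballots = [["A", "C"]] * 7 + [["B"]] * 2 + [["C"]]
print(transferable_vote_count(ballots, seats=2))   # ['A', 'C']
```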

Ballots would go to a central office, where officials would count them and tally how many first-choice, second-choice, and third-choice votes (and so on) each candidate received. Anyone who reached the required quota would be awarded a seat, until the legislature was full—using first-choice votes first, then second-choice, then third, and continuing down the ranking as needed. The ballots and every step of the calculation would be stored in public archives, open to anyone with a stake in the result. And if someone who clearly hit the quota somehow wasn’t officially seated, they could prove the mistake with straightforward evidence.

That’s the core of the system. If you want the nuts-and-bolts version, the best guides are Hare’s own short book on elections (published in 1859) and Henry Fawcett’s pamphlet from 1860 that explains Hare’s plan in a stripped-down, practical way. Fawcett’s value is that he drops a few of Hare’s extra add-ons—useful in theory, but not worth the complexity—so the basic mechanism is easier to see. The more you study these explanations, the more obvious the point becomes: the plan is not only workable, it has advantages so large that I see it as one of the biggest improvements ever proposed in how representative government actually functions.

What the system achieves, first and foremost, is proportional representation for the entire electorate. Not just two giant parties, plus the occasional large local minority that happens to be concentrated in one place. Under Hare’s approach, any minority anywhere in the country—so long as it’s big enough to deserve representation on plain fairness—can earn a seat.

Second, it fixes a quiet absurdity in many existing systems: people are routinely told they’re “represented” by someone they never chose. Here, that stops. Every member of Parliament would be backed by a unanimous constituency: a quota’s worth of voters who all actively selected that person. And crucially, they aren’t limited to whatever two or three names happen to be on the local ballot. They can choose from candidates across the country. So instead of being forced to pick between a couple of stale options simply because those are what the local party machines put on the shelf, voters can support the person who truly matches their views—or the person whose judgment and character they most trust.

That changes the relationship between representative and voter in a way we’ve barely experienced. The bond becomes personal and real:

  • The voter is connected to their representative, not a random winner of a local contest.
  • The representative is connected to actual people, not just a town name on a map.
  • Each vote reflects either genuine agreement with the candidate’s principles or a deliberate decision to delegate judgment to someone admired for ability and integrity.

This doesn’t mean local places lose their voice. Everything worth keeping about local representation remains. Parliament shouldn’t spend much time on purely local business—but as long as it must handle some of it, every significant locality needs people specifically tasked with watching its interests. Under this system, they still exist. Any locality large enough to reach the quota on its own will usually choose someone from within the area—someone who knows local conditions and lives there—assuming there’s a qualified local candidate. It’s mainly the local minorities, who can’t win that local seat, who will look beyond the district for someone likely to attract additional votes from elsewhere.

Where Hare’s system really shines is the quality of the people it can bring into the legislature. Right now, it’s increasingly hard for someone to enter Parliament on talent and character alone. The easiest paths are:

  • having strong local influence,
  • spending huge sums of money,
  • or being “parachuted” in by a major party—selected in a club or committee far away and sent to a district as a reliable vote.

Hare’s system breaks that bottleneck. If voters don’t like the local lineup—or if they can’t get their preferred local candidate elected—they can use the rest of their ranked choices to support respected national figures who share their general political outlook. As a result, almost anyone who has earned an honorable reputation—through writing, public service, expertise, or principled leadership—would have a real shot at reaching the quota even without local power and without swearing loyalty to a party. With that possibility on the table, you could reasonably expect far more of these independent, capable people to run than ever before.

Think about what this means in practice. There are many able, original thinkers who could never win a majority in any one existing district. But they may have small pockets of support spread across the country—people in nearly every region who know their work and respect it. Under today’s rules, that scattered support is wasted. Under Hare’s rules, it adds up. Count every one of those votes together, and the candidate might reach the quota. I can’t see any other realistic system that makes it so likely that Parliament contains the country’s true elite of ability.

And the improvement doesn’t come only from minorities. Majorities get better candidates too. Why? Because the majority no longer faces a crude ultimatum: vote for the party’s local nominee or stay home. If the nominee is weak, the majority’s voters can simply shift their support to someone better—possibly even someone outside the district—without throwing their votes away. That forces local party leaders to stop treating voters like a captive audience. They can’t push through the first person who shows up with party slogans and a few thousand pounds to spend. If the majority wants to win, it must offer a candidate worthy of being chosen; otherwise its voters will drift to stronger competitors and the minority may win.

That pressure changes local politics from the inside:

  • The majority stops being dominated by its least admirable members.
  • Communities start putting forward their most capable local figures.
  • And districts begin competing to attract strong candidates, especially those with reputations that can pull in “extra” votes from outside.

Now zoom out to the broader trend. Representative government, like modern civilization, naturally drifts toward collective mediocrity. Expanding the electorate and lowering barriers to voting tend to shift power toward social groups that, on average, have had less access to education. That doesn’t automatically make democracy bad. But it does create a real question: even if the most educated and capable people are outnumbered—as they will be—will they at least be heard?

In a false democracy—one that gives representation only to local majorities—the educated minority can end up with no voice in the legislature at all. The United States is often cited as a clear example of this defect: highly cultivated citizens, unless they’re willing to suppress their own judgment and become obedient spokespeople for less informed opinion, often don’t even try to run for Congress or state legislatures because they expect to lose. If something like Hare’s system had been available to the founders, America’s federal and state assemblies would likely have included far more of its most distinguished minds—and democracy would have avoided one of the sharpest criticisms made against it, along with one of its most dangerous weaknesses.

Against that weakness, personal representation works almost like a specific remedy. The educated minority, scattered across many districts, could coordinate—without any special privilege—to elect a number of representatives proportional to their actual numbers. They would have every incentive to choose the ablest people available, because only by pooling their dispersed votes could they turn a small numerical strength into something politically significant.

Meanwhile, majority representatives would still outnumber them—exactly as the majority outnumbers the minority in the country. The majority could still win votes. But it would no longer govern in a vacuum. It would argue and legislate in the presence of capable critics, under scrutiny that it can’t ignore. When disagreements arise, the majority would have to answer real arguments with reasons that at least look persuasive. And because they couldn’t simply assume they’re right (as people often do when speaking only to their own side), they would sometimes discover they’re wrong. Since a fairly chosen national assembly can be expected, on the whole, to be well-intentioned, contact with stronger minds would gradually lift the quality of its thinking—even when the contact is combative.

This matters for public life beyond Parliament’s walls. Unpopular ideas wouldn’t be confined to books and niche periodicals read only by sympathizers. Competing views would confront each other directly, in the same room, before the country. Then we’d see whether an opinion that wins by counting votes would also win if votes were weighed—if judgment, competence, and argument had a visible stage.

People often do have a genuine instinct for spotting ability, when they’re given a fair chance to see it demonstrated. If an able person fails to gain any influence, it’s usually because the institutions keep them out of view. Ancient democracies didn’t have that problem: any talented speaker could step into the public space and address the citizens without needing permission. Representative systems, by contrast, can accidentally lock out exactly the kind of Themistocles or Demosthenes whose counsel might save a nation—leaving them never able to win a seat at all. If you can reliably secure even a handful of first-rate minds inside the legislature, their influence will be felt in deliberation, even when they run against the prevailing popular mood. And I can’t imagine a system that guarantees their presence as effectively as Hare’s.

Finally, this group would serve another essential function that most democracies barely provide for: antagonism—the organized resistance that keeps the strongest power from becoming the only power.

In every government, one force eventually becomes stronger than the rest. And whatever is strongest tends to push, relentlessly, toward total dominance. Sometimes it does that intentionally; often it happens without anyone fully noticing. But the direction is the same: it tries to make everything bend to it, and it isn’t satisfied as long as any durable counterweight exists—any influence that refuses to align with its spirit.

Here’s why that’s dangerous. If the dominant power ever succeeds in crushing all rivals and remaking society entirely in its own image, improvement stops and decline begins. Progress has never been produced by a single force that contains every ingredient of human good. Even the best ruling power carries only part of what a society needs. The rest has to come from elsewhere, which means there must be a living counterforce.

History backs this up. Long-running progress almost always coincides with sustained tension between major powers in society:

  • spiritual authority versus temporal authority,
  • military or landed interests versus commercial and industrial classes,
  • kings versus peoples,
  • religious orthodoxy versus reform movements.

When one side won so completely that the struggle ended—and nothing else replaced it—stagnation followed, and then decay.

Majority rule is generally less unjust, and often less harmful, than many other kinds of dominance. But it brings the same structural danger, and in some ways even more certainly. Under a monarchy or oligarchy, “the many” always exist as a rival force. They may not be strong enough to control the rulers, but their opinion can still support dissenters and give social backing to those resisting the governing tendency. Under a supreme democracy, there may be no “one” or “few” powerful enough for dissenting opinions or threatened interests to lean on. So the central problem of democratic government has often been this: How do you create, inside a democratic society, the kind of social support that earlier societies accidentally produced—a firm foothold for individual resistance, a protective rallying point for unpopular ideas and vulnerable interests? Without such a foothold, older societies—and nearly all modern ones—either broke apart or settled into stagnation, which is just slow deterioration.

This is exactly the gap that personal representation is designed to fill, as fully as modern conditions allow. If you need a corrective to the instincts of a democratic majority, the only realistic place to find it is the educated minority. But in the ordinary setup of democracy, that minority has no channel—no “organ”—through which to act. Hare’s system gives it one.

The representatives elected by the combined votes of minorities would form that channel in its strongest form. Creating a separate political organization for the educated classes, even if it were possible, would be offensive and divisive—and the only way it could avoid provoking resentment would be to have no real influence at all. But if the best of the educated minority enter Parliament by exactly the same right as everyone else—representing the same number of citizens, the same fraction of the national will—no one can reasonably object to their presence. And from that position, they can do two vital things: make their arguments heard on every major subject, and take an active role in the actual work of governing.

In practice, their competence would likely earn them more than their numerical share of responsibility in administration—much as Athens regularly entrusted important duties to capable leaders rather than to its loudest demagogues. The educated minority would count only as their numbers when votes are tallied. But as a moral and intellectual power, they would count for far more, because knowledge tends to carry influence—especially when it is visible, tested in debate, and brought into direct contact with the minds of the majority.

An arrangement better designed to keep popular opinion tethered to reason and justice—and to protect democracy from the temptations that prey on its weak spots—would be hard to invent. Done this way, a democratic society would gain something it almost never gets by accident: leaders whose intellect and character actually rise above the average. A modern democracy would have its occasional Pericles—and, more importantly, a steady supply of capable, guiding minds.

So with all these foundational arguments in favor, what’s the case against it? Almost nothing that survives real scrutiny—once people can be persuaded to give a new idea real scrutiny in the first place. Yes, if there are people who, under the banner of “equal justice,” really want to swap one form of class domination for another—replacing the ascendancy of the rich with the ascendancy of the poor—then they’ll dislike any plan that levels both. But I don’t think that desire currently exists among the working classes in England, even if I can’t promise what future opportunities and demagogues might stir up.

In the United States, where the numerical majority has long held something close to collective despotism, they’d likely cling to it the way a single despot—or an entrenched aristocracy—clings to power. England, by contrast, seems not yet in that mood. The English democracy, for now, would likely be satisfied with protection against other people’s class legislation, without insisting on the power to legislate in its own class interest in return.

The “Unworkable” objection (usually from people who haven’t looked)

Among those who publicly oppose Mr. Hare’s plan, some claim it simply can’t work. But in most cases, you discover they’ve barely heard of it—or they’ve skimmed it and moved on.

Others object on a different ground: they can’t stomach losing what they call the local character of representation. In their imagination, a nation isn’t made up of people; it’s made up of geographic units—boxes on a map produced by statistics and boundary lines. Parliament, they think, must represent towns and counties, not human beings.

But nobody is proposing to erase towns and counties. Towns and counties are represented when the people who live in them are represented. Local feelings don’t float in the air; someone has to feel them. Local interests don’t exist on their own; someone has to have them. If the people who carry those feelings and interests have their proper share of representation, then those feelings and interests are represented—along with everything else those people care about.

And once you see that, it’s hard to understand why geography should be treated as the only identity worth political recognition. Why should people whose strongest commitments are moral, religious, intellectual, occupational, or otherwise—things they value more than a county line—be forced to treat location as their sole political label? The idea that Yorkshire or Middlesex has “rights” separate from the people who live there—or that Liverpool and Exeter are the true objects of a legislator’s care, as distinct from their inhabitants—is a striking example of how words can hypnotize us into nonsense.

“The English will never accept it” is not an argument

Often, objectors end the discussion with a shrug: “England will never consent.” And what exactly are the people of England supposed to think of someone who issues that verdict on their intelligence and judgment—without even bothering to ask whether the thing is right or wrong?

I don’t believe the English people deserve to be branded, without trial, as hopelessly prejudiced against anything that can be shown to be good for themselves or for others. In fact, when prejudices cling stubbornly, the fault often lies with those who keep declaring them unbeatable—so they can excuse themselves from ever trying to change them. Any prejudice becomes “insurmountable” if the people who don’t share it still bow to it, flatter it, and treat it like a law of nature.

In this case, though, what most people feel isn’t deep hostility—it’s the normal, healthy suspicion we have toward new systems that haven’t been thoroughly argued through. The real obstacle is unfamiliarity. And unfamiliarity is powerful: our imagination can accept a major change in substance more easily than a small change in names and procedures. Still, if an idea truly has value, time dissolves that resistance. In an age of constant public debate and widespread interest in improvement, what once took centuries can take only years.

Criticism is maturing—and that’s a good sign

Since this treatise first appeared, several critiques of Mr. Hare’s proposal have shown at least one thing: people are finally examining it carefully, and with more intelligence than before. That’s the usual trajectory of major reforms.

At first, they meet blind prejudice and arguments that only blind prejudice could find persuasive. As the prejudice weakens, the objections often get sharper for a while—because once people understand the plan better, they can see its genuine inconveniences and the practical reasons it might not instantly deliver every benefit it’s capable of. You learn the friction points along with the strengths.

But among all the objections I’ve encountered that even look reasonable, I haven’t found one that supporters of the plan hadn’t already anticipated, considered, and debated—and then judged either unreal or easily manageable.

The Central Office fear: fraud, or suspicion of fraud

The most “serious-looking” objection is also the easiest to answer: people assume it would be impossible to prevent fraud—or even to prevent the suspicion of fraud—in the work of the Central Office.

The proposed safeguards are simple:

  • Full publicity
  • Complete freedom to inspect the voting papers after the election

Critics reply: “That won’t help, because to verify the returns, each voter would have to redo all the clerks’ work.”

That would indeed be a crushing objection—if the plan required every voter to personally audit the whole process. It doesn’t. The only verification you can reasonably ask of an ordinary voter is to confirm that their own ballot paper was used as it should have been. For that purpose, each paper would be returned, after the proper interval, to the place it came from.

And the broader checking—the heavy lifting—would happen the way it already tends to happen in contested elections: through motivated competitors. The candidates who lost, especially those who believe they should have won, would hire agents to verify the entire process. If they found serious errors, the documents would go before a committee of the House of Commons, which could examine and verify the national electoral operations at perhaps a tenth of the time and expense currently required to scrutinize a single disputed return under the existing system.

Two worries about “gaming” the system

Even granting that the plan is workable, critics say its benefits could be undermined in two ways—producing harm instead of improvement.

1) It would empower cliques and single-issue groups.
They imagine tight little organizations—sectarian alliances, special-purpose associations (temperance leagues, ballot societies, liberation societies), class-based coalitions, or groups bound by religious identity—using coordination to magnify their influence.

2) It would become a party “ticket” machine.
They worry each party would send around a master list of hundreds of candidates to be voted for nationwide. The party’s supporters in every constituency would vote the list as a block. That organized vote, critics say, would swamp any independent candidate. This “ticket system,” as in America, would allegedly entrench the big parties, defeated only now and then by the same small sectarian groups and hobbyhorse cliques mentioned above.

The real answer: organization matters—but the current system makes it everything

Here the reply is decisive. Nobody claims that under Mr. Hare’s plan—or any plan—organization stops being an advantage. Scattered individuals will always struggle against disciplined, coordinated groups. Since Mr. Hare’s proposal can’t change human nature, we should expect every organized party or faction, large or small, to use its organization to the fullest.

But notice the contrast:

  • Under the current system, organized forces are effectively everything.
  • The scattered, independent elements are effectively nothing.

Right now, voters who aren’t bound to one of the major parties—or to one of the smaller sectarian blocs—often have no way to make their votes count for what they truly want. Mr. Hare’s plan gives them a tool. They may use it clumsily or skillfully. They might win their fair share of influence—or less than their fair share. But whatever influence they gain would be a net improvement over their present near-zero.

And if you assume every small interest will learn to organize, why on earth would we assume the great interest of national intellect and character would remain unorganized? If there can be temperance tickets and ragged-school tickets, why couldn’t one public-spirited person in a constituency circulate a “personal merit” ticket—pointing voters toward candidates of recognized ability and integrity? And why couldn’t a handful of such people, meeting in London, select the most distinguished names from the full list—without obsessing over party orthodoxy—and publish that shortlist cheaply across the country?

Remember the crucial structural difference. Under the current elections, the two great parties’ influence is essentially unlimited. Under Mr. Hare’s scheme it would still be large, but it would be bounded. Neither the major parties nor the smaller cliques could elect more members than corresponds to the number of supporters they actually have.

That’s the reverse of how American ticket politics works. In America, people vote the party ticket because elections are decided by a simple majority, and voting for someone who can’t reach that majority is treated as a wasted vote. Under Mr. Hare’s system, a vote for a person of known worth has almost as good a chance of succeeding as a vote for a party nominee. That creates a healthy pressure: any Liberal or Conservative who is more than a mere party label—anyone with preferences beyond the party line—would be tempted to strike out the obscure, insignificant names on the party list and replace them with people who are a genuine credit to the nation.

And once that possibility becomes real, party managers have an incentive to adapt. When drawing up party lists, they’d be less likely to fill them entirely with pledged party loyalists, and more likely to include respected national figures—people broadly admired, and more naturally aligned with that party than with its rival.

The subtle risk: “half-independent” voting

There is, however, a real difficulty, and it shouldn’t be glossed over. Independent-minded voters—those who want to support unpatronized people of genuine merit—might write down a few such names, then fill the rest of their list with ordinary party candidates. Ironically, that would help inflate the numbers against the very kind of representation they’d prefer.

If that proved serious, there’s an easy fix: limit the number of secondary (contingent) votes. Nobody has an informed, individual preference among 658 candidates—probably not even among 100. It would be reasonable to cap each voter’s list at twenty, fifty, or whatever number makes it plausible that the voter is exercising personal judgment rather than behaving as anonymous party rank-and-file.

But even without such a rule, the problem would likely shrink as soon as the system became widely understood. In fact, the very cliques people fear would have a strong incentive to teach the remedy. Each small minority group would tell its members:

  • “Vote for your special candidates only,” or
  • “At least rank them highest, so their votes count for them first—either reaching the quota on first choices, or before being used up on anyone else in the transfers.”

And voters outside those cliques could learn from the same lesson.

Minorities get exactly what they deserve—no more, no less

Under this system, smaller groups would have precisely the amount of power they ought to have: exactly what their numbers entitle them to, not one particle more. And to secure even that fair share, they’d be pushed toward a healthy strategy: choosing candidates who can attract support beyond the clique itself—people whose broader qualities persuade voters who don’t share the sectarian badge or single-issue obsession.

It’s also worth noticing how public arguments for the existing system conveniently change shape depending on what they’re trying to fend off. Not long ago, defenders of the old arrangements loved to claim that under them all “interests” and “classes” were represented. And in a sound Parliament, important interests and classes should indeed have representation—meaning spokespeople and advocates.

But from that, defenders leapt to a far more dangerous claim: that we should keep a system that gives partial interests not merely advocates, but control of the tribunal itself.

Now the tune flips. Mr. Hare’s plan prevents partial interests from commanding the tribunal while still ensuring they have advocates—and for that very reason it’s attacked. Because it combines the benefits of class representation and numerical representation, it gets shot at from both sides.

The real barrier: people think it’s too complicated

Yet these aren’t the objections that truly block adoption. The real barrier is the exaggerated belief that the plan is too complex, and the resulting doubt about whether it can actually be implemented.

The only complete answer is trial. Once the plan’s merits are better understood and it wins broader support among impartial thinkers, the right move is to introduce it experimentally in a limited arena—say, the municipal election of a large town.

A perfect opportunity was missed when Parliament chose to divide the West Riding of Yorkshire to give it four members, instead of trying the new principle by keeping the constituency whole and allowing a candidate to win by securing—through first votes or through secondary votes—one quarter of the total votes cast.

Experiments like that wouldn’t fully test the plan’s value. But they would demonstrate how it works. They would show people it isn’t impracticable, familiarize the public with its machinery, and provide real evidence about whether the feared difficulties are substantial or imaginary.

The day Parliament authorizes such a partial trial, I believe it will open a new era of parliamentary reform—one that gives representative government a form suited to its mature and successful phase, once it has outgrown the militant stage in which the world has so far mostly seen it.

This blunder by Mr. Disraeli (from which, to his great credit, Sir John Pakington soon took the opportunity to dissociate himself) is one among many vivid examples of how little Conservative leaders understand Conservative principles. Without demanding the unlikely virtue that parties should fully grasp their opponents’ principles—much less know when to apply them—we can still say it would be a real improvement if each party understood and acted on its own. England would benefit greatly if Conservatives voted consistently for what is genuinely conservative, and Liberals for what is genuinely liberal.

We wouldn’t have to wait long, then, for reforms that—like this one and plenty of other major measures—are both genuinely democratic and genuinely conservative in the best sense: they expand political power while also stabilizing the system.

The problem is that the party that calls itself “Conservative” is often the worst offender when it comes to blocking exactly that kind of reform. And here’s the grim irony: if someone proposed a policy on any topic that was truly broad-minded, long-range, and responsibly conservative—so sensible that even Liberals were willing to support it—most Conservatives would still charge in on instinct and kill it before it could pass.

Meanwhile, the idea at the center of this chapter—personal representation (a voting method designed to give minorities a fair share of representation, not just a token voice)—has moved beyond theory.

In the years since the last edition of this book, we’ve learned something important: the “experiment” described here hasn’t just been imagined. It’s been tried in the real world, at a scale larger than any city or province. In the constitution written for the entire Danish kingdom (not merely Denmark proper), lawmakers built in minority representation using a plan so close to Thomas Hare’s that it’s hard to tell them apart.

That matters for two reasons:

  • It proves the scheme can be implemented, not just admired on paper.
  • It’s another example of a recurring phenomenon in intellectual history: when society reaches a certain point, the same solution can occur independently to different capable minds, even without any contact between them.

In Denmark’s case, the public learned the details through a strong paper by Robert Lytton, published among the House of Commons reports in 1864. The upshot is that Hare’s plan can now be described as Hare’s—or Andræ’s. It has advanced from “interesting proposal” to working political reality.

So far, Denmark is the only country where personal representation has become a standing institution. But the speed at which the idea has spread among serious political thinkers has been striking—especially in places where universal suffrage is treated as unavoidable.

It has attracted support from two very different camps:

  • Friends of democracy, who see it as the clean logical consequence of democratic principles.
  • Reluctant democrats, who accept democracy because they must, and want a practical way to correct its obvious hazards.

The early intellectual momentum came from Switzerland. France followed. And in France, very recently, two highly influential writers—one from the moderate liberal tradition and one from the most radical democratic tradition—have publicly endorsed the plan. In Germany, one of the country’s leading political thinkers, who also serves in the Liberal cabinet of the Grand Duke of Baden, is counted among its supporters.

The same current is part of the broader awakening of political thought in the United States—an awakening already being forced and sharpened by the enormous, ongoing struggle over human freedom. And in Australia, in the two largest colonies, Hare’s plan has been seriously debated in their legislatures. It hasn’t been adopted there yet, but it already has a strong base of support.

Just as telling as that support is the quality of the debate. Speakers from both major sides—Conservative and Radical—have shown they grasp the principles clearly and completely. That alone is a strong answer to a common objection: that the plan is “too complicated” for ordinary public life.

It isn’t. What’s missing isn’t intelligence; it’s attention. All that’s required to make the plan—and the advantages it promises—perfectly clear to everyone is that the public reaches the point where they think it’s worth the effort to actually focus on it.

VIII — Of The Extension Of The Suffrage

A representative democracy worth the name isn’t just “majority rules.” It’s a system that truly represents everyone—including minorities of interest, minorities of opinion, and minorities of intellect. In that kind of democracy, people who are outnumbered still get heard, and they still have a real shot at influence—because character and argument can carry weight, not just headcounts. That’s the only democracy that is genuinely fair: government of all, by all.

If we built it, it would avoid many of the worst problems of the so-called democracies we see around us—the ones that have shaped most people’s idea of what “democracy” means. And yet even this better version has a vulnerable spot: the numerical majority still holds ultimate power if it decides to use it.

Here’s the catch. In most societies, that numerical majority is largely drawn from a single broad class, with similar assumptions, shared biases, and a common way of looking at the world—and, frankly, not the class with the most education and cultivation. So even a well-designed democracy can still slide into the familiar disease of politics: class government. It would be less extreme than the “democracy” we often get now (which is really just one class ruling while wearing a democratic label), but it would still be checked only by the majority class’s own good sense, moderation, and self-restraint.

If that’s all we can rely on, then all our grand talk about constitutions is basically theater. The point of constitutional government isn’t to hope the people with power will behave. It’s to build a system where they can’t easily abuse it. No one trusts a constitution because it promises the powerful won’t misuse power; people trust it only if it makes misuse genuinely difficult.

So democracy isn’t the best possible form of government unless we reinforce its weak side—unless we organize it so that no class, not even the largest, can shove everyone else into political irrelevance and steer laws and administration purely for its own advantage. The challenge is to prevent that kind of domination without giving up what’s best about popular government.

Why “Just Limit the Vote” Doesn’t Solve It

You don’t solve the problem by narrowing the electorate—by forcing some citizens out of political life. One of the greatest benefits of free government is the way it educates people: not just their minds, but their feelings and moral instincts too. That education reaches all the way down to the poorest citizens when they’re asked to take part in decisions that shape the country’s biggest interests.

I’ve argued this before, but I’m coming back to it because people still underestimate it. Many treat it as a kind of fantasy: the idea that letting working people vote could be a powerful engine of mental growth. It seems too small a cause to produce such a big effect.

But if we ever want real, widespread intellectual development—not just among elites, but across humanity—this is one of the main roads that can actually get us there.

If anyone doubts it, look at the evidence Tocqueville assembled, especially his judgment of Americans. Travelers have long noticed a striking fact: the average American seems, in some real sense, both patriotic and mentally cultivated. Tocqueville explained how closely those traits are connected to democratic institutions. Nowhere else has there been anything like such a broad diffusion of educated ideas, tastes, and sentiments—or even a serious belief that it was possible.

And even so, America’s system is only a partial example of what democracy could do, because it has a major flaw: political life there is an excellent school, but it shuts out many of the best teachers. The strongest minds in the country are often kept away from national representation and public office almost as effectively as if they were legally barred.

There’s another problem, too. When “the people” are treated as the single source of power, ambition flows toward them the way ambition in a dictatorship flows toward the ruler. The majority becomes an object of flattery and performance. Politicians and public figures chase popularity with the same kind of adulation and sycophancy you see around a monarch. Power has a corrupting side, and in that setting it can expand just as fast as its improving and ennobling side.

Still, even with that contamination, democratic institutions have produced a clear superiority in mental development among America’s lowest class compared with the same class in England and elsewhere. So imagine what could be achieved if we could keep the good educational influence while reducing the bad.

We can do some of that—but not by stripping political rights from the very people who have the fewest intellectual stimuli elsewhere. For many working people, political life is an invaluable doorway into “large, distant, complicated” interests—things beyond the narrow horizon of daily routine. Close that doorway, and you cut off one of the most effective channels by which broader intelligence and public spirit spread through society.

What Voting and Politics Teach—In Plain Terms

Political discussion does something crucial for the manual laborer whose job is repetitive and whose life offers little variety of impressions or ideas:

  • It teaches him that faraway events and remote causes can directly affect his own life.
  • It trains him to connect personal well-being to broader systems—laws, markets, wars, alliances, public spending.
  • It helps someone whose daily concerns are tightly centered on a small circle learn to feel with others—fellow citizens he may never meet.
  • It turns him, consciously, into a member of a large community, not just an individual surviving day to day.

But this educational force mostly bypasses people who don’t vote and aren’t trying to get the vote. Political arguments aren’t aimed at them. Their opinions aren’t being courted. Nothing depends on what they decide. So there’s little reason for them to form a decision at all.

Their situation is like spectators in a courtroom compared with the jury. The speeches are made for the twelve people whose verdict matters, not for the audience watching from the benches.

In a government that otherwise calls itself popular, anyone with no vote and no realistic path to get one ends up in one of two states:

  • permanent resentment, as a standing outsider; or
  • indifference, as someone who feels society’s affairs aren’t his business—something managed by others.

He becomes the kind of person who “has no business with the laws except to obey them,” and no role in public concerns except to watch. How much he will know or care about politics from that position can be roughly gauged by comparing what the average middle-class woman of the time knows about it with what her husband or brothers know: not because she’s incapable, but because the system doesn’t treat her judgment as something it needs to win.

Exclusion Is Also Personal Injustice

Even setting aside the educational effects, it’s simply unjust to deny any person the ordinary privilege of having his voice counted in affairs where he has the same stake as others—unless you can show that excluding him prevents some greater harm.

Think about the obligations the state can impose. If someone:

  • must pay taxes,
  • may be compelled to fight,
  • must obey laws,

then he should have the legal right to ask: For what? He should be entitled to have his consent requested and his opinion counted—counted at its real worth, not inflated beyond it, but not erased either.

A mature, civilized nation shouldn’t create political pariahs—people permanently shut out of citizenship—except where they’ve disqualified themselves through their own actions. Everyone is degraded, whether they notice it or not, when others claim unlimited authority to steer their destiny without even consulting them.

And even if humanity were far wiser than it has ever yet been, it still wouldn’t be realistic to expect fair treatment for the voiceless. Rulers, and the classes that dominate politics, have to pay attention to the interests and wishes of voters. To those who are excluded, they may attend or not, as they find convenient. Even honest rulers are usually too busy with the matters they can’t ignore to spend much thought on people they can disregard with no penalty.

So no suffrage arrangement can be permanently satisfying if it permanently excludes any person or class. The vote must be open to every adult who wants to obtain it.

Necessary Exclusions: Minimal and Justified

That said, some exclusions have strong practical reasons behind them. They’re unfortunate in themselves, and the ideal is to remove the conditions that make them necessary—but until then, they may be required.

Basic literacy and numeracy. I can’t accept that anyone should vote who can’t:

  • read,
  • write, and
  • do basic arithmetic.

Justice demands that the means to gain these skills be within reach of everyone, free or at a cost even the poorest working person can manage. If society truly did that, then giving the vote to someone who couldn’t read would seem as absurd as giving it to a child who can’t speak. In that situation, it wouldn’t be society excluding him; it would be his own refusal to do the minimal work required.

If society hasn’t done its duty—if it hasn’t made this elementary education genuinely accessible—then denying the vote is a hardship. But it’s a hardship that should still be endured, because two obligations are at stake, and the more fundamental must come first: universal education must come before universal enfranchisement.

Only someone blinded by abstract theory could claim that people should be handed power over others—over the whole community—without the most basic tools needed to manage their own lives sensibly, pursue their interests intelligently, and understand the interests of those closest to them.

Could we demand more knowledge than that? In a perfect world, yes. It would be very desirable to require some grasp of:

  • basic geography (the shape of the earth and its divisions),
  • general history,
  • and the history and institutions of one’s own country.

Those kinds of knowledge are important for using the vote intelligently. But in this country—and probably everywhere except parts of the Northern United States—they aren’t available to everyone. Worse, we don’t have reliable, trustworthy machinery to test for them. Trying to enforce such requirements now would invite bias, trickery, and fraud. Better to grant the vote broadly—or even to withhold it broadly—than to hand it out or deny it at the discretion of a public official.

With reading, writing, and arithmetic, though, the test can be simple and fair. Anyone registering could be required, in front of the registrar, to copy a sentence from an English book and solve a basic proportional arithmetic problem (the “rule of three”). With fixed rules and full publicity, it would be easy to apply honestly. This condition should accompany universal suffrage, and after a few years it would exclude only those who care so little about voting that their ballot wouldn’t usually reflect a real political opinion anyway.
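
For readers who haven’t met the term, the “rule of three” is the old schoolroom name for solving a simple proportion: given three known quantities, find the fourth. The figures below are invented purely for illustration (they aren’t drawn from the text); they only show the modest level of arithmetic such a registration test would ask for.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative only: a typical "rule of three" problem of the kind a registrar might set.
% If 3 yards of cloth cost 9 shillings, what do 7 yards cost at the same rate?
\[
  \frac{9\ \text{shillings}}{3\ \text{yards}} \;=\; \frac{x}{7\ \text{yards}}
  \qquad\Longrightarrow\qquad
  x = \frac{9 \times 7}{3} = 21\ \text{shillings}.
\]
\end{document}
```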

“No Taxation Without Representation”—And Also the Reverse

There’s another principle that matters just as much: the body that votes taxes—national or local—should be elected only by those who pay something toward those taxes.

If people who pay nothing can vote on spending other people’s money, they have every incentive to spend freely and no incentive to economize. In matters of finance, giving them power violates a core idea of free government: it separates the power to spend from any personal stake in spending responsibly.

It’s like letting them reach into other people’s pockets for any purpose they choose to label “public.” In some large American cities, this has produced a level of local taxation so heavy it’s almost without parallel—and borne almost entirely by the wealthier classes.

The British constitutional tradition says representation should be coextensive with taxation: it shouldn’t fall short of taxpayers, but it also shouldn’t extend beyond them.

To make that compatible with universal voting, something else becomes essential (and is desirable for other reasons as well): taxation, in an obvious, visible form, needs to reach the poorest class too.

In this country, and in most others, few working families contribute nothing to public revenue. Even the poor pay indirect taxes through everyday purchases—tea, coffee, sugar, and so on. But indirect taxes don’t feel like taxes. Unless someone is educated and reflective, he doesn’t naturally connect his interest with keeping public spending low in the same way he would if money were demanded directly from him.

And even if he did make that connection, he’d likely still vote for lavish expenditure—while making sure the bill didn’t come in the form of higher taxes on the items he personally consumes.

So it would be better if some portion of taxation were direct and universal—for example:

  • a simple head tax on every adult; or
  • allowing every adult to become an elector by voluntarily adding themselves, outside the usual order, to assessed taxes; or
  • requiring a small annual payment from each registered voter that rises and falls with the country’s total spending.

The point is that everyone should feel, unmistakably, that some of the money they help vote into existence is their money too—so everyone has a real personal interest in keeping the total down.

Disqualifying Dependency and Default

Whatever the precise arrangement, one disqualification follows from first principles: anyone receiving parish relief should be barred from voting.

Someone who can’t support himself by his own labor has no claim to the privilege of helping himself to other people’s money through the vote. By becoming dependent on the community for basic subsistence, he gives up—at least temporarily—his claim to equal political rights. Those whose contributions keep him alive can justly claim the exclusive management of common concerns to which he brings nothing, or less than he takes away.

As a voting condition, there should be a waiting period—say, five years before registration—during which the applicant’s name has not appeared on the parish relief rolls.

Similarly:

  • an uncertified bankrupt, or someone who has used insolvency laws to escape debts, should be disqualified until he has paid what he owes—or at least shown that he is not now, and has not been for a long time, dependent on charity;
  • persistent nonpayment of taxes (when it clearly can’t be chalked up to oversight) should disqualify as long as it continues.

These exclusions are not meant to be permanent. They demand only conditions that everyone is able—or ought to be able—to meet if they choose. They keep the vote open to anyone in the normal condition of adult life. If someone loses the vote under such rules, it’s because either he doesn’t care enough about it to do what he’s already obligated to do, or he’s in a general state of hardship and degradation where this added mark would barely be felt—and when he rises out of that condition, the mark disappears with everything else.

Even with Universal Suffrage, Two Big Dangers Remain

Over time, then—assuming only the restrictions we’ve discussed—we should expect that nearly everyone would have the vote, except that class which we can hope will steadily shrink: those receiving parish relief. With that small exception, suffrage would become effectively universal.

And it must expand that far if we’re aiming at a genuinely enlarged and elevated conception of good government.

But even with near-universal voting, another reality remains: in most countries, and especially in this one, the great majority of voters would be manual laborers. That means two dangers still loom, and in a very serious way:

  • too low a general level of political understanding, and
  • class legislation—lawmaking driven mainly by the interests of one class.

So the next question is whether there are any means to prevent these evils.

There are—if people honestly want them—not through clever gimmicks, but by following the natural order of human life, which almost everyone recognizes as sensible in matters where they have no vested interest or inherited prejudice pushing them the other way.

In human affairs, the basic rule is simple: if you’re directly affected by a decision, and you’re not legally under someone else’s authority, you have a legitimate claim to be heard—unless giving you that voice would put everyone else at risk. But “everyone should have a voice” does not automatically mean “everyone’s voice should count exactly the same.”

Think about any shared project. Two people own a business together, and they disagree. Does fairness require treating their judgments as equally valuable no matter what? If one of them knows more—or if both are equally smart but one is more honest and public‑spirited—then the better mind or better character is more likely to land on the better decision. Pretending otherwise doesn’t create equality; it just declares, as a matter of public principle, that better judgment and worse judgment are interchangeable.

Of course, there’s an immediate problem: between two individuals, who gets to say who’s wiser or better? In most cases, you can’t—at least not reliably or without turning politics into personal score‑settling. But if you look at people in large groups, you can get closer to the truth by using broad, publicly understandable signals that tend to track knowledge and competence.

It’s also important to draw a line here.

  • If something is purely personal—an individual right that concerns only you—then you should decide for yourself, even if a smarter person disagrees.
  • But when a decision belongs to both of you and you disagree, one of you has to give way: either the wiser person surrenders to the less informed judgment, or the less informed person defers to the wiser.

If you insist that it’s “unjust” for anyone to give way, you still have to ask a real question: which injustice is worse—forcing the better judgment to bow to the worse, or asking the worse to defer to the better?

National politics is exactly this kind of shared business, except on a much larger scale. And unlike a two‑person partnership, no one has to make a total surrender. Everyone’s view can still be counted—just not necessarily counted equally. In principle, you could design a system where every citizen votes, but some votes carry extra weight because the voter has stronger grounds for sound judgment.

Done right, that doesn’t have to be insulting. Being shut out of the common decisions is an insult: it tells you you’re nobody. But acknowledging that some people, on average, have better tools for steering a complicated joint enterprise isn’t inherently degrading. In everyday life, we accept this constantly. When you’re dealing with something you don’t understand well, you naturally give more weight to someone who clearly does—assuming you can see why they deserve that trust. The only real requirement is that the reasons for extra influence must be intelligible and feel fair to ordinary people.

Why property is the wrong yardstick

Let me be explicit: I don’t think it’s acceptable—except perhaps as a short‑term stopgap—to give extra political weight because someone has more property.

Yes, wealth is loosely correlated with education in many countries. On average, richer people have had better schooling than poorer people. But the correlation is messy and morally compromised. Luck and circumstance often matter more than merit in who ends up wealthy. And no amount of study guarantees someone will rise in social rank. So if you tie extra votes to money, you don’t just create a flawed system—you create one that will be hated, and rightly so. Worse, you discredit the entire principle of weighting votes by competence by making it look like a trick for the comfortable to lock in power.

In places like this country, people aren’t especially threatened by the idea that some individuals might genuinely be better informed. They are (and should be) deeply suspicious of “superiority” that rests on cash alone. So if we’re going to count one person’s political judgment as worth more than another’s, the only defensible basis is mental qualification—education, competence, and the habits of thought that go with them. The real task is figuring out some workable way to estimate that.

What could count as a practical test?

In a better world, we’d have either:

  • a genuinely national system of education that reliably brings most people to a known standard, or
  • a trusted, broadly accessible examination system that could measure competence directly.

Without that, you’re stuck using imperfect proxies. One of the best imperfect proxies is occupation—not because certain jobs magically confer virtue, but because some roles routinely demand planning, reasoning, and managing complexity.

For example, on average:

  • An employer must use “head work” more than a manual laborer, simply because directing the labor of others requires coordination and judgment.
  • A foreman is typically more informed than an ordinary laborer.
  • A skilled tradesperson usually has more training than an unskilled worker.
  • A banker, merchant, or manufacturer often manages broader and more complicated interests than a small tradesman, which tends to cultivate (and require) greater intelligence.

And notice the key point: it’s not merely claiming a higher function that proves anything. It’s doing it successfully. That’s also why the system would need guardrails—so people can’t briefly adopt a title just to harvest extra votes. One sensible safeguard would be a minimum period of real practice—say, three years—in the occupation before it counts.

Subject to conditions like that, the law could allow two or more votes to people performing these higher‑responsibility functions.

The same logic applies even more strongly to the liberal professions—when they’re genuinely practiced, not merely claimed. In places where entry into a profession requires serious study or a difficult exam, that’s already a public filter. Members could reasonably receive extra voting power on that basis.

You could extend this further to:

  • university graduates
  • people with credible certificates showing they completed advanced study at schools that actually teach higher subjects (with real safeguards against fake credentialing)

And where there are reputable examinations open to all—like the “local” or “middle‑class” Associate examinations established by Oxford and Cambridge—passing such a test could be a particularly good reason to grant extra votes, assuming the exams are fairly administered and genuinely accessible.

None of these details are beyond debate, and I’m not trying to lock in a final blueprint. My point is directional: this is the true ideal of representative government—to give everyone a voice, while giving more weight to those with better demonstrated capacity to judge the common interest. Political progress, as I see it, means moving toward that ideal using the best workable mechanisms available.

How many extra votes—and where to stop

If you ask how far this principle could go—how many votes someone might get for superior qualifications—the exact number matters less than people think. What matters is that the distinctions are:

  • non-arbitrary
  • publicly understandable
  • acceptable to a general sense of justice

But there is one absolute boundary. Extra votes must never be assigned so heavily that the educated voters, or any single class that largely overlaps with them, can outweigh the entire rest of the community. The purpose is to prevent crude class legislation by the least educated majority—not to enable a different kind of class legislation in the opposite direction. The system has to protect against both.
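
To make that boundary concrete, here is a rough arithmetic sketch. The share of plural voters and the vote weights below are assumptions chosen only for illustration; nothing in the text fixes these numbers. The point is simply that the same weighting scheme can sit safely below the line or blow past it, depending on how heavy the extra votes are.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hypothetical figures, for illustration only (not taken from the text).
% Suppose 5 percent of the electorate qualifies for plural votes of weight w,
% while the remaining 95 percent cast one vote apiece. The plural voters'
% share of the total weighted vote is then
\[
  s(w) \;=\; \frac{0.05\,w}{0.05\,w + 0.95}.
\]
% With w = 3:  s = 0.15 / 1.10, about 0.14; a real counterweight, nowhere near a majority.
% With w = 25: s = 1.25 / 2.20, about 0.57; the weighted minority now outvotes everyone
% else, which is precisely what the boundary above rules out.
\end{document}
```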

There’s also a fairness condition I consider essential: the plural‑vote privilege must be open even to the poorest person in the country. If someone can demonstrate—despite obstacles—that they meet the standard of intelligence and knowledge, they should be able to earn the extra votes. That means voluntary examinations, open to anyone, where a person can prove they meet the required level and be admitted to the higher voting grade.

A privilege that anyone can claim by meeting its real conditions doesn’t have to offend justice. But if you grant extra power based on rough presumptions (like occupation) and then refuse extra power to someone who can offer direct proof, you create something much harder to defend.

Why this matters politically, even if it’s not coming tomorrow

Plural voting is familiar in some local elections, but it’s so alien in parliamentary elections that it probably won’t be adopted soon—or happily. Still, I suspect a moment will come when the choice is basically between:

  • equal universal suffrage, one person one vote, with no weighting at all, or
  • some form of plural voting that tries to keep education from being politically drowned by sheer numbers

If you don’t want the first option, you should start getting comfortable with the second.

Even before plural voting exists in its direct form, we already tolerate indirect versions. For example, someone can effectively have more than one vote by being registered in more than one constituency and voting in each. That privilege today tracks money more than intelligence. But until we adopt a better educational test, I wouldn’t rush to abolish even this imperfect substitute, because it still functions—clumsily—as a proxy for education.

And we could expand that privilege in a way that ties it more directly to learning. Imagine a future reform that greatly lowers the property requirements for voting. It might be wise, alongside that, to let certain categories—university graduates, people who’ve credibly completed higher schooling, members of the learned professions, and perhaps others—register explicitly in those capacities and cast a vote as such in whatever constituency they choose, while also keeping their ordinary local vote where they live.

Universal suffrage without weighting: what could go wrong

Until society devises—and public opinion is willing to accept—some method of plural voting that gives education its due influence, universal suffrage won’t deliver its full benefits without also bringing what looks to me like a serious risk of offsetting harm. One possible transition (and we may have to pass through something like this) would be a patchwork compromise:

  • In some constituencies, remove all barriers so the members would be chosen mostly by manual laborers.
  • Elsewhere, keep the existing qualification—or change it in ways that prevent laborers from becoming dominant in Parliament overall.

That kind of compromise would preserve anomalies and even pile on new ones. Still, that alone isn’t a decisive objection. If a country won’t pursue right ends through a coherent system, it often ends up with irregular stopgaps—which can still be better than a tidy system aimed at the wrong goals, or a “clean” design that simply forgets one of the necessary aims.

But there’s a deeper problem: this compromise clashes with the interlocking constituency system required by Mr. Hare’s plan. Under the compromise, each voter stays trapped inside the one (or few) constituencies where they’re registered. If you don’t like the candidates in your local area, you’re simply out of luck—you aren’t represented at all.

Why I still defend plural voting in principle

I care intensely about freeing the people who already technically “have votes” but whose votes are effectively useless because they’re always outnumbered. And I have real faith in what truth and reason can do—if they can get a hearing and be argued well. So I wouldn’t despair of equal universal suffrage if proportional representation actually made it real by giving every minority a fair chance, as Mr. Hare’s system aims to do.

Even so, I’d still defend plural voting as a principle.

I’m not proposing it as a regrettable evil, like the exclusion of part of the public from voting, to be tolerated only as long as it’s needed to prevent greater harms. Nor do I treat equal voting as something intrinsically admirable whose side effects merely need managing. I see equal voting as, at best, a relative good: better than inequalities based on irrelevant accidents, but still wrong in principle, because it endorses the wrong standard and trains citizens into a damaging way of thinking.

It doesn’t help a country to have its constitution declare that ignorance deserves the same political power as knowledge. Institutions teach. They don’t just allocate authority; they shape what people come to believe is respectable, true, and deserved. A well-designed political system should help citizens hold a healthy belief: that everyone deserves some influence, but better judgment deserves more influence than worse judgment.

That “spirit” of institutions—often ignored, especially by English thinkers—may matter more than their technical rules. In countries without heavy oppression, institutions usually leave their deepest mark not through a single statute, but through the habits of mind they encourage.

American institutions, for example, have pressed hard on the American imagination the idea that any one man (with a white skin) is as good as any other. Many people feel that this comforting but false creed is tied to some of the less attractive features of American character. And it’s no small harm for a constitution to sanction a creed like that, because belief in it—spoken or unspoken—can damage moral and intellectual excellence as seriously as almost any governmental effect.

The “but it educates the masses” argument—and its limit

Someone might reply: even if equal influence gives too much power to the least instructed, it still promotes progress because it forces constant appeals to them. It exercises their minds. It compels the educated to explain, persuade, and correct prejudice. In that way, equal suffrage becomes a powerful engine of learning.

I agree—up to a point. Giving less educated classes some share of power, even a large share, really can produce that beneficial effect, and I’ve argued as much elsewhere.

But both theory and experience point to a reversal when those same classes become possessors of all power. When any group—whether one person, a few, or the many—becomes supreme over everything, it no longer needs the tools of reason. It can simply impose its will. And people who can’t be resisted are usually far too pleased with their own opinions to revise them, or to listen patiently to anyone who suggests they might be wrong.

The situation that most powerfully pushes people to sharpen their minds isn’t the comfort of having power—it’s the climb toward it. Growth happens when you’re close enough to influence outcomes that you have to think, argue, and persuade, but not so dominant that you can bulldoze your way through.

Of all the stopping places on the road to political control—whether temporary or permanent—the healthiest one is this: you’re strong enough to make reason win, but not strong enough to win against reason. In other words, you can’t simply impose your will; you have to justify it.

That’s the position we should try, as far as possible, to give to everyone in society:

  • the rich and the poor
  • the highly educated and the less educated
  • and every other group that social life divides into “classes”

If you combine that guiding idea with another principle that’s hard to deny—that greater mental capacity should sometimes carry greater political weight—you get as close as human government can realistically get to a workable kind of perfection. Not flawless, not utopian. Just the best fit we can manage for the messy complexity of real life.


Sex Is Not a Political Qualification

In arguing for a universal but graduated suffrage, I deliberately haven’t been treating sex as a relevant category at all. And I don’t think it is. Sex has no more to do with political rights than height, or hair color.

Everyone has the same stake in good government. Everyone’s welfare rises or falls with the laws. And everyone has the same need for a voice, because that’s how people protect their share of public benefits and public security. If anything, women need that protection more than men do, because physical weakness often makes them more dependent on law and social institutions for safety.

To justify excluding women from the vote, you’d have to fall back on assumptions that the modern world has largely abandoned. Almost no one now claims that women should be personal servants—people with no thoughts of their own, no ambitions, no work except unpaid domestic labor under the authority of a father, brother, or husband. Society already accepts, at least in principle, that women can and should do things that require judgment and independence:

  • hold property (certainly when unmarried, and increasingly even when married)
  • have financial and business interests
  • think, write, and teach

Once you admit those facts, there’s no solid principle left to support a political disqualification. The wider drift of modern thinking is steadily rejecting the idea that “society” has the right to decide, for each individual, what they’re fit to do and what they’re forbidden to attempt. If modern politics and political economy mean anything, they mean this: people are usually the best judges of their own capacities. Under genuine freedom of choice, most people will gravitate toward the work they’re generally best suited for, and the unusual paths will be taken mainly by the unusual individuals.

So either the whole direction of modern social reform has been a mistake, or we ought to follow it through to its logical end: the complete removal of exclusions and disabilities that block any honest kind of work—or public role—from any human being.


Even If You Believed in Female Subordination, the Vote Still Follows

But you don’t even need to go that far to prove women should vote.

Even if it were right—as it is not—that women should be a subordinate class, confined to domestic duties and under domestic authority, they would still need the protection of the suffrage to defend themselves against abuse of that authority. Political rights aren’t primarily about giving everyone a turn at ruling. They’re about making sure people aren’t ruled badly.

Most men are, and will remain for life, laborers—working in fields or factories. That doesn’t weaken their claim to the vote; it strengthens it, because they’re precisely the people most vulnerable to decisions made over their heads. And nobody seriously argues that women would use the vote dangerously. The harshest claim is usually softer than that: women would simply vote as dependents, echoing the wishes of fathers, brothers, or husbands.

Suppose they did. What then?

  • If women think for themselves, real good follows.
  • If they don’t, little harm is done.

Removing chains is still a benefit, even if the person you unshackle doesn’t immediately walk away. Simply ending the legal declaration that women are incapable of political opinion would be a moral upgrade: it would stop the law from officially treating half the human race as unfit to hold a preference about the most serious shared concerns of humanity.

Women would gain personally, too, in a very practical way. A vote is something they could give—something male relatives might want but couldn’t demand as a matter of right. And marriage itself would change: a husband would have to talk politics with his wife, because the vote would no longer be his private possession but a joint family concern.

People underestimate how much dignity is added, in the eyes of a coarse-minded man, by the simple fact that his wife can act on the outside world independently of him. That independence can win a kind of respect that mere personal virtues often can’t secure for someone whose public existence he believes he wholly owns.


The Vote Would Improve Men’s Voting, Too

The quality of voting would improve overall. A man would more often be forced to find honest reasons for his political choices—reasons that could stand up in conversation with someone of a more upright and impartial temperament sitting beside him. A wife’s influence would often keep him aligned with what he genuinely believes.

Yes, sometimes that influence would pull the other way—toward family advantage, social status, or vanity rather than public principle. But notice the key point: when a wife’s influence tends in that bad direction, it already operates at full strength today. In fact, it may operate more reliably now, because women are commonly kept so far from political life that they don’t experience politics as a domain where principle and honor matter.

And human beings tend to respect “points of honor” mainly in the arenas where they themselves are expected to have one. Many people are about as indifferent to another person’s political honor as they are to another person’s religious feelings when they don’t share that religion.

Give a woman a vote, and she comes under the discipline of a political point of honor. She learns to see politics as something she’s allowed to judge. And once you believe you’re allowed to have an opinion, you also feel you ought to act on it. That creates a new sense of personal responsibility. She no longer feels, as she so often does now, that if she can simply persuade the man, then everything is fine and his responsibility covers whatever she has pushed him into.

The only way to stop a woman’s indirect influence from warping a man’s political conscience is to replace indirect influence with direct responsibility. Encourage her to form her own judgment. Help her understand the reasons that should restrain conscience against the temptations of personal or family advantage. That’s how she stops being a disruptive force and becomes a responsible participant.


Property-Based Voting Makes the Exclusion Even More Absurd

So far I’ve been treating suffrage the way it would work in a well-designed society: as something tied to personal qualifications. But in countries like ours, voting often depends on property. And that makes the contradiction even harder to defend.

Think about what happens when a woman meets every condition demanded of a male voter:

  • she has independent means
  • she’s a householder and head of a family
  • she pays taxes
  • she satisfies whatever formal guarantees the system asks for

If the whole representation scheme is supposedly based on property and “stake in society,” then excluding her doesn’t just look unfair—it flatly contradicts the system’s own stated logic. The property principle is quietly thrown aside, and a special, purely personal disqualification is created for one purpose: to keep her out.

And the irrationality becomes almost comical when you add one more fact. In the very country that does this, a woman reigns. And the most celebrated ruler that country ever had was also a woman. At that point, the picture is complete: unreason, and injustice with only the thinnest disguise.

Let’s hope that as society continues tearing down the decaying structure of monopoly and tyranny—piece by piece—this exclusion won’t be the last stone left standing. The views of Bentham, Samuel Bailey, Hare, and many others among this country’s strongest political thinkers should reach every mind not hardened by selfishness or long-trained prejudice. And let’s hope that before another generation passes, the accident of sex—no more than the accident of skin—will be considered a sufficient excuse for denying anyone the equal protection and fair privileges of citizenship.

IX — Should There Be Two Stages Of Election?

Some representative governments have tried a two-step election system: ordinary voters don’t directly choose members of the legislature. Instead, they first choose electors, and those electors then choose the members of Parliament.

The idea behind this setup is pretty clear. It’s meant to put a small speed bump in front of raw, fast-moving public opinion. The many still hold the ultimate power—because they pick the electors—but they have to use that power through a smaller group that’s supposedly calmer, more informed, and more responsible than the crowd in a moment of excitement. Since the electors are already a selected group, the theory goes, they’ll be a bit above the average in judgment and character, and therefore make a more careful choice. And there’s an argument that sounds sensible on its face: it takes less knowledge to decide which neighbor can be trusted to pick a good representative than it takes to judge which candidate is best qualified to be one.

Why the theory doesn’t deliver the way you’d hope

Even if this indirect method slightly reduces the risks of popular rule, it also weakens the benefits—and that downside is much more certain. For the system to work the way its defenders imagine, voters have to adopt the mindset the theory requires. That mindset is not, “Who should represent me?” but, “Who should I empower to choose my representative?”

In other words, the voter is supposed to treat the first-stage vote like granting a general power of attorney: pick someone you personally respect and let them decide the political question for you.

But if voters really behave that way, the system destroys one of the main reasons to give them the vote at all. Voting isn’t only about producing an outcome; it’s also a civic education. It can build public spirit, draw people into public affairs, and exercise their judgment. A two-stage system asks the primary voters to do the opposite: to stop thinking about public questions, candidates, and policies, and to focus instead on choosing a private individual to think on their behalf.

And there’s a deeper contradiction hiding inside the plan. If a voter is expected not to care who ultimately gets elected—or to treat that question as “not my business”—why would they care about the intermediate step that leads to it? Wanting a specific person to represent you is within reach of someone with only a modest amount of information and civic virtue. Wanting an elector because that elector will choose your preferred representative follows naturally from that. But taking real interest in selecting “the most worthy person to decide for me,” while deliberately refusing to think about what decision you want, demands an unusually abstract devotion to duty—doing the right thing simply because it’s right. That’s a high bar. People who can meet it are already the kind of citizens who can be trusted with a more direct form of political power.

So as a way to involve poorer citizens—those with less time, education, or political leisure—this is about the worst public function you could invent. It offers little emotional payoff, little obvious connection to anything that affects daily life, and very few natural incentives beyond a conscientious resolve to perform one’s duty. And if large numbers of voters actually cared enough about politics to value such a thin and indirect role, they probably wouldn’t remain content with anything that limited.

If you want advice, you don’t need a constitution to get it

Now suppose we grant the key claim behind indirect election: maybe a person who can’t judge candidates for Parliament can still judge the honesty and general competence of a trusted neighbor and delegate the decision to them. Even if that’s true, it doesn’t require a two-stage constitutional machine.

If someone genuinely wants another person to decide for them, they can simply ask that person privately, “Who should I vote for?” In that case, direct and indirect systems converge to the same result: the voter follows the judgment of a trusted guide, and every “advantage” of indirect election appears under direct election too.

The two systems only differ in practice if we assume the voter does want to choose directly, but is blocked by the law and therefore forced to act through an elector. But if that’s the voter’s mindset—if they care about who wins—then the law can’t really stop them. They’ll just pick an elector who’s a known supporter of the candidate they prefer, or who explicitly promises to vote for that candidate. And unless the country is politically asleep, that’s exactly how two-stage elections naturally behave.

The U.S. presidential election shows what happens in real life

This is basically how the U.S. elects its President. On paper it’s indirect: citizens vote for electors, and electors choose the President. In reality, the electors are chosen because they’re pledged to a particular candidate. People don’t vote for an elector because they admire the elector as a person. They vote for a ticket: the Lincoln slate or the Breckenridge slate.

And notice something important: the electors aren’t selected so they can roam the country and discover the best possible President. There would be something to say for the system if that were the aim. But it isn’t, and it won’t be—at least not until humanity adopts Plato’s strange but intriguing belief that the best person to wield power is the one most reluctant to take it.

In practice, the electors are just choosing among people who have already put themselves forward as candidates. The primary voters already know who those candidates are. If there’s any political energy in the country, voters will already have decided which candidate they want, and that will be the only thing that matters. Each side will present a ready-made list of electors, all committed to their candidate, and the “first-stage” choice becomes a simple question: which list do you support?

When two-stage elections can work well

Two-stage elections do work in one special case: when the electors aren’t chosen solely to be electors, but have other substantial responsibilities that dominate why they were elected in the first place. Those other duties prevent them from being handpicked merely as delegates pledged to cast a single predetermined vote.

This combination shows up in another American institution: the U.S. Senate, under the arrangement described here. The Senate functions like an upper house, meant to represent the states as states, and to protect the portion of their sovereignty they haven’t surrendered to the federal government. Because each state’s internal sovereignty is treated as equally sacred in an equal federation, every state—small Delaware or massive New York—sends the same number of senators: two.

Those senators aren’t elected directly by the people. They’re chosen by the state legislatures, which are elected by the people. But since these legislatures have major ongoing jobs—making state laws and overseeing the executive—citizens choose them mainly with those responsibilities in mind, not primarily based on how they might select federal senators. As a result, when those legislatures pick senators, they usually exercise genuine judgment, constrained only by the broad attention to public opinion that any democratic government must pay.

By this route, the Senate elections have often worked extremely well—arguably better than any other American elections—because the Senate tends to be filled with the most distinguished public figures who have become known through public life.

So it’s not true that indirect election is never useful. Under certain conditions, it may be the best system available.

But those conditions are hard to achieve in practice unless you have a federal system like the United States, where the choice can be entrusted to local political bodies whose normal work already touches the nation’s most important interests.

In a country like ours, the only roughly comparable bodies would be municipalities or other local boards. Yet few people would call it an improvement if the members for the City of London were chosen by the Aldermen and Common Council, or if members for a borough like Marylebone were chosen—openly and officially—by vestries of its parishes (as, in effect, they already are informally).

And even if those local bodies were much better than their critics think, the skills that make someone fit for the narrow and specialized duties of local administration are no guarantee that they can judge who belongs in Parliament. They probably wouldn’t do that job any better than voters do directly. Meanwhile, if “fitness to choose members of Parliament” became part of the criteria for selecting town councillors or vestrymen, you’d end up excluding many people who are excellent at local governance, simply because voters would feel pressure to pick officials whose national political opinions match their own. We already see a version of this: even the indirect political influence of town councils has pulled municipal elections away from their intended purpose and turned them into party contests.

A simple analogy makes the point. If your bookkeeper or steward were responsible for choosing your doctor, you probably wouldn’t get better medical care than if you chose your doctor yourself. But you would end up narrowing your choices for steward or bookkeeper to people you could safely trust not to endanger your health with their medical decisions.

Bottom line: indirect election doesn’t buy what it promises—and it adds new problems

Put it all together and the result is straightforward:

  • Anything genuinely good that indirect election can achieve can also be achieved under direct election.
  • Any supposed advantage that requires voters to stop caring about outcomes and merely “choose the choosers” is unlikely to happen in real life—and if it doesn’t happen under direct election, it won’t reliably happen under indirect election either.
  • Indirect election brings several disadvantages that direct election avoids.

One objection is purely mechanical, but real: it adds an extra, unnecessary gear to the machine.

More seriously, it’s worse at cultivating public spirit and political intelligence. And if it ever worked “as intended”—meaning primary voters actually left meaningful discretion to their electors—then voters would feel less personal connection to “their” representative, and representatives would feel less direct responsibility to constituents.

On top of that, concentrating the final decision in a small group of electors makes intrigue and corruption easier. In terms of bribery, it would push constituencies back toward the condition of the old small boroughs: you wouldn’t need to sway many people to guarantee victory.

Someone might respond, “But electors are accountable to the voters who chose them.” The problem is that electors don’t hold a continuing public office, and they have little to lose from casting a corrupt vote beyond the chance of not being chosen again—often a consequence they wouldn’t fear much. So enforcement would still have to rely mainly on anti-bribery penalties, and experience has already shown how weak that safeguard becomes in small electorates.

The danger scales with how much independent discretion the electors have. The only time they’d be reluctant to use their power for personal advantage is when they were elected under an explicit pledge—when they function as mere delegates who carry their constituents’ votes straight through to the final decision. But in that case, the double stage accomplishes nothing. The moment the two-stage system begins to have a real effect, it begins to have a bad effect. And that’s generally true of indirect election, unless you’re in circumstances like those of U.S. Senate elections described above.

A limited, temporary case for it

The strongest argument left for this contrivance is practical, not principled. In some political climates, it might be an easier compromise than schemes like plural voting—a way to give everyone some vote without letting sheer numbers alone dominate Parliament. For example, imagine expanding the electorate by adding a large, more select segment of the laboring classes, elected by the remainder. Under some circumstances, a two-stage arrangement might serve as a convenient temporary bargain.

But it doesn’t carry any political principle far enough to satisfy thoughtful people as a stable, long-term settlement.

X — Of The Mode Of Voting

The biggest question about any voting system is simple: should votes be secret or public? Let’s go straight to that.

It’s a mistake to make this a drama about “cowards” who hide. Secrecy is perfectly reasonable in plenty of situations—and in some, it’s absolutely necessary. Wanting protection from harms you can honestly avoid isn’t cowardice. And of course there are imaginable cases where secret voting beats public voting. My claim is narrower: in politics, those cases should be the exception, not the standard.

This is one of those topics where the spirit of an institution—what it trains people to feel and assume—matters as much as the mechanics. A secret ballot naturally suggests a particular story in the voter’s mind: that the vote is his personal possession, something handed to him for his own use and benefit, rather than something he holds in trust for the public.

But if voting really is a public trust, then a hard question follows: if the public is entitled to your vote, aren’t they entitled to know how you cast it? The secret ballot can plant a false and damaging idea—especially because many of its loudest modern champions have helped spread it. The earliest promoters may not have meant it that way, but ideas aren’t judged only by what their inventors intended; they’re judged by what they do in the minds of the people who adopt them. Mr. Bright and his democratic allies, for example, insist the franchise is a right, not a trust. If that belief takes hold widely, it creates a moral harm that can outweigh even the most optimistic estimate of the ballot’s benefits.

Because here’s the principle: no one can have a moral “right” to power over other people. Any such power, if society allows you to wield it, is—morally speaking—a trust. And voting is power over others. Whether you’re choosing a representative or acting as one, your decision affects people besides yourself.

Once you say “the vote is a right I own for my own sake,” you don’t get to complain when people act like owners. On what basis could you condemn someone for selling their vote? Or for using it to curry favor with whoever can reward them? We don’t demand that someone use their house, their investments, or any other private property solely for the public good—because those things really are theirs. The vote isn’t like that.

Yes, the franchise is owed to a person partly so he can protect himself from unfair treatment. But it’s owed on the condition that he also uses it—so far as his vote can—to protect everyone else from the same unfairness. His vote isn’t a matter of personal whim any more than a juror’s verdict is. It’s a duty: he must cast it according to his most careful, conscientious judgment about the public good.

Anyone who treats voting as a personal tool is not fit to vote. For that person, the franchise doesn’t elevate the mind; it corrupts it. Instead of encouraging public spirit and a sense of responsibility, it feeds the temptation to use a public function for private interest, entertainment, or impulse—the same kind of motive, just on a smaller scale, that drives despots.

And people take cues from what society seems to expect of them. If society hands someone a public function and behaves as though it’s “his to use,” he will almost certainly come to feel exactly that. The standard people aim at is usually the one they think others set for them. So when voting is secret, the interpretation most voters will slide into is: “I don’t owe explanations to anyone who can’t see what I did. I can vote however I like.”

That’s why you can’t argue from the use of secret ballots in clubs and private societies to their use in parliamentary elections. In a club, a member really does have no obligation to weigh the interests of outsiders. His vote simply says whether he wants to associate with someone. On that question, everyone agrees his personal preference should decide—and it’s better for everyone, including the rejected person, if he can decide without provoking a quarrel.

There’s another difference: in a club, secret voting doesn’t naturally push people into lying. The members are social equals, and it would be considered rude for one to interrogate another about how he voted. Political elections are nothing like that, and are likely to remain nothing like that as long as the social conditions that create the demand for secrecy remain—conditions where one person feels so superior to another that he thinks he has the right to dictate the other’s vote. In that world, silence or evasiveness is almost guaranteed to be read as proof you voted “the wrong way.”

In any political election—even under universal suffrage, and even more obviously under a limited franchise—the voter has an absolute moral obligation to think first about the public interest, not personal advantage. He must vote by his best judgment as though the entire election depended on him alone.

Once you grant that, a first, obvious conclusion follows: like other public duties, voting ought to be performed in public view, under public criticism. Everyone has a stake in whether this duty is done well, and everyone has a legitimate grievance if it’s done dishonestly or carelessly. Of course, no moral rule in politics is sacred in every imaginable circumstance; sometimes stronger considerations override it. But this rule has so much weight that the only cases that justify departing from it should be sharply exceptional.

And those exceptional cases do exist. It can happen that if you try to make the voter responsible to the public by exposing his vote, you don’t actually make him answerable to “the public” at all. You make him answerable to some powerful person whose interests run even more directly against the community’s good than the voter’s own selfishness would. If that’s true for a large share of voters—if they live under something close to political servitude—then the ballot may be the lesser evil. When voters are effectively slaves, you can tolerate a lot if it helps them loosen the chains.

The strongest argument for the ballot is when the harmful power of the Few over the Many is growing. In the last centuries of the Roman Republic, the case for the ballot was overwhelming: the oligarchy grew richer and harsher each year, the people poorer and more dependent, and stronger barriers were needed to stop the franchise from becoming just another tool in the hands of the powerful and unscrupulous. Likewise in Athens, the ballot—where it existed—did real good. Even in relatively stable Greek city-states, a single unfairly obtained popular vote could destroy freedom for a time. And even if the Athenian voter wasn’t usually dependent enough to be routinely coerced, he could still be bribed—or terrified by the lawless violence of tight groups of wealthy young men, which were hardly unknown even there. In that context, the ballot was an instrument of order, helping produce the eunomia—good civic rule and discipline—for which Athens was famous.

But in the more developed modern states of Europe, and especially in England, the power to coerce voters has weakened and continues to weaken. Today, the bigger danger is less “what others can force the voter to do” and more “what the voter is tempted to do on his own.” The threats now come from self-interest, class interest, and ugly emotions that live inside the voter—either as an individual or as a member of a group. If you protect him from outside pressure by secrecy, but in the process remove every restraint of shame and accountability that might hold back his worst impulses, you trade a smaller and shrinking evil for a larger and growing one.

On this point—especially as it applies to England at the time—I once wrote the argument out in a pamphlet on Parliamentary Reform, and I don’t think I can improve on it, so I’ll restate it here:

Thirty years ago, the main evil in parliamentary elections was precisely what the ballot would block: coercion by landlords, employers, and customers. Now, I believe the larger source of wrongdoing is the voter’s own selfishness and biased loyalties. Bad votes are more often driven by personal gain, class advantage, or some petty feeling than by fear of retaliation. And the ballot would let the voter surrender to those motives without the discomfort of public judgment.

Not long ago, the rich and powerful classes effectively ran the government. That domination was the country’s central grievance. The habit of voting at an employer’s or landlord’s command was so deeply rooted that only a rare burst of popular enthusiasm—usually for a good cause—could shake it. A vote against those pressures was generally a public-spirited vote. And even when it wasn’t, it was still likely to be a good vote, because it pushed back against the worst evil of the era: oligarchic control. In that situation, it would have been a real gain if voters could have voted freely and safely—even if their choices weren’t always wise—because it would have broken the yoke of the ruling power: landlords and the brokers of rotten boroughs.

The ballot wasn’t adopted. But events have steadily done much of the ballot’s work anyway. The political and social conditions that matter here have changed radically and keep changing. The upper classes are no longer the masters of the country. You’d have to ignore everything happening around you to think the middle classes still automatically bow to the wealthy, or that working people are as dependent on the upper and middle ranks as they were twenty-five years earlier. Recent events have taught each class its collective strength, and they’ve put individuals in lower ranks in a better position to stand up to those above them.

In most cases now, votes aren’t coerced, because the old tools of coercion aren’t as available. Votes are increasingly the expression of the voter’s own political tastes and personal preferences. Even the worst features of the current system prove it. The growth of bribery—so loudly denounced, and spreading to places once free of it—shows that local power-brokers no longer control everything. People vote to please themselves, not merely to please someone else. There’s still plenty of servile dependence in counties and small boroughs, no question—but the spirit of the age runs against it, and events keep wearing it down. A good tenant can now feel he matters to his landlord almost as much as the landlord matters to him. A successful shopkeeper can afford not to cling to any one customer. With each election, the vote belongs more and more to the voter’s own will.

So what needs emancipating now is less the voter’s circumstances and more the voter’s mind. Voters are no longer just the instruments of other men’s will—mere levers that deliver power to a controlling oligarchy. The voters themselves are becoming the oligarchy.

And that’s exactly why publicity becomes indispensable. The more a voter’s choice is determined by his own will (not a master’s), the more his position resembles that of a member of Parliament—and we don’t let legislators hide their votes. As long as part of the community remains unrepresented, the Chartist argument against combining a restricted suffrage with the ballot is hard to answer. The current electorate—and most of the people a likely Reform Bill would add—come largely from the middle class, which has its own distinct class interests, separate from working people, just as landlords and major manufacturers do. Even if you extended the vote to all skilled laborers, their interests could still diverge from the unskilled.

Push the thought experiment further. Suppose you extended suffrage to all men—what used to be misleadingly called universal suffrage, and what some now call “manhood suffrage.” Even then, the voters would still share a class interest that sets them apart from women. So imagine the legislature is debating an issue that directly affects women: whether women should be allowed to earn university degrees; whether the mild punishments given to men who beat their wives nearly to death should be replaced with something that actually protects victims; or whether married women should have legal rights to their own property, as American states are increasingly guaranteeing in their constitutions. Aren’t a man’s wife and daughters entitled to know whether he supports a candidate who will fight for—or against—those changes?

A common objection is predictable: “All of this only matters because the franchise is unjust. If nonvoters’ opinions would pressure voters into acting more honestly or wisely, then those nonvoters must be more fit to vote than the voters—and therefore they should have the vote. Anyone fit to influence electors is fit to be an elector. And if we make everyone electors, then everyone should also have the ballot to protect them from improper pressure.”

That sounds persuasive—and I once thought it settled the question. I now think it’s wrong.

Not everyone who is fit to influence voters is therefore fit to be a voter. Voting is a far greater power than merely shaping opinion. People may be ready for the smaller political role before they’re safe to trust with the larger one. The views and desires of the poorest and least educated laborers can be valuable as one influence among many on voters and legislators. And yet it could still be dangerous to give that group decisive power immediately, given their current level of education and civic habits.

In fact, this indirect influence—nonvoters shaping the thinking of voters—is one of the ways societies transition peacefully toward wider suffrage. As it grows, it makes each new expansion of the franchise less abrupt and less frightening, until the moment arrives when a further extension can happen calmly.

But there’s an even deeper point that political thinkers often miss. It’s simply not true that public scrutiny is only useful when the public is wise enough to judge perfectly. It’s a shallow view of public opinion to think it helps only by forcing everyone into obedient conformity. Being seen—having to answer to others, having to defend your reasons—matters most of all for the people who disagree with the crowd, because it forces them to be sure of their footing when they resist pressure.

Most people—unless they’re swept up in a sudden wave of anger—won’t do something they fully expect to be harshly condemned for unless they’ve already decided, in a settled way, that they’re going to do it. That kind of stubbornness is usually a sign of deliberation. In all but the truly vicious, it tends to grow out of sincere, strongly held convictions.

And even when conviction isn’t in play, there’s a simpler pressure that keeps power from being abused: the expectation of having to explain yourself later. The mere prospect of giving an account of your conduct pushes you toward actions you can at least describe without shame. If someone thinks “basic decency” isn’t a serious brake on misuse of authority, they haven’t spent enough time watching people who feel no need to keep up appearances.

That’s why publicity—being seen, being answerable—is invaluable even when it achieves only the bare minimum. At its simplest, it does two things:

  • It blocks actions that simply cannot be defended in any plausible way.
  • It forces deliberation: before you act, you have to decide what you’ll say when you’re asked why you did it.

Why I Still Don’t Want the Ballot—even under Universal Suffrage

A common reply goes like this: “Maybe secrecy is risky now, when only some people vote. But someday everyone will be educated and fit to vote; men and women alike will have the franchise because they’ve earned it. Then there’s no danger of class legislation, because the electorate will just be the nation. Even if some people vote selfishly, the majority won’t have any narrow motive. And since there won’t be non-voters to answer to, the ballot will filter out only the corrupt pressures and do pure good.”

I don’t accept that—even in the best imaginable version of universal suffrage.

1) In that future, secrecy wouldn’t be necessary

Picture what the argument itself assumes: a broadly educated population, with every adult holding a vote. In that world, public opinion would rule more strongly than it already does. Even today—when only a minority votes and much of the public is poorly educated—everyone can see that opinion is the final authority in practice.

So it’s fantasy to think that, over a community where everyone reads and everyone votes, landlords and wealthy people could regularly force the electorate to act against its own inclinations in a way the public couldn’t easily shake off.

2) But publicity would still be necessary

Even if secrecy stopped being needed as protection, public accountability would still matter as much as ever. All human experience contradicts the notion that merely being part of the community, absent some stark and obvious conflict with the public interest, is enough to make people perform their public duties well without the stimulus and restraint of others’ judgments.

For most people, simply having “a share” in the public good isn’t enough to make them do their public duty. Even when they have no private interest pulling the other way, they still need external motives—encouragement, social expectation, and yes, the discomfort of potential disapproval.

3) People don’t vote as honestly in secret as in public

It also doesn’t follow that if everyone voted, they’d vote as honestly behind a curtain as they would in the open. The claim that “when the electors are the whole community, they can’t vote against the community’s interest” sounds profound, but it doesn’t really say much once you examine it.

The community, taken as a whole, can only have its collective interest. But individuals can have countless interests that diverge from it—because a person’s “interest” is whatever they feel invested in.

Everyone carries many competing motives: likes and dislikes, resentments and loyalties, selfish impulses and better ones. No single one of these automatically defines “a person’s interest.” People are good or bad largely according to which class of motives they choose to obey.

And those motives shape votes. For example:

  • Someone who’s a tyrant at home is likely to sympathize with tyranny elsewhere—at least when it isn’t aimed at him—and to feel little sympathy for resistance to it.
  • An envious person can vote against a plainly deserving candidate simply because that candidate has a reputation for virtue (like the old story of voting against Aristides “the Just”).
  • A selfish person may choose a small personal gain over his share of the larger public benefit from a good law, because private advantages are the ones he’s trained to focus on—and the ones he can most easily measure.

Many voters, in fact, carry two sets of preferences:

  • what they privately want (or what would benefit them, their group, their grudges, their rivalries), and
  • what they recognize as publicly defensible.

The second is the only set most people want to be seen endorsing. Even when their neighbors aren’t any better than they are, people still care about showing the better side of themselves.

That’s exactly why secrecy changes behavior. People will cast petty, dishonest, or mean-spirited votes—out of greed, malice, spite, personal rivalry, or the prejudices of class or sect—more readily in private than they will when their community can see them.

And sometimes, the main thing holding a crooked majority back is a kind of unwilling respect for the opinion of an honest minority. Think about places where majorities have tried to dodge obligations or repudiate commitments: isn’t there at least some restraint in the shame of having to look an upright person in the face afterward?

If the ballot throws away all of that—even under the circumstances most favorable to it—then you’d need a far stronger case than can now be made to justify adopting it. And the case for its necessity is getting weaker over time, not stronger.


Voting Papers: Useful, But Don’t Turn Them Into Private Deals

On the other debatable issues around voting methods, we don’t need so many words.

A system of personal representation like Mr. Hare’s requires voting papers. Fine. But one thing is non-negotiable: the voter should sign that paper in a public polling place, or—if no polling place is conveniently reachable—at some office open to the public, in the presence of a responsible public officer.

The alternative some people suggest—letting voters fill out their papers at home and send them by post, or having a public officer pick them up—would be disastrous.

Why? Because it flips the moral physics of voting upside down. The act would happen:

  • without the healthy influences that come from being in public, and
  • with every unhealthy influence that thrives in private.

In private, a briber could literally watch the bargain being carried out. An intimidator could see the coerced vote delivered, on the spot, with no chance to reconsider. Meanwhile the good counterweights—the presence of people who know the voter’s real beliefs, and the bolstering solidarity of those who share the voter’s politics—would be shut out.


Make Voting Easy—and Make Elections Cheap

Polling places should be numerous enough that every voter can reach one easily. And on no pretext should we tolerate the conveyance of voters to the polls at a candidate’s expense. That practice is too close to buying influence.

If anyone needs help getting to the polls, it should be limited to the truly infirm—and only with a medical certificate—and the cost should fall on the state or local community, not on candidates.

More broadly, the basic machinery of elections—polling stations, clerks, and the rest—should be paid for publicly. And it’s not enough to say candidates shouldn’t have to spend heavily to get elected. They shouldn’t be allowed to. A candidate ought to be permitted only a small, tightly limited personal expense.

Mr. Hare proposes requiring a £50 deposit from anyone who puts their name forward, to discourage vanity candidates—people with no real intention or chance—who might siphon off votes needed to elect serious contenders. That seems sensible.

There is, however, one cost that’s hard to avoid and unreasonable to ask the public to cover for everyone who wants it: making your candidacy known through basic announcements—advertisements, posters, circulars. For necessary expenses of that kind, the £50 deposit, if candidates are allowed to apply it to those purposes, should usually be enough. If not, raise the limit to £100.

But beyond that deposit—£50 or £100—any spending out of the candidate’s own pocket should be illegal and punishable. Friends may still spend money on committees and canvassing; it’s difficult to prevent. But the candidate’s own expenditure beyond the fixed limit should be treated as a crime.

If public opinion had any real reluctance to excuse lying, we should even require each member, on taking a seat, to swear—or pledge on honor—that they have not spent (and will not spend) anything beyond the allowed sum, directly or indirectly, to secure their election. If the pledge were proved false, they should face the penalties of perjury.

That threat matters because it signals seriousness. When lawmakers show they truly mean it, public opinion tends to follow. It stops treating a grave offense against society as a minor peccadillo. And once opinion shifts, an oath or honor pledge becomes genuinely binding—because people tolerate false denials only when they already tolerate the wrongdoing being denied.


Why Electoral Corruption Persists

This is painfully obvious in the case of electoral corruption. Political leaders have never made a truly serious attempt to end bribery, because they’ve never truly wanted elections to be inexpensive.

Expensive elections protect the wealthy by narrowing the field. They exclude countless potential rivals. And anything that keeps Parliament primarily accessible to rich men gets defended—no matter how harmful—on the grounds that it’s “conservative.”

This attitude runs deep among legislators of both major parties, and it’s almost the only point on which I think their intentions are genuinely bad. They care relatively little about who votes so long as they can ensure that only people of their own social class can realistically be elected.

They trust the solidarity of their class. And they can rely even more on the deference of newly rich outsiders who are trying to enter it. Under that arrangement, even a very democratic electorate won’t produce policy truly hostile to the sentiments or interests of the wealthy—so long as genuinely democratic candidates can be kept out of Parliament.

Yet even by their own lights, this habit of “balancing evil with evil” is miserable policy. The goal should be to bring the best people from both classes together under conditions that encourage them to put class instincts aside and pursue the common good—rather than letting the “many” indulge full class resentment in elections, only to be forced to act through representatives steeped in the class feelings of the “few.”


Don’t Turn Public Service Into Something You Buy

There are few ways political institutions do more moral harm—through their spirit, not just their rules—than by treating public office as a favor to be granted: something a person seeks for personal reasons, and even pays for, as if it were a source of private profit.

People aren’t fond of paying large sums for the privilege of doing hard work; so when a candidate pays heavily for a seat, the natural suspicion is that he expects to get something more out of it than the work.

Plato had a sounder instinct when he argued that the people most worth entrusting with power are often the ones most reluctant to take it, and that the most reliable motive for inducing the best people to shoulder the burdens of government is the fear of being governed by worse people.

Now imagine the voter watching three or four gentlemen—none previously known for lavish, selfless generosity—compete with one another in how much they’ll spend to tack “M.P.” onto their names. Is the voter likely to believe they’re doing it for his benefit? And if he forms a cynical view of their behavior, what kind of moral obligation will he feel about his?

Politicians like to sneer that an uncorrupt electorate is a dream. And as long as candidates themselves won’t be clean, they’re right—because voters take their moral tone from the people asking for their votes. So long as the elected member, in any form, pays for the seat, every attempt to make elections anything other than a selfish bargain will fail.

As long as the candidate and the surrounding social customs treat being an MP less as a duty to perform and more as a favor to request, no effort will make the ordinary voter feel that voting is also a duty—and that the only proper basis for choosing is personal fitness.


Why Paying MPs Isn’t the Fix People Think It Is

The same principle that demands we neither require nor tolerate election spending by candidates leads to another conclusion that can look, at first glance, like the opposite: it argues against paying members of Parliament.

Some propose paying MPs to open Parliament to people of all ranks. But if payment exists, it should be compensation for lost time or money, not a salary.

The supposed advantage of a salary—widening the pool of candidates—is mostly an illusion. No reasonable salary would lure people already thriving in other professions with strong prospects. So serving in Parliament would become a career in itself, pursued mainly for income, under all the demoralizing pressures of a job that is inherently precarious.

It would become especially attractive to low-grade adventurers. You’d end up with 658 officeholders and ten or twenty times as many would-be officeholders constantly bidding for votes—promising anything, honest or dishonest, possible or impossible—trying to outdo one another in flattery, in pandering to the meanest feelings, and in exploiting the most ignorant prejudices of the most vulgar part of the crowd.

The comic “auction” between Cleon and the sausage-seller is a fair caricature of what would be happening all the time. This would be a permanent blister applied to the most diseased parts of human nature: 658 prizes offered to the most successful flatterer and most skillful misleader of their fellow citizens. Even despotisms have rarely matched that kind of organized cultivation of vicious courtship.

There is, however, a practical alternative for rare cases. Sometimes a person with exceptional qualifications—qualities the public truly needs—may lack independent means from property or a profession. In that case, there’s the option of public subscription: their constituents can support them while they serve, as happened with Andrew Marvell.

That arrangement is not objectionable, because it won’t be offered merely for submissiveness. Groups don’t care enough about the fine distinctions between one sycophant and another to pay for the privilege of being flattered by a particular one. Support of this kind will be given only for striking personal qualities—qualities that, while not proof of fitness to represent the nation, at least suggest it, and in any case provide some guarantee of independence of mind and will.

One further question concerns how easy voting itself should be made. In small, local elections—especially the kind that mainly decide how a public fund gets handed out—the real danger is that the outcome ends up controlled by the tiny group of people who are intensely involved. Why? Because when the stakes are narrow and the public doesn’t feel much urgency, the people most eager to “get active” are often the ones who expect to cash in privately. In that situation, it can be smart to make participation as easy and low-effort as possible for everyone else, even if only so a broader group can drown out those narrow interests.

National government is different. Here, the goal shouldn’t be to coax the indifferent into voting by making it effortless. The goal should be the opposite: to keep decision-making out of the hands of people who don’t care enough to think.

Here is why this matters. Someone who can’t be bothered to show up to vote is exactly the kind of voter who, if you remove even that small inconvenience, will hand over a ballot to the first person who asks—or trade it for some trivial favor. If you don’t care whether you vote, you probably don’t care much how you vote. Yet that careless vote counts for just as much, and goes just as far in deciding the result, as one that reflects a person’s considered beliefs and long-term aims. That is why a person in that frame of mind has no moral right to vote at all.

Making candidates commit—publicly and enforceably

Several experienced election witnesses supported a practical idea: require members of Parliament to make a formal declaration—something they affirm openly, with real penalties attached if they violate it. Their view was simple: if the law makes clear it’s serious, the system will actually work. And if a conviction for bribery carried a personal stigma, it would shift public opinion and make bribery less socially tolerable.

One objection raised was that it’s unfair to treat a merely promissory oath (a pledge about future conduct) like an assertory oath (a statement about present facts) by attaching perjury penalties to it. But that distinction doesn’t carry much weight. In court, witnesses take a promissory oath too—they promise to tell the truth. And the comeback—that a witness promises something immediate while a member would be promising for the future—only matters if we imagine the person might somehow forget the obligation, or break it without noticing. In a case like this, that’s not a realistic concern.

The charity loophole: when “giving” becomes buying

A more serious problem is how election spending often disguises itself. One common form is “generous” subscriptions to local charities and projects. It would be a harsh rule to ban a local member from donating to charity in their own area. When the giving is genuine, the goodwill it earns is simply one of the unfair-but-real advantages that comes with wealth.

But much of the real damage is that these contributions can become bribery wearing polite clothes—money spent to “keep up the member’s interest.” To block that route, the member’s declaration should include a clear financial rule:

  • Any money the member spends in the locality, or for local people or local purposes (with the possible exception of ordinary personal travel and lodging costs) should go through an election auditor.
  • The auditor—not the member, not the member’s friends—should then apply the money to the stated, legitimate purpose.

Who should pay for lawful election costs?

Finally, there’s the principle that legal election expenses should not be paid by the candidate personally, but charged to the locality itself. Strong testimony supported that approach.

A caution about creating paid political operators

As one commentator put it: if you create a cash incentive for the lowest and most desperate stratum to make politics their “career,” you effectively open the door to the professional demagogue. Nothing is more dangerous than making it the private interest of a swarm of energetic people to push government toward its easiest corruption. Human weaknesses look bad enough on their own; imagine what happens when they’re played like an instrument by a thousand skilled flatterers. If there were hundreds of seats—each with a guaranteed, even modest, income—available to anyone who could persuade the crowd that ignorance is just as good as knowledge (or better), the odds are frighteningly high that many would believe it and act on it.

XI — Of The Duration Of Parliaments

How long should members of Parliament serve before they have to face voters again?

The basic principles are straightforward; the hard part is picking the right number.

On one side, you don’t want an MP to sit so long that they stop feeling accountable. Give someone too much job security and they may start coasting—treating the work casually, drifting toward decisions that benefit them personally, and skipping the kind of open, public back-and-forth with constituents that makes representative government more than a slogan. Whether an MP agrees with their voters or argues with them, those regular conversations are part of the point.

On the other side, you also don’t want elections so frequent that an MP can only ever be judged on yesterday’s headline. A decent term gives voters a chance to evaluate a whole pattern of conduct, not one vote taken out of context. It also gives representatives enough room for independent judgment—as much discretion as a free government can safely allow, while still keeping real democratic control. And in practice, democratic control works best when it’s exercised after enough time has passed for a representative to show what kind of person they are and what they can do. The goal isn’t to reward the MP who merely obeys and repeats the electorate’s opinions; it’s to make room for someone who can earn respect by thinking, explaining, persuading, and sometimes resisting.

There’s no universal rule that draws a clean line between these aims. The right term depends on the surrounding political environment.

When democracy is weak and easily drowned out, frequent elections matter. If a representative leaves a democratic constituency and immediately enters an elite social world—courtly, aristocratic, and full of subtle pressures—those influences can steadily pull them away from the popular point of view. They can cool toward the interests of the people who elected them and slowly forget what those people wanted in the first place. In that setting, forcing MPs to return often for renewal of their “commission” is essential. Even three years is close to the limit; anything longer is unacceptable.

When democracy is the dominant force and still gaining strength, the problem flips. The risk isn’t that MPs will forget the voters; it’s that they’ll become nervously obedient to them. In a world of intense publicity—especially with a constantly watching press—representatives know their actions will be reported, debated, and judged immediately. They are always, in their constituents’ eyes, either rising or falling. And that same constant exposure keeps public opinion and democratic pressure continually active in the MP’s own mind. Under those conditions, a term shorter than five years would do little except train representatives into timid subservience.

That shift in English political life helps explain a change in fashion: annual parliaments were a major demand of the more advanced reformers forty years earlier, but now you rarely hear the idea. And there’s another practical point worth noticing. Whether a parliamentary term is short or long, the last year of any term puts members into campaign mode—exactly the state they would always be in under annual elections. If you made terms very brief, you’d effectively create annual parliaments for a large share of political time, because so much of each term would be spent in that anxious, election-focused posture.

Given how things now stand, a seven-year maximum is longer than necessary, but it may not be worth changing for the modest gains you’d likely get—especially because the ever-present possibility of an early dissolution keeps MPs mindful of their constituents anyway.

Even if we agreed on the ideal length of the mandate, another design question comes next: should each member’s seat expire on the anniversary of their own election, so that the House is renewed gradually rather than all at once?

At first glance, that might sound sensible. But unless some distinct practical advantage recommends it, the idea is outweighed by stronger objections.

One major problem is responsiveness. If members cycled out one by one, there would be no quick way to remove a majority that had taken a course the nation finds intolerable. By contrast, the certainty of a general election after a fixed period (which is often already close to expiring), combined with the possibility of a general election at any time when the minister wants it—or thinks it would play well with the public—helps prevent a serious and lasting split between the mood of Parliament and the mood of the country. If a majority could always count on having years left, it could drift far from public feeling and stay there.

Gradual replacement doesn’t solve that. If new members entered “drop by drop,” they’d be more likely to take on the habits and assumptions of the existing majority than to change them. Yet it’s crucial that the overall sense of the House broadly matches the sense of the nation—even while we also want room for exceptional individuals to speak freely, including when their views are unpopular, without automatically losing their seats.

There’s also a second, weighty argument against partial renewal: it’s valuable to have a periodic, unmistakable national test of strength. A general election functions like a full muster of opposing forces. It measures the state of public opinion and shows, beyond serious dispute, how strong different parties and ideas really are. Partial elections don’t provide that clarity, even when a large slice of the assembly turns over at once, as in some French constitutions where a fifth or a third leaves together.

The reasons for giving the executive the power to dissolve Parliament belong to a later chapter, when we take up the structure and duties of the executive in a representative government.

XII — Ought Pledges To Be Required From Members Of Parliament?

Should a legislator be bound by instructions from the people who elected them? Or should they be free to use their own judgment? Put differently: is an elected representative supposed to be a community’s microphone—an “ambassador” sent to repeat back whatever the district already believes—or a hired expert who’s authorized not just to act for the voters, but to think for them about what should be done?

Both ideas have real supporters, and different governments have leaned into each. In the old Dutch United Provinces, members of the States General worked as straightforward delegates. The idea was taken so literally that if a major issue came up that wasn’t covered by their instructions, they had to go back home for guidance—just as a diplomat checks with their government before taking a big step.

Most modern representative systems go the other way. By law and custom, a member of Parliament is allowed to vote according to their own sense of what’s right, even when their constituents disagree. Still, a competing “floating” expectation hangs around: many representatives—sometimes without caring about popularity or re-election—feel a genuine moral pressure to vote the way their voters clearly want on contested questions. So if we set aside specific laws and national traditions, which model is actually the right one?

This isn’t really a question about how to write a constitution. It’s a question about what you might call constitutional morality—the ethics of how representative government should behave. It’s less about institutions on paper and more about the mindset voters ought to bring to the job of voting: what moral obligations do electors have, and what should they think they’re entitled to demand?

Because no matter how a representative system is designed, voters can always turn it into plain delegation if they want to. As long as people are free to vote—or not vote—and free to vote as they choose, you can’t stop them from attaching conditions to their support. A constituency can refuse to elect anyone who won’t pledge to back all their views. They can even demand, if they feel like it, that the representative promise to consult them before voting on any major issue that wasn’t anticipated. That kind of pressure can reduce a representative to a mouthpiece, or force them—out of honor—to resign when they no longer agree to play that role.

And because voters can do this, any serious theory of constitutional government has to assume they might. The core assumption of free institutions is not that power will always be abused, but that it has a natural tendency to be used for the purposes of whoever holds it. The whole point of constitutional design is to build in safeguards against that tendency. So even if we think it’s misguided or shortsighted for voters to turn representatives into delegates, it’s a plausible use of electoral power. A representative system should therefore be built so that—even if voters insist on pledges—their insistence can’t produce something that should never be in anyone’s hands: class legislation, written to advantage one group at everyone else’s expense.

Calling this “morality” doesn’t make it a soft issue. In fact, questions of constitutional morality can matter just as much as questions of constitutional machinery. Some governments exist only because key players habitually restrain themselves. Others are tolerable only because unwritten limits are honored.

You can see the pattern across systems:

  • In a fully one-sided system—pure monarchy, pure aristocracy, pure democracy—shared moral maxims are often the only barrier against the regime running to its natural extremes.
  • In imperfectly balanced systems, where limits exist on paper but the strongest power can blow past them (at least for a while), respect for constitutional checks survives mainly because public opinion sustains it.
  • In well-balanced systems, where power is split and each part can defend itself with weapons as strong as the others can use to attack, government still depends on restraint. It works because each side usually chooses not to use its maximum force—unless provoked by someone else doing the same.

In that sense, constitutional morality is often what keeps a constitution alive in practice.

The pledge question isn’t usually a life-or-death issue for representative government. But it matters a lot for whether representative government does good work. The law can’t tell voters what principles should guide their choice. But it makes a huge difference what voters believe their principles ought to be—especially on this point: should voters make support conditional on a representative promising to stick to specific opinions laid down by the constituency?

Anyone who’s followed the argument so far can probably guess where it leads. From the beginning, we’ve insisted on two equally important goals:

  1. Responsibility: political power should answer to the people for whose benefit it exists.
  2. Capacity: government should, as much as possible, gain the advantage of superior intellect—minds trained by deep study and practical discipline for the job.

If that second goal matters at all, it comes with a cost. A sharper mind and a more educated judgment are useless if they never produce conclusions different from those reached by people who haven’t studied. If we genuinely want representatives who are intellectually above the average voter, we have to expect that they will sometimes disagree with the majority of their constituents—and that, when they do, their judgment will often be the better one. From that, one lesson follows plainly: voters usually act unwisely if they demand total agreement as the price of keeping a seat.

But applying that principle is genuinely hard. Here’s the problem in its strongest form.

Yes, voters should ideally choose someone more informed than they are. But that person must also be accountable to them—meaning the voters have to judge whether the representative is doing the job well. How can they judge except by using their own opinions as a yardstick? And how can they even choose in the first place without doing the same?

Voters can’t safely choose by sparkle alone—by mere brilliance or showy talent. Ordinary people have limited tools for judging ability ahead of time, and what tools they have mostly measure presentation: the talent for speaking and writing. Those skills don’t reliably tell you whether the ideas are sound. If voters are supposed to put their own opinions “on hold,” what standard is left for deciding who will govern well?

Even if voters could identify the most capable person with perfect accuracy, they still wouldn’t always be justified in handing them a blank check. The smartest candidate may be on the opposite side of the big moral and political fights of the day. One might be a conservative where the constituency is liberal, or liberal where they are conservative. The central disputes might be religious or institutional, and the candidate’s convictions might cut sharply against the voters’—and vice versa. In that situation, greater ability might simply mean the person can push a course the voters sincerely believe is wrong more effectively and more aggressively. Voters may reasonably think it matters more to keep their representative aligned with what they see as duty on those core questions than to gain above-average talent.

There’s another issue too: representation isn’t only about getting someone “able.” It’s also about ensuring that different moral standpoints and ways of thinking are actually present in the legislature. Every substantial current of opinion ought to be felt there. If the constitution already ensures that competing outlooks will also be represented, then a constituency may reasonably focus on making sure its outlook isn’t missing. In some contexts, that can be the most important thing for voters to secure.

And sometimes voters may feel they need to tie a representative’s hands—not out of spite, but to keep them faithful to what the voters believe is the public interest. In a world where you could choose from an unlimited pool of honest, fair-minded candidates, this would be less tempting. But in the real world, elections are expensive, and social structure pushes constituencies—especially poorer ones—toward choosing among candidates from a much higher social class, with different economic interests. If that’s your situation, why would you be expected to “trust their discretion” unconditionally? Can we really blame working-class voters who face a choice between two or three wealthy men for demanding pledges on the measures they see as basic proof that the candidate isn’t simply serving the interests of the rich?

And even within the same political camp, not everyone gets their first choice. Some voters may have to accept the candidate selected by a majority of their own side. Their preferred candidate might be hopeless—but their votes might still be necessary for the majority’s candidate to win. In that case, the only practical leverage they have may be to make their support conditional on the candidate promising specific commitments.

So you can see why this gets tangled. It matters that voters elect people wiser than themselves and allow those people room to use that wisdom. Yet it’s also unavoidable that voters will rely heavily on their own convictions when deciding who counts as “wise,” and whether that wisdom has been proven by conduct. Because of that, it’s hard to lay down a strict moral rule for voters. The real outcome depends less on any neat formula than on the general character of the electorate—especially on whether they have a habit of respecting genuine intellectual and moral excellence.

People—and whole societies—who deeply appreciate the value of superior judgment usually learn to recognize it by signs other than simple agreement. They can spot it even when the person disagrees with them on plenty of points. And once they recognize it, they want to secure it; they won’t be eager to dictate opinions like orders to someone they regard as wiser.

But some electorates have the opposite temperament: they don’t look up to anyone. They assume no single person’s opinion is much better than their own—or at least not better than the combined opinion of a hundred or a thousand people like them. When voters think this way, they will elect no one who isn’t (or doesn’t loudly claim to be) a mirror of their views, and they will keep that person only as long as they keep reflecting those views in action. And then every aspiring politician, as Plato observed, will try to shape themselves after the Demos—making themselves as similar to the crowd as possible.

A full democracy has a strong tendency to cultivate that mindset. Democracy isn’t naturally friendly to reverence. It’s a good thing that it dissolves automatic deference to social rank. But in doing so, it also shuts down one of society’s most common “schools” for learning reverence in human relations. More importantly, democracy leans so heavily on the idea that people are equal in the ways that matter for political rights that it can undervalue the ways in which one person may genuinely deserve more weight than another—especially in knowledge and judgment. For that reason, among others, I’ve argued that institutions should give the opinions of the more educated some additional weight—not because education makes someone morally superior, but to help set a public tone that recognizes real differences in competence. I would still defend extra votes for demonstrated educational attainment, even if it had no direct effect on outcomes, simply because it would shape public feeling.

When an electorate does have an adequate sense that one person’s practical and intellectual value can be vastly greater than another’s, they won’t be at a loss for ways to identify the best candidates. The clearest signal is real public service:

  • holding serious responsibilities and performing well in them, with results that justify the decisions made
  • authoring measures whose effects show good design
  • making predictions that events repeatedly confirm and rarely contradict
  • giving advice that, when followed, tends to work—and when ignored, tends to lead to trouble

None of these signs is perfectly reliable. But we’re looking for evidence that ordinary people can reasonably use. Voters should avoid betting everything on a single indicator unless the others support it. And when judging whether someone’s “success” really shows wisdom, they should pay close attention to the general verdict of disinterested, well-informed observers who actually understand the subject.

These tests apply most clearly to people who’ve already been tried. But “tried” doesn’t only mean tested in office. It also includes those tested in thought: people who have written or spoken about public affairs in a way that shows sustained study and serious understanding. As political thinkers, they may earn much the same kind of confidence as practical statesmen.

When voters must choose among people who are entirely untested, the best substitutes are:

  • a strong reputation for ability among those who know them personally
  • endorsements and trust from people the public already has reason to respect

Constituencies that truly value mental ability, and actively hunt for it, will usually manage to elect people better than mediocre—often people they can trust to manage public affairs with independent judgment. For such representatives, demanding that they surrender their judgment on command—at the insistence of those less informed—would be an insult.

But if candidates of that kind, honestly sought, really aren’t available, then voters are justified in taking other precautions. You can’t reasonably expect people to set aside their own convictions unless they’re doing it in order to be guided by someone whose knowledge and judgment are genuinely superior to their own.

They’d be wise, even in that case, to remember something basic: once you’ve elected a representative who takes the job seriously, that person is often in a better position than most voters to realize they were wrong at first—and to fix it. They’ll hear arguments, see evidence, and deal with consequences up close. That’s why voters generally shouldn’t demand a promise like, “Never change your mind—or if you do, resign.” The one exception is when circumstances force you to pick someone whose fairness you don’t fully trust.

But things look different when you’re choosing a newcomer—someone you don’t know, and who hasn’t been clearly vouched for by any credible authority. In that situation, it’s not realistic to expect voters not to treat agreement with their own views as the main qualification. Still, there’s a line: it should be enough that they don’t treat a later change of mind—honestly admitted, with reasons plainly stated—as an automatic reason to withdraw support.

Even when the candidate is obviously capable and has a strong reputation, voters shouldn’t switch off their own judgment entirely. Respect for intelligence isn’t supposed to turn into self-erasure. But if the disagreement isn’t about the core foundations of politics, a sensible voter should pause and think: if an able person disagrees with me, there’s a real chance I’m the one who’s mistaken. And even if I’m not, it can still be worth yielding on nonessential points for the enormous benefit of having a truly competent person representing me on the many questions where I’m not qualified to decide well.

People often try to have it both ways: they want the talent and they want the talented person to echo their own views on every disputed point. They push the able representative to “meet them halfway” by giving up his own judgment on the issues where they differ. But for an able representative to accept that bargain is a betrayal of what makes him valuable. One of the most important duties of real intellectual and moral independence is to stand by the position that’s unpopular, and to keep serving the very opinions that most need help—because they’re being shouted down. A person of conscience and proven ability should demand full freedom to act according to his own judgment, and refuse to serve under any other conditions.

At the same time, voters have a right to know what they’re getting. They’re entitled to a clear account of how the candidate means to act—what beliefs and principles, on the matters relevant to public duty, will guide his conduct. If some of those beliefs are unacceptable, it’s on him to persuade them that he still deserves their trust. And if voters are wise, they’ll often overlook large disagreements, because the candidate’s overall value outweighs those differences.

Still, there are limits to what any electorate can reasonably overlook. Anyone who cares about self-government the way a free citizen should will have a few convictions about national life that feel non-negotiable—beliefs tied up with their sense of right and wrong. The strength of those convictions, and the importance attached to them, makes people unwilling to trade them away, or to hand them over—even to someone far more intelligent.

And here’s a key point: when such convictions exist widely in a people (or even in a meaningful portion of them), they deserve political weight simply because they exist—not only because they’re likely to be true. You can’t govern a country well against the population’s deepest moral instincts, even if those instincts include errors. A sound view of the relationship between rulers and ruled doesn’t require voters to accept representation by someone who plans to govern in direct opposition to their fundamental convictions.

So if voters make use of a representative’s abilities in other areas—during a period when the “life-and-blood” issues aren’t likely to come up—they’re still justified in removing him the moment those issues become real and contested, especially when the majority is uncertain and his vote and influence might matter. For example (names are only illustrative), voters might ignore the views attributed to Mr. Cobden and Mr. Bright on resisting foreign aggression during the Crimean War, because national feeling ran overwhelmingly the other way, making their dissent politically insignificant. But those same views could reasonably cost them their seats during the conflict with China, where the question was more evenly balanced and it genuinely mattered which side might win.

Where this leaves the question of pledges

Taken together, the argument points to a practical set of rules:

  • Don’t require explicit pledges—unless social conditions or bad institutions narrow the choice so much that voters are forced to select someone likely to be driven by loyalties or biases hostile to their interests.
  • Demand full transparency about the candidate’s political opinions and sentiments; voters have a right to know what will guide his actions.
  • Reject candidates who clash with you on fundamentals; on the few articles that form the base of your political faith, voters are often not just permitted but obligated to refuse representation by someone who opposes them.
  • Be more tolerant on non-fundamentals in proportion to the candidate’s proven mental superiority; the more capable the person, the more voters should endure disagreements on matters outside their core principles.
  • Seek the highest caliber representatives—people you can trust with real discretion to act on their own judgment.
  • Treat it as a civic duty to do everything possible to place such people in the legislature.
  • Prefer ability over superficial agreement: the benefits of a representative’s competence are certain, while the assumption that he is wrong and you are right on disputed points is, by nature, uncertain.

Up to now I’ve discussed this on the assumption that the electoral system—so far as law can shape it—follows the principles laid out earlier. Even under that best-case setup, the “delegate” theory of representation strikes me as false, and its real-world effects harmful, though the harm would be limited.

But if the constitutional safeguards I’ve argued for aren’t adopted—if minorities aren’t protected by representation, and if every vote is treated as identical regardless of any measure of education—then the case for leaving representatives unfettered becomes overwhelmingly stronger. Under universal suffrage, that freedom may be the only way for any opinions other than those of the majority to be heard in Parliament.

In a system that calls itself a democracy but functions as the exclusive rule of the working classes—everyone else effectively unrepresented and unheard—the only escape from crude class lawmaking and the worst forms of political ignorance would be whatever willingness the less educated might show to choose educated representatives and to defer to their judgment. Some willingness is plausible, and everything would depend on cultivating it as far as possible. But if a class holds political omnipotence, and then voluntarily agrees to any serious limits on its own certainty and will, it would display a wisdom that no group with absolute power has shown—or, under the corrupting influence of that power, is ever likely to show.

XIII — Of A Second Chamber

Of all the debates in representative government, few have generated as much heat—especially in continental Europe—as the question of two legislative chambers. People have treated it like a political litmus test: if you favor “limited democracy,” you’re expected to like two chambers; if you favor an unchecked popular majority, you’re expected to prefer one.

I don’t buy that framing. A second chamber isn’t a magic brake that makes a democracy safe. And if you’ve already gotten the big constitutional choices right, the difference between a one-house legislature and a two-house legislature is usually secondary.

That said, if you do have two chambers, they can be built in two basic ways:

  • Two similar chambers (chosen in much the same way, representing much the same mix of society).
  • Two different chambers (designed so one restrains the other).

Two similar chambers: mostly delay, sometimes useful

If the two houses are made up in roughly the same way, they’ll tend to respond to the same pressures. In practice, whatever can win a majority in one house can usually win a majority in the other.

The classic argument for two houses in this case is procedural: forcing a bill to pass two hurdles can slow things down. In the abstract, that could matter. If each house is the same size, a bare majority of one chamber, which amounts to only a little more than a quarter of the whole legislature, can block a measure that a clear majority of the combined membership would have passed had they all sat in a single chamber.

But that scenario is more of a textbook possibility than a common reality. Two similarly composed houses rarely split in wildly different ways. If one house rejects a measure, there’s usually already been a substantial minority against it in the other. So the “improvements” likely to be blocked are usually the ones that only barely command a majority in the legislature as a whole.

In those cases, the worst likely outcome isn’t permanent defeat. It’s one of two things:

  • a short delay before the measure passes, or
  • a fresh appeal to the voters—in effect, asking whether that thin parliamentary majority is backed by a real majority in the country.

You can argue about which is better or worse, but in this setup the cost of delay and the benefit of checking public support tend to balance out.

The real value of a second chamber: it forces power to listen

The most common defense of two chambers is that they prevent rash decisions by requiring “a second look.” I don’t find that persuasive. A well-designed representative assembly should already have procedures—multiple readings, committee stages, debate rules—that amount to far more than two moments of reflection.

The stronger argument is psychological and political: power behaves worse when it feels alone.

Any person or group holding authority is tempted to become arrogant and domineering when they know they don’t need anyone else’s agreement. It matters, especially in major affairs, that no set of people can make their sic volo—“because I say so”—stick, even temporarily, without having to win consent from another body.

A stable majority in a single assembly is especially prone to this. If the same coalition of people works together year after year and can reliably win every vote in their own chamber, it can start acting like a little despotism—simply because nothing forces it to imagine how its actions look from the outside, or whether they would be accepted by another legitimate authority.

That’s the basic logic behind the Roman choice to have two consuls: split power so neither person is corrupted by having it whole. In the same spirit, two chambers help ensure that even a democratic majority is never left alone with its own certainty for long.

Just as importantly, politics—especially the politics of free institutions—runs on conciliation. You need the habit of compromise: the willingness to yield something to opponents and to shape good policies so they don’t unnecessarily insult people who disagree. The “give and take” between two houses is a constant training ground for that habit. And the more democratic a legislature becomes, the more valuable that training is likely to be.

Two different chambers: a “check” only works if it has real support

The two houses don’t have to be twins. They can be designed to check each other. Typically, that means one house is democratic, and the other is built to restrain the democratic house.

But whether that actually works depends entirely on something outside the building: the social power behind the second chamber.

A legislature that doesn’t rest on some major force in the country can’t effectively resist one that does. An “aristocratic” chamber is only powerful in an aristocratic society. Britain’s House of Lords was once the dominant force, with the Commons acting as a check—but that was when the great landholding barons were the main power outside Parliament.

In a genuinely democratic society, I don’t believe the House of Lords would have much real ability to moderate democracy. When one side is much weaker, the way to make it effective is not to line the two sides up as open enemies and fight it out. That guarantees the weaker side gets crushed.

A weaker moderating force can only do useful work if it avoids looking like a rival army. It has to operate from within the larger body of opinion, not against it—joining the crowd, not standing apart from it. That means:

  • attracting and organizing the parts of the majority that can be persuaded on a given issue,
  • influencing outcomes by mixing into the mass and “leavening” it,
  • strengthening the better side of a debate by lending it additional weight,

rather than provoking a full-scale rally against itself by posing as an external antagonist.

In other words, in a democracy, the genuinely moderating power has to act in and through the democratic house.

A “center of resistance” is necessary—but a hereditary upper house isn’t the best form

I’ve argued already that every political system needs some center of resistance to whatever power dominates it. In a democracy, that means some nucleus capable of resisting the majority when the majority goes wrong. I see that as a basic rule of good government.

Now, if a people with democratic representation—because of their history—are more willing to tolerate that resisting force as a second chamber than in any other form, that’s a strong practical reason to use that form. Habits and expectations matter.

Still, I don’t think a second chamber is the best design in itself, or the most effective for the job. If you have two houses and one is taken to represent “the people” while the other represents only a class—or isn’t representative at all—then in a society where democracy is the main social power, the second house won’t truly be able to resist the first.

It might be allowed to exist out of tradition. But it won’t function as a serious check. And if it tries to act independently, it will be pressured to do so in the same general spirit as the popular house—meaning it will need to be broadly democratic too. In that case, it will mostly limit itself to:

  • fixing accidental oversights of the more popular chamber, or
  • competing with it in popular, crowd-pleasing measures.

From this point forward, then, the practical ability to restrain majority rule depends on how power is distributed within the most popular branch of government.

I’ve already described how I think that internal balance can best be achieved. And there’s a crucial point: even if the numerical majority is allowed to dominate Parliament, if you also give minorities the strictly democratic right they deserve—representation proportional to their numbers—you ensure something vital.

You ensure the constant presence, inside the house, of many of the country’s strongest minds, elected by the same popular legitimacy as everyone else. They don’t need to form a separate caste or carry any special, insulting privilege. Yet their influence will usually be far greater than their headcount, because talent and experience carry personal weight.

That creates the needed moral center of resistance in the most effective form.

So for that purpose, a second chamber isn’t necessary—and in some imaginable arrangements it could even make that goal harder to reach.

If you still want a second chamber, don’t build it like the House of Lords

Even so, if—because of the other advantages I’ve mentioned—you decide to have an upper house, then it should be built from elements that can speak with authority against the mistakes and weaknesses of the majority without looking like a class enemy.

It should be inclined to oppose the class interests of the majority when those interests push policy in a selfish or shortsighted direction. But it must do so without seeming to represent a privileged class permanently set against the many.

Those conditions are clearly not met by a body structured like Britain’s House of Lords. Once the public stops being intimidated by inherited rank and great fortunes, a House of Lords becomes politically weightless.

A better model: a “Chamber of Statesmen,” like the Roman Senate

If you wanted to build a genuinely wise conservative body—one meant to moderate and regulate democratic dominance—the best example is the Roman Senate, arguably the most consistently prudent and capable council ever to run public affairs.

Why did it work? Because the weaknesses of a democratic assembly are, in large part, the weaknesses of the public itself: not stupidity, but lack of specialized training and deep administrative knowledge.

The natural corrective is to pair the popular house with another body defined by those qualities.

So, if one chamber represents popular feeling, the other should represent proven merit, tested by actual public service and strengthened by experience. If one is the People’s Chamber, the other should be a Chamber of Statesmen: a council made up of living public figures who have held important offices.

A chamber like that wouldn’t merely restrain. It would also lead. The power to hold the public back would sit with the people most capable—and usually most willing—to guide them forward in the right direction. And when it corrected popular errors, it wouldn’t look like an enemy class blocking progress. It would be made up of the public’s own natural leaders.

No other design comes close to giving an upper house real authority as a moderating force. And because it would often be the body pushing reforms, it would be difficult to dismiss it as “obstructionist,” no matter how much harm it prevented by saying no.

A hypothetical English version: who would sit in such a Senate?

If England had an empty slot in its constitution for such a Senate (and this is purely hypothetical), it might be built from people like these—chosen by clear, public qualifications rather than vague notions of “status”:

  • Members (past or present) of the Legislative Commission described earlier, which I consider essential to a well-designed popular government.
  • Current or former Chief Justices, and heads of the highest law and equity courts.
  • Judges with at least five years’ service in a senior judicial post.
  • People who have held a Cabinet office for two years.
    • They should still be eligible for the House of Commons; and if elected there, their senatorial status should be suspended.
    • The two-year rule prevents cynical appointments made just to place someone in the Senate, and it matches the time that often qualifies someone for a pension.
  • Former Commanders-in-Chief, and commanders of armies or fleets who have received formal thanks from Parliament for major successes.
  • Senior diplomats who have held first-class diplomatic posts for ten years.
  • Governors-General (for example, of India or British America) and colonial governors with ten years of service.
  • High-level leaders of the permanent civil service, such as:
    • Under-Secretary to the Treasury,
    • permanent Under-Secretary of State,
    • and other comparably senior, high-responsibility roles held for ten years.

It would also be good, if possible, to include some representation from the “speculative” or scholarly class. But that has to be done carefully. Simply picking “eminent” scientists or writers is too vague and invites controversy—because someone must choose, and that choice becomes political.

A better route might be to tie seats to certain professorships at national institutions after a fixed number of years in office. That way the qualification is objective. Pure literary or scientific fame is too disputable, and when someone’s writings have nothing to do with politics, they don’t demonstrate the specific abilities needed for statesmanship. When they do concern politics, the system would let each new government flood the chamber with its own loyalists.

England’s reality: any second chamber would have to grow out of the Lords

England’s history makes one fact hard to avoid: unless the constitution were violently overturned (which is unlikely), any workable second chamber would have to be built on the foundation of the House of Lords.

It’s not practical to abolish it and replace it overnight with a Senate like the one I’ve described. But it might be possible to add the kinds of qualified public servants just listed to the existing body, by appointing them as life peers.

A further—and perhaps necessary—step could be to stop having hereditary peers sit automatically in person, and instead have them present through elected representatives. That idea already exists in the cases of Scottish and Irish peers, and sheer growth in the number of peers will likely make some such reform unavoidable eventually.

A small adjustment to Mr. Hare’s plan could also prevent the representative peers from speaking only for the majority faction among the peerage. For example:

  • allow one representative for every ten peers;
  • let any group of ten peers choose a representative, grouping themselves however they like;
  • require candidates to declare themselves and register on a list;
  • set a time and place for voting, either in person or by proxy (as parliamentary practice allows);
  • have each peer vote for one candidate;
  • declare elected anyone who receives ten votes.

If a candidate receives more than ten votes, then reduce the excess so that exactly ten peers count as that person’s constituency—either by letting the extra voters withdraw their votes, or by selecting the ten by lot—and free the remaining voters to cast their votes again for someone else. Repeat the process until, as far as possible, everyone voting is included in some group of ten.

If fewer than ten peers remain at the end:

  • if there are five, allow them to agree on a representative;
  • if fewer than five, their votes are lost, or they might be allowed to add their support to someone already elected.

Aside from that small leftover exception, the system would ensure that each representative peer stands for a definite group of ten peers who not only voted, but deliberately chose the person they most wanted to speak for them from among all available candidates.
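
For readers who want the mechanics spelled out, here is a minimal sketch of that grouping procedure in Python. It is only an illustration of the scheme just described, not part of the plan itself: the idea of letting each peer fall back to a declared next choice once their first candidate is elected, the use of lot-drawing to trim an oversized constituency down to ten, and all of the names (`preferences`, `elect_representatives`, and so on) are my own assumptions. The special handling of a final leftover group of fewer than ten peers is deliberately left out.

```python
import random
from collections import defaultdict

QUOTA = 10   # one representative for every ten peers, as proposed above


def elect_representatives(preferences):
    """Sketch of the grouping scheme described above.

    `preferences` maps each peer to an ordered list of declared candidates
    that peer is willing to support; falling back to the next name on the
    list stands in for the "vote again for someone else" step.
    Returns (elected, constituencies, leftover_peers).
    """
    unplaced = set(preferences)               # peers not yet in any group of ten
    next_choice = {peer: 0 for peer in preferences}
    elected, constituencies = [], {}

    while True:
        # Each unplaced peer votes for their highest-ranked remaining candidate.
        votes = defaultdict(list)
        for peer in sorted(unplaced):
            prefs = preferences[peer]
            while next_choice[peer] < len(prefs) and prefs[next_choice[peer]] in elected:
                next_choice[peer] += 1        # skip candidates already elected
            if next_choice[peer] < len(prefs):
                votes[prefs[next_choice[peer]]].append(peer)

        # Anyone reaching ten votes is elected; exactly ten supporters are kept
        # as that representative's constituency (chosen here by lot), and any
        # surplus voters are freed to vote again in the next round.
        progressed = False
        for candidate, supporters in votes.items():
            if candidate not in elected and len(supporters) >= QUOTA:
                kept = random.sample(supporters, QUOTA)
                elected.append(candidate)
                constituencies[candidate] = kept
                unplaced -= set(kept)
                progressed = True

        if not progressed:                    # no further group of ten can form
            break

    return elected, constituencies, sorted(unplaced)
```

Under those assumptions the loop keeps forming groups of exactly ten until no further group can be completed, which is the one property the plan above asks for: every representative peer ends up tied to a definite constituency of ten.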

And peers who aren’t chosen as representatives should be compensated by being made eligible for the House of Commons, a right currently denied in some cases, such as to Scottish peers, and to Irish peers within Ireland itself. As things stand, minority factions among the peerage get no effective representation in the Lords either.

Other designs exist: for example, an upper house chosen by the lower

The “Senate” model I’ve argued for isn’t just attractive in theory; it has the strongest historical precedent and the clearest record of success.

But it’s not the only workable option. Another possibility is to have the second chamber elected by the first, with one restriction: the lower house can’t appoint its own members to it.

A chamber formed this way—like the American Senate, only one step removed from direct popular choice—wouldn’t clash with democratic principles, and it would likely gain significant public influence. And because its members would owe their positions to the lower house’s selection, it would be less likely to provoke jealousy or stumble into open hostility with the more directly popular chamber.

With the right safeguards to ensure minority viewpoints are actually represented, this kind of second chamber would almost certainly be a strong one. It would draw in plenty of highly capable people who never make it into the spotlight—men who, by chance or because they lack flashy, crowd-pleasing traits, either didn’t want to campaign for votes or simply couldn’t win a popular election.

The best design for a second chamber is one that brings in as many perspectives as possible that don’t share the majority’s built-in interests and biases—while still remaining fully compatible with democratic values. In other words, it should be independent enough to question the majority, without looking like an insult to democracy.

But I need to say this plainly: you can’t treat any second chamber as the main brake on majority power. The real nature of a representative government is set by how the popular (elected) house is built. Next to that, almost every other structural question about the form of government is minor.

XIV — Of The Executive In A Representative Government

Talking about exactly how to slice the executive branch into departments would take us too far afield here. Different countries need different setups. And honestly, if people are willing to start from scratch—rather than treating today’s bureaucracy as sacred just because it grew that way over centuries—there’s little risk they’ll make any huge mistake in how they group responsibilities.

One simple rule gets you most of the way:

  • Organize offices around real subject-matter “wholes,” not around historical leftovers.
  • Don’t create several independent departments to manage different pieces of what is obviously one job.

If the goal is one thing—say, “maintain an effective army”—then the authority in charge should be one thing too. All the tools and resources aimed at a single purpose should sit under a single chain of command, with clear responsibility. When you split them among independent authorities, something weird happens: the means become the goal. Each department starts treating its own budget, procedures, and priorities as the point of the enterprise. Then nobody—except maybe the head of government, who often lacks the hands-on expertise—has the job of guarding the real end. The parts don’t get coordinated around one guiding idea. Every department pushes its own demands, ignores the rest, and the mission gets sacrificed to the machinery.

Clear responsibility matters just as much as good structure. As a general rule, every executive task—big or small—should clearly belong to some specific person. Everyone should be able to tell who did what, and whose failure left something undone. Responsibility becomes meaningless when nobody can identify the responsible party. And even when responsibility exists, splitting it usually weakens it. If you want responsibility at full strength, there has to be one person who receives the full credit for what goes right and the full blame for what goes wrong.

That said, there are two different ways “shared responsibility” can work—one merely dulls accountability, while the other wipes it out.

1) Shared signatures (accountability weakened, but still real).
Accountability gets softer when an action requires the approval of more than one official. Still, each official remains genuinely responsible: if something wrong happened, none can honestly say, “I didn’t do it.” They’re involved the way an accomplice is involved in an offense. If the act is actually illegal, the law can punish them all, and their legal penalties needn’t be lighter just because more than one person took part.

But public judgment doesn’t work like legal punishment. The “penalties” and “rewards” of reputation always shrink when you spread them across a group. And in the common case where there’s no clear legal wrongdoing—no bribe, no embezzlement, no outright corruption—just an error, a bad call, or something that can be framed that way, each participant gains a ready-made excuse: others were in it too. People will forgive themselves (and expect others to forgive them) for almost anything—up to straightforward financial dishonesty—if those whose job it was to object didn’t object, and especially if they formally signed off.

Still, in this first case, there is at least a traceable act: each person, as an individual, consented and joined.

2) Secret-majority boards (accountability destroyed).
Things become far worse when the act isn’t traceable to individuals at all—when it’s simply “the decision of a majority” of a board that deliberates behind closed doors, and nobody knows (or, except in rare crises, is ever likely to learn) which members voted yes and which voted no. In that setup, responsibility is basically a word without a reality.

Bentham put it neatly: “Boards are screens.” A board’s action becomes everybody’s and nobody’s. You can’t make anyone answer for it.

Even reputational consequences rarely land on the individuals. A board may be criticized as a collective, but individual members don’t feel the impact unless they personally tie their self-image to the institution’s image. That kind of group loyalty tends to develop only in long-lasting bodies where members feel bound to it “for better or worse.” Modern political careers, with their frequent reshuffles, don’t usually last long enough for that strong esprit de corps to form—except perhaps among the lower-level permanent staff. So boards are usually a poor tool for executive work, and they should be used only when there are special reasons why giving full discretion to a single minister would be even worse.

At the same time, experience teaches another truth: many counselors can make one decision wiser. People rarely judge well—even about their own affairs—when they rely only on their own knowledge, or on one trusted adviser; and it’s even harder when the subject is the public interest. But this doesn’t conflict with the need for single-point responsibility. You can keep effective power and final responsibility in one person while still providing advisers—so long as each adviser is responsible only for the advice they give.

This matters because the head of an executive department is usually, in practice, a politician, not a specialist. He may be talented and capable—and if that’s not generally true, the government will be bad—but the broad political knowledge needed to run a country rarely comes with deep, professional expertise in the particular department he’s assigned to lead. So you must give him professional advisers.

Where experience and technical competence can realistically be found in one well-chosen person—think of a legal officer—one principal adviser, plus clerks who handle the details and institutional memory, can be enough. But often that isn’t sufficient. It’s not enough for a minister to consult a single competent person and then, when he doesn’t understand the field, blindly follow that person’s guidance. Many issues require the minister to hear multiple perspectives regularly, and to shape his judgment through the debate among a group of advisers. This is especially true in military and naval affairs. For those departments—and likely several others—the minister should have a council made up of capable, experienced professionals.

To ensure you get the best people even as administrations change, these councils should be permanent—meaning they shouldn’t automatically resign when the ministry that appointed them leaves office (as certain British naval arrangements once expected). At the same time, a good general rule is that people in high posts gained by selection (not by routine promotion) should hold office only for a fixed term, unless reappointed—much like staff appointments in the British Army. This reduces patronage and “jobbing” because the post isn’t a lifetime prize, and it also creates a graceful way to replace weaker appointees and bring in exceptionally qualified younger people who might otherwise have no opening until someone dies or resigns.

These councils should be advisory in the strict sense that the final decision must rest wholly with the minister. But they also must not be treated—or treat themselves—as decorative. If advisers serve a strong-willed or domineering chief, the system must make two things unavoidable:

  • advisers can’t keep quiet without discredit; they must state an opinion, and
  • the chief can’t ignore them; he must listen and seriously consider their recommendations, even if he ultimately rejects them.

A good model for this relationship can be seen in the councils attached to the Governor-General and the provincial governors in India. Those councils include people with professional knowledge of Indian affairs—knowledge that governors typically don’t have, and that it wouldn’t even be wise to require them to have. Each council member is expected to give an opinion, which often is simple agreement. But if a member disagrees, he not only may but regularly does record his reasons—and the Governor-General or Governor records his reasons as well.

In ordinary cases, decisions follow the majority of the council, so the council plays a real role in governing. But the chief may override even a unanimous council, provided he records his reasons. The payoff is powerful: the chief remains personally and effectively responsible for every act of government, while the council members carry the responsibility of advisers—and it is always knowable what each advised and why, because the documentation can be produced (and in practice is produced) when Parliament or public opinion demands it. Meanwhile, because council members occupy a dignified, visible position and visibly participate in government, they have nearly as strong incentives to do the work carefully and form thoughtful judgments as they would if the entire burden rested on them.
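
To make the accountability structure concrete, here is a small sketch, again in Python and again with invented names, of the kind of record this arrangement depends on: every decision carries each adviser’s recorded opinion (with reasons when they dissent), and the chief can override the council only if his own reasons go on the record too. It illustrates the principle only; it is not a description of any actual piece of Indian administrative machinery.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Opinion:
    adviser: str
    concurs: bool
    reasons: Optional[str] = None        # a dissenting adviser records reasons


@dataclass
class Decision:
    question: str
    opinions: List[Opinion] = field(default_factory=list)
    chief_overrides: bool = False
    chief_reasons: Optional[str] = None  # mandatory whenever the chief overrides

    def outcome(self) -> str:
        """Ordinary case: the council's majority decides. The chief may
        overrule even a unanimous council, but only with reasons on record."""
        if self.chief_overrides:
            if not self.chief_reasons:
                raise ValueError("the chief cannot override without recording reasons")
            return "decided by the chief over the council (reasons on record)"
        yes = sum(1 for o in self.opinions if o.concurs)
        return ("adopted on the council's majority"
                if yes > len(self.opinions) / 2
                else "rejected on the council's majority")


# Example: a split council, with the chief recording reasons for an override.
d = Decision(
    question="hypothetical public-works proposal",
    opinions=[Opinion("A", True), Opinion("B", False, "costs not examined")],
    chief_overrides=True,
    chief_reasons="urgency outweighs the cost objection",
)
print(d.outcome())
```

In this sketch, the override path refuses to produce an outcome without recorded reasons, which is the structural point: nothing can be decided in a way that later escapes attribution.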

This way of running top-level administration is one of the best examples political history offers of matching tools to purpose. Political history, sadly, isn’t overflowing with brilliantly engineered institutions—but this one counts. It’s an improvement to the “art of politics” learned through the East India Company’s governing experience. And like many of the other practical arrangements that helped Britain govern India surprisingly well despite difficult conditions and imperfect materials, it may be headed for extinction—swept away in a general bonfire of established practices, now that these traditions are exposed to public ignorance and the overconfident vanity of politicians.

You can already hear the argument: people call for abolishing the councils as a needless, expensive drag on government. And alongside that, the pressure has long been intense—and is gaining official support—to abolish the professional civil service that trains the very people who make these councils valuable. Without that service, there’s no real guarantee the councils would contain anyone worth listening to.

Now for a principle that matters especially in a popular constitution: executive officials should not be chosen by popular election—not by the people directly, and not even by elected representatives.

Why? Because governing is skilled work. The qualifications required are specialized and professional, and they can’t be judged well except by people who either share some of those qualifications or have practical experience evaluating them. And the job of finding the best public servants is itself difficult: it’s not just picking the best among applicants. It’s actively searching for the best people, noticing talent as it appears, and keeping track of qualified candidates so they can be brought in when needed. That is labor-intensive work demanding fine judgment and a high level of integrity. It’s also, in practice, one of the public duties most badly performed—which is exactly why it’s so important to enforce as much personal responsibility as possible by making it a special obligation of the senior officials who lead each department.

So, with one major exception—positions filled through some form of public competition—all subordinate public officers should be selected on the direct responsibility of the minister they serve under.

At the top, the chain should be equally clear:

  • Most ministers (except the chief) will naturally be chosen by the chief.
  • The chief, although effectively designated by Parliament, should in a monarchy be formally appointed by the Crown.

And the authority to appoint should come with another linked power: the official who appoints should also be the only one empowered to remove any subordinate who can be removed at all. But, crucially, most public officers should not be removable at pleasure—except for personal misconduct. Otherwise, you can’t reasonably expect the people who actually do the detailed work of government—whose competence often matters more to the public than the minister’s—to devote themselves to mastering their profession, or to build the knowledge and skill the minister must often rely on completely. If they can be thrown out at any moment for no fault of their own—just so a minister can reward a friend or serve a political interest—then professionalism collapses.

This brings us to a hard question. If we reject popular election for executive officers generally, should we make an exception for the chief executive in a republic? Is the American system—electing the President every four years by the whole people—a good rule?

It’s not an easy question. In a country like the United States, where there’s little reason to fear a coup, there’s a real advantage in making the chief executive constitutionally independent of the legislature. If both branches are rooted in popular election and answerable to the public, they can act as meaningful checks on each other. That fits with a defining American constitutional instinct: to avoid concentrating large masses of power in the same hands.

But the price of that advantage is wildly disproportionate to its worth.

It seems far better for a republic to have its chief magistrate appointed openly by the representative body—much as a constitutional monarchy effectively ends up choosing its chief minister through Parliament. For one thing, the officeholder is almost guaranteed to be a more eminent figure. If the parliamentary majority appoints the executive, it will typically appoint its own leader—someone who is among the most prominent figures in political life, and often the most prominent.

By contrast, since the last surviving founder left the stage, the American President has usually been either:

  • a relatively obscure figure, or
  • someone known mainly for achievements outside politics.

That pattern isn’t a fluke; it follows naturally from nationwide popular elections. In a contest across the whole country, the most eminent party leaders are rarely the most “electable.” Prominent people accumulate personal enemies. They’ve taken stands. They’ve said things. Even at minimum they’ve expressed opinions that offend some region or significant group—and that can be fatal when votes are counted nationwide. A candidate with no history, no sharp edges, and few recorded views—someone known mainly for reciting the party’s creed—can more easily draw the entire party vote.

There’s another major harm: endless campaigning. When the highest office is awarded by popular election every few years, the time in between turns into a permanent campaign. The President, ministers, party leaders, and their followers become full-time election managers. The public is kept focused on personalities, and policy debates are increasingly shaped not by the merits of the issue but by how it’s expected to play in the next presidential election. If someone set out to design a system that makes party spirit the dominant motive in public affairs—one that pushes leaders not only to treat every issue as a party issue, but to invent issues just to build parties around them—it would be hard to beat this arrangement.

I won’t claim that, in every place and time, it’s ideal for the head of the executive to be as completely dependent on the votes of a representative assembly as England’s Prime Minister is, even though the English system works without serious inconvenience. If a country wanted to avoid that level of dependence, it could still appoint the chief executive by Parliament while giving him a fixed term, insulated from parliamentary votes during that period. That would preserve the American-style stability while removing popular election—and with it, many of the worst incentives.

There’s a second way to give the head of the executive as much independence from the legislature as a free government can safely allow.

In Britain, the prime minister isn’t helpless when Parliament turns hostile, because he can dissolve the House and go back to the voters. A hostile vote doesn’t automatically eject him; it forces a choice: resign, or dissolve Parliament and ask the public to decide. That power—calling a fresh election—is so useful that even in a system where the executive serves a fixed term, it’s still wise to give the executive some constitutional way to trigger a new Parliament.

Here’s why: without it, you invite the political equivalent of two cars locked bumper-to-bumper.

  • A president and an assembly fall into open conflict.
  • Neither has any legal way, for years, to replace the other.
  • The country sits in a stalemate.

To survive that kind of deadlock without someone attempting a coup d’état takes an uncommon mix of deep commitment to liberty and disciplined self-restraint—something very few nations have consistently shown. And even if nobody tries anything extreme, it’s naive to assume the two sides won’t simply jam the machinery of government. You’d be betting that, no matter how fierce the party fight becomes, everyone will keep an unbreakable habit of compromise. That spirit can exist, but it’s reckless to push it to the breaking point.

There’s another practical reason to let the executive call a new Parliament whenever necessary. Sometimes the country genuinely doesn’t know which of two competing parties has the stronger backing. In that moment, you want a constitutional way to test the question immediately and settle it. Until you do, almost nothing else gets done well. Politics becomes consumed by the unresolved contest, and the period turns into a kind of intermission for serious governing: neither side feels confident enough to take on reforms that might provoke new enemies among voters or interest groups who could swing the outcome.

So far, I’ve been discussing normal conditions—where the executive is powerful, but not plausibly poised to seize absolute power. There is, however, a darker scenario: the executive holds enormous centralized authority, and the public’s attachment to free institutions is too weak to resist a takeover. In a country with that kind of risk, you simply can’t accept a chief magistrate whom Parliament cannot remove instantly. If the danger is real, the legislature must be able, by a single vote, to reduce him to a private citizen. Even then, complete constitutional dependence is only a thin shield against the most brazen and corrupt betrayal of public trust.

Now consider government offices where popular election is not merely unhelpful, but actively dangerous. At the top of that list are judges.

There are few roles where:

  • ordinary voters are less able to assess the needed professional competence, and
  • impartiality and independence from political factions matter more.

Some reformers—Bentham among them—have argued for a compromise: don’t elect judges, but let the public remove them after enough experience with their performance. It’s true that making any powerful official practically impossible to remove is, in itself, a problem. You don’t want a system where the only way to get rid of an incompetent judge is to catch him doing something criminal. And you don’t want someone with that much influence to feel answerable only to private conscience and vague public gossip.

The real question is comparative: assuming we’ve done everything reasonably possible to ensure judges are appointed honestly, which is worse?

  • judges being answerable only to their own conscience and the informed judgment of the public, or
  • judges being answerable to the government, or to the voters.

Experience has long settled one part of this: making judges dependent on the executive is a bad idea. And the case is just as strong—maybe stronger—against making them dependent on election results.

A popular electorate has many admirable qualities, but the ones a judge most needs—calmness and neutrality—are not among them. Fortunately, in the kind of popular participation essential to political freedom, those qualities aren’t what you’re asking citizens to supply. Even justice, though everyone should value it, is rarely the motive force in ordinary elections. Voters aren’t distributing something either candidate has a right to. They aren’t rendering a careful verdict on general merit. They’re choosing the person they trust more, or the person who better matches their political convictions. That’s normal—and it would be bizarre, even wrong, to demand that voters treat political friends and strangers with exactly the same detachment a judge must bring to court.

And if someone says, “But public opinion keeps officials honest,” that doesn’t land the way they think it does. The opinion that actually restrains and guides a capable judge is not usually the mood of the country at large (except, sometimes, in overtly political trials). It’s the judgment of the one “public” able to evaluate legal work properly: the bar, the professional community in the judge’s own court.

None of this means the general public has no role in justice. It has a crucial one—but in a different form: jury service. Serving as a juror is one of the rare cases where it’s better for citizens to act directly rather than through representatives. It’s also nearly the only case where it’s safer to tolerate the occasional mistakes of an authority-holder than to make that authority-holder constantly fear punishment for mistakes. A judge who must please the electorate to stay employed is no longer free to judge.

If judges could be removed by popular vote, anyone hoping to replace a judge would campaign against him by turning his decisions into political ammunition. They’d drag case after case into the court of public opinion—an audience that:

  • usually hasn’t heard the evidence,
  • hasn’t heard it under courtroom safeguards, and
  • is easily swayed by passion, prejudice, and selective storytelling.

Where such feelings already exist, the challenger would exploit them; where they don’t, he would work to stir them up. In any case that captured broad attention, and with enough effort, that strategy would almost certainly succeed—unless the judge (or the judge’s allies) fought back as a political actor, making counter-appeals to public emotion. That’s exactly what you don’t want judges doing.

Under that regime, judges would quickly learn the wrong lesson: that every high-profile decision risks their job, and that the central question isn’t “What is just?” but “What will play well?” or “What can’t be easily twisted?” Once you force judges to think like that, justice stops being a courtroom practice and becomes a popularity contest.

This is why the modern American practice—adopted in some newer or revised state constitutions—of requiring judges to face periodic popular re-election looks, to me, like one of democracy’s most dangerous mistakes. If it weren’t for the practical good sense that tends to reassert itself in the United States, and which is said to be producing a reaction likely to undo the experiment before too long, you could reasonably treat it as an early, major step downward in the decay of democratic government.

Turn now to the large, indispensable group that forms the stable backbone of the public service: the professional civil servants who do not come and go with every political swing. They remain to help each incoming minister with experience, institutional memory, and day-to-day administration under the minister’s general direction. These are people who enter public service young, as others enter law or medicine, expecting to rise gradually through the ranks over a career.

It’s plainly unacceptable to make such people removable at will, or to let them be thrown out and lose the value of their entire past service, except for serious, proven misconduct. And “misconduct” here isn’t limited to crimes. It includes things like deliberate neglect of duty or behavior that shows they can’t be trusted with the responsibilities they’ve been given.

Because once people can’t be dismissed without fault, the public is committed to them: an unsuitable hire either stays in the job or has to be pensioned off at the public’s expense. That makes the first decision—the initial appointment—extraordinarily important. The question becomes: what appointment system best ensures that the right people are chosen at the start?

At the entry level, the big risk isn’t usually that the selectors lack technical expertise. New hires aren’t chosen because they already know the job; they’re chosen so they can learn it. The real danger is favoritism—private pull, party loyalty, and patronage.

At that age, the main way to distinguish candidates is their competence in the standard foundations of a good general education. That can be tested fairly easily—if, and it’s a big if, the people running the selection process are willing to do the work and are committed to impartiality. A minister is a poor choice for that role. He must rely on recommendations, and even if he personally wants to be fair, he will be bombarded by requests from people who can help or harm his electoral prospects, or whose political support matters to the government he serves.

That reality has pushed countries toward a better method: public examinations, administered by people outside party politics and of similar standing to university honors examiners. This would likely be a good idea under any system; under parliamentary government it’s practically the only method that offers any chance—not just of perfect honesty, but even of avoiding appointments that are openly, flagrantly corrupt.

But for examinations to work, they can’t be simple “pass/fail” screenings. They must be competitive, with appointments awarded to those who perform best.

A mere qualifying exam, over time, does little more than keep out the utterly unfit. The incentives tilt toward leniency. Examiners often find themselves weighing two pressures: the harshness of damaging an individual’s future versus the public duty of enforcing a standard that, in that particular case, may not feel like a world-historical matter. They know they’ll be bitterly criticized for being strict, while almost nobody will notice (or care) if the standard quietly slips. Unless an examiner is unusually tough-minded, human kindness tends to win. And once you relax the standard once, it becomes harder to refuse the same “favor” again. Each indulgence becomes a precedent. Over time, the bar sinks until it’s barely respectable.

You can see the pattern in universities: degree exams often drift into minimal requirements, while honors exams remain demanding. When there’s no reward for exceeding the minimum, the minimum becomes the de facto maximum. Most people aim only for that floor—and even then, some won’t reach it, however low you set it.

Competitive exams change the psychology entirely. When appointments go to the top performers, and winners are ranked by merit:

  • each candidate has a reason to push as far as possible, and
  • the incentive spreads backward through the entire education system.

Every school and teacher gains a concrete motive to prepare students well, because sending a pupil to the top of such a competition becomes a mark of distinction and a practical path to reputation and success. There may be no other single way for the state to raise the quality of education across a country as effectively.

In this country, competitive examinations for public employment are still relatively new and imperfectly implemented; the Indian service is nearly the only case where the idea has been applied in its full strength. Even so, the approach has already begun to shape middle-class education—despite the obstacles created by the shamefully low baseline of schooling, which the exams themselves have exposed.

In fact, the standard among young men who receive a minister’s nomination—the nomination that entitles them to sit the exam—has turned out to be so poor that the competition can produce results worse than a simple pass exam would. No one would dare set the pass threshold as low as what has, in practice, been enough to outrun similarly underprepared competitors. As a result, people say that year after year, attainment has actually declined: candidates exert themselves less, because earlier exams revealed that much less effort would still secure success.

This decline comes partly from reduced effort and partly from a different effect: where no nomination is required, many who know they’re unprepared don’t even show up, leaving only a small handful of competitors. The consequence is an odd picture: there are always a few strikingly capable candidates, but the lower end of the “successful” list reflects only modest knowledge. And the Commissioners report that most failures have not been due to weakness in advanced subjects, but to ignorance of the most basic elements—spelling and arithmetic.

The ongoing attacks on these examinations, made by some loud public voices, are often—painfully—criticisms not just of policy but of honesty and common sense. One tactic is to misrepresent what actually causes people to fail. Critics cherry-pick the most obscure, specialized questions ever asked and present them as if answering every one were the absolute condition of success.

But it has been explained endlessly that such questions are included for the opposite reason: not because everyone is expected to answer them, but so that the rare candidate who can answer them has a chance to demonstrate that extra knowledge and gain an advantage. These questions aren’t traps that disqualify; they’re opportunities that help distinguish the best.

Then comes the next complaint: “Will any of this knowledge be useful once the candidate is hired?” People disagree wildly about what counts as “useful.” There are, quite literally, people—including a recent foreign secretary—who consider correct English spelling an unnecessary skill for a diplomatic attaché or a government clerk. But the objectors do seem to agree on one point: that broad mental cultivation is not useful in such jobs, whatever else may be.

If, however, general education is useful—or if any education is useful—then you have to test it with the kinds of questions that reveal whether the candidate actually possesses it. To find out whether someone has been well educated, you must ask about what a well-educated person is likely to know, even if it isn’t directly tied to the daily tasks of the post.

And if critics object to classics and mathematics in a country where classics and mathematics are the only subjects routinely taught, what, exactly, would they propose asking instead?

Some people can’t decide what they’re mad about—so they stay mad about everything.

On the one hand, they object to testing a candidate on the traditional “grammar school” subjects. On the other hand, they also object to testing anything except those subjects. If the Commissioners try to broaden the exam—so that someone who didn’t follow the classic school path can still earn credit by showing real skill in something genuinely useful—they get attacked for that, too. For these critics, nothing counts unless the state throws the doors open to pure, untested ignorance.

They love to point out—usually with a flourish—that neither Clive nor Wellington would have passed the entrance test now required for an engineer cadetship. But that’s a cheap point. Clive and Wellington weren’t asked to meet that standard, so of course they didn’t. That tells us nothing about what they could have done if it had been required.

And even if the only claim is, “You can be a great general without this particular knowledge,” that’s true—and also trivial. You can be a great general without lots of things that are still extremely useful to generals. Alexander the Great never learned Vauban’s rules. Julius Caesar didn’t speak French. None of that is an argument for preferring ignorance.

Next comes the sneer about “bookworms”—a word some people seem to apply to anyone with even a trace of learning. We’re told these “bookworms” might be bad at physical exercise, or might not have the habits of gentlemen. This is a familiar move, especially among well-born dunces: imply that brains and good manners can’t coexist, and that athleticism belongs by right to the uneducated. But whatever they may believe, they don’t own either gentlemanly behavior or physical capability.

If a job truly requires those qualities, then test for them. Do it directly. Provide for them separately. Just don’t do it by excluding mental qualifications. Make them additional requirements, not substitutes for competence.

In fact, I’m reliably told that at the Military Academy at Woolwich, the cadets who enter by competition are not only superior in academics—they’re also at least as strong in the “gentlemanly” and physical respects people worry about. They even learn drill faster, which is exactly what you’d expect: an intelligent person picks up everything more quickly than a dull one. And their general conduct compares so favorably with the old nomination entrants that the authorities can’t wait for the last traces of the old system to vanish. If that’s accurate—and it should be easy to verify—then we should soon hear the end of this tired claim that ignorance beats knowledge for the military, and even more so for any other profession. We should also stop pretending that any admirable trait—however unrelated it may seem to formal schooling—is likely to be improved by doing without education.

Now, even if open competitive examinations decide who gets in, it usually won’t work to decide every later promotion the same way. In most offices, promotion has to rely on a sensible blend of seniority and selection:

  • Routine roles: People whose work is mainly repetitive and procedural should move up by seniority—up to the highest level that still consists of routine duties.
  • Positions of trust: Roles that require special judgment, discretion, or talent should be filled by selecting the best person from within, using the chief’s informed judgment.

If initial hiring happens through open competition, that later selection is likely to be honest. Why? Because the office head will usually be supervising people who, outside of work, would have been complete strangers to him. That distance matters. It reduces the pull of personal ties.

Sure, there may occasionally be someone in the office whom the chief—or his political allies—especially wants to favor. But even then, under a competitive entry system, that favoritism can usually operate only when it rides on top of demonstrated ability: the person still has to be at least as competent, as far as the entry exam can measure, as the other candidates.

And unless there’s an unusually strong temptation to “job” an appointment—hand it out as a piece of patronage—there’s also a powerful reason to promote the truly best candidate: the most capable subordinate makes the chief’s life easier, reduces his workload, improves the office’s performance, and helps build the public reputation for effective administration that naturally—and appropriately—reflects credit on the minister, even when the day-to-day excellence is largely the work of the staff beneath him.

XV — Of Local Representative Bodies

Only a small slice of a country’s public business can be done well—or even safely—by a national government at the center. Even in Britain, which is among the least centralized governments in Europe, Parliament (especially on the lawmaking side) still wastes far too much energy on local hassles. It uses the full weight of the state to “cut small knots” that should be untied by simpler, closer-to-home tools. Everyone who pays attention can see the damage: Parliament’s time gets swallowed by an enormous load of private and local business, and individual members get pulled away from the real work of a nation’s main council. Worse, this problem keeps growing.

A full debate about how far government action should go would take us beyond what this book is trying to do. I’ve argued elsewhere about the principles that should set those limits. But even if you strip away everything European governments do that they shouldn’t be doing at all, you still end up with a huge, varied list of necessary duties. And for that reason alone—plain old division of labor—you have to split responsibilities between national and local authorities.

It’s not enough to appoint separate officials for local tasks (every government does that much). The public’s ability to control those officials also needs its own local “instrument”—a body that speaks for the community. That local community should handle:

  • the initial appointment of local officials
  • ongoing oversight: watching, checking, and correcting them
  • deciding whether to fund their work (or to refuse funds when warranted)

Those decisions shouldn’t sit with the national Parliament or the national executive. They should sit with the people who live with the consequences: the people of the locality.

In parts of New England, those powers have traditionally been exercised directly by the assembled citizens, and it’s said to work better than you might expect. Those communities are educated and satisfied enough with this old-fashioned town-meeting style that they don’t want to trade it for the only representative system they’ve really seen—one that effectively shuts minorities out. Still, that kind of direct democracy only works under unusually favorable conditions. In most places, you need representative “sub-Parliaments” for local affairs.

England does have local representative bodies, but in a patchy, inconsistent, and poorly organized way. Some other countries—despite having less popular liberty overall—structure their local institutions more rationally. England has tended to have more freedom but worse organization; elsewhere, better organization but less freedom. The conclusion is straightforward: alongside national representation, a country needs municipal and provincial representation. That leaves two practical questions:

  1. How should local representative bodies be designed?
  2. What, exactly, should they be responsible for?

To answer those, you have to keep two goals in view at the same time:

  • Doing the local work well.
  • Using local government as a training ground for civic character and intelligence.

Earlier I argued—strongly—that one of the most important things free institutions do is educate citizens through participation. Local administrative institutions are the main place where that education actually happens. Outside of jury service, most people rarely take a hands-on role in public affairs at the national level. Between one parliamentary election and the next, ordinary political participation mostly looks like this: reading newspapers (and maybe writing to them), attending public meetings, and petitioning officials. These freedoms are hugely important—both as protection for liberty and as a kind of cultural and mental development—but they mostly train people to think rather than to act. And they encourage thinking without the responsibility that comes with making real decisions. For many people, that means little more than absorbing someone else’s opinions.

Local government is different. Beyond voting, many citizens get a real chance to be elected. And many—through appointment or rotation—serve in one or another of the many local executive roles. In those positions they have to do things, not just talk about them. They act for public interests, and they can’t outsource all the thinking to party leaders or newspaper editors. There’s another advantage: local posts usually aren’t chased by the highest social ranks, so the political education they provide reaches much further down the social ladder.

Because local governance is especially valuable as mental training—and because, generally speaking, fewer life-or-death national interests hinge on whether the local administration is brilliant—the educational aim can sometimes be given more weight locally than it ever could be in national legislation or “imperial” affairs.

How to build local representative bodies

Designing local representative bodies isn’t especially complicated, because the basic principles are the same as for national representation.

  • They should be elective.
  • Their foundation should be broadly democratic—even more so than the national body, because the risks are smaller and the educational benefits are, in some ways, even larger.

Since the central business of local bodies is raising and spending local taxes, the voting right should belong to everyone who pays those local rates, and it should exclude those who don’t. I’m assuming a system without significant indirect local taxes—no octroi duties (local tolls or entry taxes on goods), or only minor ones—and that anyone who ultimately bears those burdens is also paying a direct local assessment.

Minority representation should be protected in the same way it should be in the national Parliament. And the usual arguments for plural voting—giving some voters more than one vote under certain rules—apply here as well. In fact, local elections are a case where there’s less reason to object if plural voting is tied simply to money qualifications. Local bodies spend and manage money as a larger share of their work than national legislatures do. So there’s more justice—and arguably more practical sense—in giving extra weight to those who have more money directly at stake in local taxation and spending.

A case study: Guardians and justices

One of England’s newer local institutions, the Boards of Guardians, includes a mixed membership: the district’s justices of the peace sit ex officio (by virtue of their office) alongside elected members, with the ex officio group capped at one-third of the total. Given the particular makeup of English society, this seems beneficial. It helps ensure that a more educated class is present in the board—people who might not show up otherwise. And because their numbers are limited, they can’t dominate simply by headcount. At the same time, since they effectively represent a different social class and sometimes a different set of interests, they help check the class interests of the elected majority (often farmers or small shopkeepers).

That praise does not extend to the only provincial boards England has, the Quarter Sessions—bodies made up only of justices of the peace. These justices, beyond their judicial work, are also responsible for some of the most important parts of the country’s administrative business. And their way of coming into office is deeply anomalous. They aren’t elected, and they aren’t truly “nominated” in any meaningful sense. In practice, they hold these public powers much the way feudal lords did—almost as if they come with land ownership. Formally, appointments come from the Crown, but in reality they’re managed by one of their own (the Lord Lieutenant). That power is mainly used to keep out someone thought likely to disgrace the body, or occasionally someone on the “wrong” political side.

This institution is the most aristocratic one still standing in England—more aristocratic even than the House of Lords—because it controls public money and important public interests without sharing power with any elected assembly. The aristocratic classes cling to it with matching tenacity, but it plainly contradicts the principles that representative government rests on.

If England had a properly elected County Board, there wouldn’t even be the same justification for mixing in ex officio members as there is with the Boards of Guardians. County business is large enough to attract country gentlemen on its own. If they want influence there, they can win election to the county board just as they win election to Parliament as county members.

Drawing the boundaries: local interests should rule

Now consider how to draw the constituencies for local bodies. There’s a principle that’s often misused when people talk about national parliamentary representation—community of local interests. As a rigid rule for national representation, it doesn’t work well. But for local representation, it’s exactly the right rule, and really the only one that makes sense.

The point of local government is that people who share interests distinct from the rest of the country can manage those interests for themselves. You defeat the whole purpose if you draw local boundaries by any other logic than grouping those shared local interests together.

Every town—large or small—has local interests shared by all its inhabitants. So every town should have its own municipal council. And just as clearly, each town should have only one municipal council. Neighborhoods within the same town rarely have genuinely different local interests. They mostly need the same services and pay for the same kinds of expenses. With one likely exception—church matters, which may sensibly remain at the parish level—one set of arrangements can serve the whole town. It’s inefficient and inconvenient to run different systems for different parts of the same city when you’re dealing with basics like:

  • paving
  • street lighting
  • water supply
  • drainage
  • port rules
  • market regulation

London is the cautionary tale. Breaking it into six or seven semi-independent districts, each with its own local arrangements (and some of them not even internally unified), makes coordinated planning almost impossible. It blocks consistent administration, pushes the national government to take on work that should be local, and serves no real purpose except preserving the bizarre ornaments of the City of London’s corporation—a mashup of modern patronage and ancient vanity.

One elected body, not a patchwork of boards

A second major principle follows: each local area should have one elected body responsible for all local business, not separate elected bodies for separate slices of it.

True division of labor doesn’t mean chopping every task into tiny fragments. It means grouping together what the same people can supervise well, and separating what really requires different expertise. Yes, the execution of local administration should be divided into departments—just as national administration is—because different tasks demand different kinds of specialized knowledge and full-time attention from qualified officials.

But the reasons for dividing execution don’t apply to dividing control. The elected body’s job isn’t to do the work itself; it’s to make sure the work gets done well, and that nothing important is neglected. One supervising body can oversee all departments—and it will do it better with a broad, connected view than with a narrow, “microscopic” one. It’s as absurd in public life as it would be in private life to give every worker a personal supervisor. The Crown’s government has many departments and many ministers, but those ministers don’t each get their own Parliament to keep them honest.

Like the national Parliament, the local body should focus on the locality’s interests as a whole—an interconnected set of needs that must be coordinated and prioritized.

There’s also a deeper reason to keep local oversight unified. The biggest weakness of popular local institutions—and the main reason they so often disappoint—is the low quality of the people who typically run them. A mixed membership is actually part of the point: local government is supposed to be a training ground for political ability and general intelligence. But a school needs teachers as well as students. The value of the “lesson” depends heavily on bringing less experienced minds into regular contact with more capable ones—something everyday life rarely provides, and whose absence does a great deal to keep most people comfortably stuck at the same level of ignorance. Worse, without careful oversight and without some higher-quality characters inside the institution, local bodies can easily slide into something ugly: a cynical and foolish pursuit of members’ self-interest.

And here’s the practical reality: you won’t persuade people of high social or intellectual standing to join local administration in scattered fragments—one tiny post here, a slot on a Paving Board there, a seat on a Drainage Commission somewhere else. Even the entire business of a town is barely enough to attract people whose abilities and interests fit them for national affairs, and to justify the time and study needed for them to contribute meaningfully rather than serving as respectable cover for petty corruption.

A “Board of Works,” even if it governs the whole metropolis, will predictably end up composed of the same kind of people as parish vestries. That may be unavoidable, and it may not even be desirable to prevent them from being the majority. But for local bodies to do what they’re meant to do—both to govern honestly and intelligently and to raise the nation’s political understanding—they must include at least some of the best minds in the locality. That steady contact works both ways: the more capable members learn practical local and professional details from others, while the less capable absorb broader thinking, higher aims, and more enlightened public purpose.

When a place is too small

A mere village has no real claim to its own municipal representation. By “village,” I mean a place whose people aren’t meaningfully distinct in work or social life from the surrounding countryside, and whose needs can be handled by whatever arrangements cover the rural district around it. Small places like that rarely contain a public large enough to supply a decent municipal council. If they do have talent suited to public business, it often concentrates in a single person—who then becomes the local boss. It’s better for such places to be absorbed into a larger district.

The way you set up local representative bodies—especially in rural areas—should mostly follow geography, but not geography alone. People cooperate more easily when they feel they belong together, and those “natural” groupings come from two main sources:

  • History and identity, like long-standing county or provincial lines
  • Shared interests and work, like farming regions, shipping coasts, factory towns, or mining areas

Different public jobs also call for different-sized districts. A small area can make sense for one duty, while another duty needs a much wider footprint. For example:

  • Relief for poverty works best when organized around parish unions (or something close to them).
  • Highways, prisons, and policing usually require something larger—roughly the size of an average county—because the work is broader and more complex.

That means one appealing slogan—“an elected local body should control all the local business of its area”—can’t be applied mechanically. In bigger districts you have to balance it against other realities, especially this one: you want the most capable people available handling the hardest and most consequential tasks.

Here’s a concrete case. Suppose good administration of the Poor Laws really does require keeping the tax-and-relief area no larger than most of the existing parish unions. That principle points toward having a Board of Guardians for each union. And fine—each union can run what it can competently run. But it’s also true that a county board will usually attract (or be able to recruit) a more qualified class of members than the typical union board. So even if a union could manage certain higher-level local responsibilities, it may still be wiser to reserve some of that work for county boards simply because they’re more likely to do it well.


Executive Work: Don’t Elect the Specialists

A local council—call it a controlling council, a “sub-parliament,” whatever you like—doesn’t just vote and debate. Local government also has an executive side: people who actually run services day to day. And the same basic questions show up here as they do in national government.

The principles are familiar and, honestly, pretty straightforward:

  • Each executive officer should be one person, not a committee, and that person should be clearly responsible for the whole job.
  • Executive officers should be appointed, not elected.

Electing technical or administrative roles by popular vote is usually absurd. A town surveyor, a health officer, even a tax collector—these are not positions where “who’s most popular” is a serious hiring method. In practice, “popular choice” tends to mean one of two things:

  • A small circle of local influencers quietly steers the decision—while avoiding responsibility because they weren’t officially the appointing authority.
  • The vote becomes a sympathy contest: “He’s got twelve children,” or “She’s paid rates here for thirty years.”

And if election by the general public is often a farce, appointment by the local representative body is only slightly less troubling. Representative bodies have a constant temptation to turn into something like a mutual-benefit club, trading favors and creating little private “jobs” for their members and allies.

A better structure is this: appointments should be made under the personal responsibility of one accountable leader—whether you call that person a mayor, a chair of quarter sessions, or something else. In the local arena, this role is much like the Prime Minister’s role nationally. In a well-run system, hiring and supervising local officers should be the central task of that leader. And to keep that leader accountable, the leader should be chosen by the council from among its own members, with either:

  • Annual re-election, or
  • Removal by a vote of the council

What Local Bodies Should Do—and How Independent They Should Be

Now move from how local bodies are structured to the harder question: what they should be responsible for.

That question breaks into two parts:

  1. Which duties belong to local authorities?
  2. Within those duties, should local authorities have full control—or should the national government have some right to step in, and if so, how?

Start with the easy part. Anything that is purely local—affecting only one place—should be handled locally. Street paving, lighting, cleaning, and (in ordinary circumstances) basic household drainage mainly matter to the people who live there. The rest of the country only “cares” in the same broad way it cares about the well-being of all citizens.

But there’s a second category that gets mislabeled as “local” simply because local officials carry it out. Some duties are really national in purpose, even when they’re administered locally. Think of:

  • Prisons (often managed at the county level)
  • Local police
  • Local administration of justice (especially in incorporated towns, where officers may be locally elected and paid from local funds)

You can’t honestly say these are matters of merely local interest. If one district’s policing collapses, the whole country has reason to care: that area can become a safe haven for criminals or a breeding ground for social breakdown. If a prison is badly run, punishment can become either brutally excessive or so lax it’s punishment in name only—regardless of where the prisoners came from or committed their crimes.

Even more important: the principles of doing these jobs well are basically the same everywhere. There’s no good reason to manage police, prisons, or justice by entirely different standards in different parts of the country. And there’s a real danger in leaving such high-stakes systems to the lower average expertise you can typically expect at the local level. Mistakes here don’t stay small; they can become national scandals and national harms.

In fact, security of person and property and equal justice between individuals are the most basic needs of society—the core purposes of government itself. If those can be entrusted to something less than the best available oversight, then almost nothing (except war and diplomacy) truly requires a central government.

So whatever the best system is for securing those basics, it should be uniform across the country and placed under central supervision to ensure it is actually enforced.

At the same time, practical realities matter. In countries like ours, there often aren’t enough central-government officers scattered through every locality. So the state may need to rely on local officials to carry out duties that Parliament imposes. But experience pushes us toward a clear compromise: if local hands do the work, the central government should still appoint inspectors to make sure the job is done properly.

We already accept this logic in many places:

  • If prisons are locally managed, the central government appoints prison inspectors to ensure parliamentary rules are followed—and to recommend new rules when conditions show they’re needed.
  • Similarly, we have factory inspectors and school inspectors to monitor compliance with national law and with the conditions attached to state support.

Some “National” Interests Must Still Be Run Locally

If justice, policing, and prisons are so universal—and so governed by general principles—that they should be uniformly regulated and centrally overseen, there are other duties that don’t fit that mold.

Some areas—like poor relief, public health and sanitation, and related functions—matter to the whole country, yet can’t be managed effectively without local administration. If you want local administration to serve its purpose, these are the sorts of things that have to be handled in the locality.

That raises the next question: how much discretion should local authorities have in these areas, free from state supervision or control?

To answer that, compare the central and local authorities on two dimensions:

  • Capacity (how competent they are likely to be)
  • Protection against negligence or abuse (how well they’re watched and held accountable)

On capacity, the central government typically has the advantage. Local councils and local officers are, in most places, likely to be less informed and less skilled than Parliament and the national executive.

And they’re also accountable to a weaker kind of oversight. Local public opinion:

  • Covers a smaller audience
  • Is usually less informed
  • Often pays less sustained attention, because the stakes can look smaller and more private

There’s typically less pressure from national-level journalism and public debate, and local officials can ignore what pressure exists with fewer consequences than national officials can.

So far, you might think the case for central management is overwhelming. But look closer and the picture changes.

Local people, even if less expert in administrative theory, have a powerful compensating advantage: they care more directly about the outcome. Your landlord or your neighbors might be smarter than you, and may even have some interest in your success, but your own affairs are usually best protected when you are the one minding them.

And even when the central government governs through its own officers, those officers don’t operate from the capital. They operate in the locality. That means the only public with real day-to-day visibility into their behavior is the local public—and the only opinion that can routinely correct them, or alert the government to problems, is local opinion.

National opinion rarely focuses on the details of local administration, and even more rarely has the information needed to judge disputes fairly. By contrast, local opinion presses much harder on local administrators because those officials are usually:

  • Permanent residents, not temporary visitors
  • Still embedded in the community after leaving office
  • Dependent (in some way) on the continuing consent of local society

Add another limitation: the central authority often lacks detailed knowledge of local conditions and local actors, and its time is consumed by a thousand other concerns. It usually can’t gather enough high-quality local information to evaluate complaints or enforce accountability across a huge network of local agents.

So in the fine-grained, practical details of administration, local bodies often have the edge.

But when it comes to understanding the principles of administration—even of local administration—the central government, if properly constituted, ought to be vastly superior. Not only because it can recruit stronger individual talent, but because it is constantly exposed to wider thought and analysis: scholars, writers, reformers, critics, and the accumulated experience of many regions. A local authority learns from one patch of the country; the central authority can learn from the whole country, and—when useful—from other countries as well.


The Rule of Thumb: Centralize Knowledge, Localize Execution

The practical takeaway is simple.

  • The authority most fluent in principles should rule over principles.
  • The authority most competent in details should be left with the details.

In other words: the central government’s main job is to teach and set standards; the local authority’s main job is to apply them well on the ground.

You can localize power. But knowledge—if you want it to do the most good—needs a center. There must be a place where scattered experience is gathered, compared, corrected, and refined, so that partial and distorted understandings elsewhere can be completed and clarified.

For every branch of local administration that touches the general interest, there should be a corresponding central organ—a minister or a specialized official under a minister. Even if that person does nothing more than gather information and share it, that alone matters: what one district learns the hard way shouldn’t have to be relearned from scratch in the next.

But the central authority should do more than collect data. It should keep up a continuous two-way connection with localities:

  • Learn from local experience—and share broader experience back
  • Give advice when asked—and offer it when clearly needed
  • Require public records and transparency in proceedings
  • Enforce every general law Parliament has laid down for local management

And yes, Parliament should lay down such general laws. Localities can be allowed to make a mess of their own affairs, but they must not be allowed to harm others—or violate basic justice between person and person, which the state is obligated to protect.

If a local majority tries to oppress a minority, or one class tries to crush another, the state must step in.

Consider local taxation. It makes sense that only the local representative body should vote local rates. But even an elected local body can raise money in ways that are unfair—choosing a kind of tax, or a method of assessment, that dumps the burden on the poor, or the rich, or some targeted group. So while the local body can reasonably control the amount it raises, the legislature should define the permissible kinds of local taxes and the rules for assessment.

Or consider public charity and poor relief. The work ethic and moral habits of the working population depend heavily on relief being granted according to firm principles. Local officers should decide who qualifies under those principles. But Parliament is the right body to prescribe the principles themselves—and it would be neglecting a major national duty if it failed, in a matter this serious, to lay down binding rules and ensure they are actually followed.

How much direct “hands-on” power the central government should keep in reserve to enforce these laws is a technical design question. The law itself can specify penalties and the mechanism for enforcement. In extreme cases, it may be necessary to allow the central authority to dissolve a local council or dismiss local executive officers—but not to take over local institutions permanently or fill all vacancies itself.

Where Parliament has not spoken, the executive should not step in with binding authority over local bodies. But as:

  • an adviser and critic,
  • an enforcer of enacted law, and
  • a whistleblower to Parliament or to local voters when local conduct is blameworthy,

the executive can be invaluable.


“But Don’t We Need Local Self-Government for Civic Education?”

Some people will argue: even if the central government is better at the principles of administration, the bigger goal is educating citizens socially and politically. To achieve that, we should let people manage these matters with their own imperfect understanding.

The reply is simple: civic education matters a lot, but it isn’t the only thing government is for. Administration exists to accomplish real purposes in the world, not merely to provide practice exercises for citizens.

And more importantly, this objection misunderstands what popular institutions teach. A system that leaves ignorance to train itself—pairing ignorance with ignorance, forcing those who want knowledge to grope for it without guidance, and letting everyone else do without it—offers a pretty thin education.

What we need, above all, is a way to help ignorance recognize itself—and then actually benefit from what knowledgeable people can teach. That means training minds that only know “the usual way” to pause and ask: What principle is this based on? It means getting people comfortable acting on reasons, not habits, and learning to weigh different approaches so they can use their own judgment to tell what works best.

And here’s the key point: if you want a good school, you don’t solve the problem by getting rid of the teacher.

That old saying—“as the schoolmaster is, so will be the school”—isn’t just about kids in classrooms. It’s just as true about the way adults get educated by public life. Public business teaches, whether we notice it or not. The question is whether it teaches people to think, or teaches them to wait.

A government that tries to do everything is like a teacher who completes every assignment for the class. Sure, the students may love him—life is easy when someone else does your work. But they won’t learn much, because they never have to struggle, choose, or understand.

On the other hand, a government that refuses to do anything that anyone else could possibly do—and also refuses to show people how to do it—is like a school with no teacher at all. It’s just students “teaching” other students, even though none of them were ever taught in the first place.

XVI — Of Nationality, As Connected With Representative Government

A group of people counts as a nationality when they share strong bonds with each other that they don’t share with outsiders—bonds that make them cooperate more readily with one another, want to live under the same government, and want that government to be run by themselves (or at least by some portion of themselves), not by outsiders.

That feeling can grow out of many different roots:

  • Shared ancestry (what older writers called “race” or descent)
  • A common language
  • A common religion
  • Geography (mountains, rivers, islands, borders that shape daily life)
  • And, strongest of all, shared political history—a national story people remember together, with the same pride and the same wounds: victories and defeats, humiliations and triumphs, joys and regrets tied to the same past events

None of these factors is absolutely required. And none is automatically enough on its own.

Switzerland, for example, has a powerful national feeling even though its cantons differ in ancestry, language, and religion. Sicily, across much of history, has felt nationally distinct from Naples despite sharing religion, nearly the same language, and some overlapping history. In Belgium, the Flemish and Walloon regions—despite differences in language and background—have often felt more like one nation together than either has felt with its “closer” cultural neighbors (Flanders with Holland, Wallonia with France).

Still, as a general rule, national feeling weakens when key sources of unity are missing. Among German-speaking peoples, shared language, literature, and to some extent shared memories and origins have kept a real sense of nationality alive even without long-term political unity. But that feeling didn’t go so far as to make the separate German states eager to give up their independence.

Italy shows the same pattern in a different way. Italians didn’t have a perfectly unified language or literature, and they contained many mixed origins. Yet several forces combined to produce a real national feeling:

  • Their geography set them off clearly from surrounding countries.
  • They shared a common name, “Italian,” which counted for more than it might seem.
  • That shared label let people take pride—collectively—in the achievements of anyone counted under it: in art, war, politics, religious leadership, science, and literature.

Even if that national feeling was still incomplete, it was strong enough to drive the major events unfolding in the author’s own time—despite Italy’s long history of political fragmentation and the fact that Italians had only been united under one government when that government was a world-spanning empire.

Why Nationality Matters For Self-Government

When a real sense of nationality exists, there’s a strong initial argument—a presumption—for uniting the people who share it under one government, and for giving them a government of their own.

At bottom, this just restates a principle: the people who are governed should decide the form and membership of the political community that governs them. If any human community should be free to choose anything, surely it’s this: which larger body it wants to belong to.

But once a people is ready for free institutions, a deeper practical issue appears: representative government can barely function in a country made up of multiple nationalities.

Why? Representative government depends on something more than voting. It needs a shared public conversation—some level of common public opinion that can form, circulate, and hold leaders accountable. That requires fellow-feeling and shared channels of communication. Without those, the machinery of self-government grinds down.

In a multi-national state, especially where people speak and read different languages:

  • Public opinion doesn’t “unify,” because people aren’t hearing the same arguments or reacting to the same messages.
  • Different regions trust different leaders and follow different political cues.
  • The same newspapers, books, speeches, and pamphlets don’t reach everyone—or don’t land the same way.
  • One part of the country can be unaware of what ideas and political agitation are spreading in another.
  • The same government action can feel protective to one group and threatening to another.

And when different nationalities share a state, each group often fears the others more than it fears the central government. Their mutual hostility can become stronger than their suspicion of the ruler. If one group complains about a policy, another group may support that very policy simply because it hurts their rival. Even if all groups are being treated badly, none trusts the others enough to resist together. No single group is strong enough to resist on its own, and each has a plausible temptation: try to win the government’s favor against the others.

The Army Problem: When “The People” Aren’t One People

There’s an even more serious consequence. In the last resort, the strongest protection against outright government tyranny is often this: the army sympathizes with the people. If soldiers see the civilians as “their own,” they hesitate to enforce oppression. If they don’t, the restraint disappears.

Soldiers, by the nature of their job, draw the sharpest line between “us” and “them.” Most civilians see foreigners mainly as strangers. A soldier sees foreigners as the people he may be ordered to fight—possibly within days—in a life-or-death clash. For him, the distinction is the difference between friends and enemies.

So if a soldier is trained and socialized to feel that half (or three-quarters) of the state’s subjects are basically foreigners, he won’t hesitate to shoot them down when ordered—no more than he would hesitate against a declared enemy. A multi-national army often has only one kind of “patriotism”: loyalty to the flag and obedience to command. Historically, armies like this have repeatedly been used to crush freedom. Their cohesion comes from their officers and the government they serve, and their main concept of duty is simple compliance.

A government can exploit this. By stationing one nationality’s regiments in another nationality’s region—keeping Hungarian troops in Italy and Italian troops in Hungary, for example—it can rule both as if it were a foreign conqueror, maintaining control through force rather than consent.

“Isn’t National Feeling Barbaric?” Yes—and Also Real.

At this point, someone might object: isn’t it morally ugly to treat “fellow countrymen” as deserving one kind of care while treating everyone else as merely “human beings,” owed far less? Isn’t that a savage instinct that civilized people should fight?

The author agrees, strongly. Reducing the gap between how we treat insiders and outsiders is one of the worthiest goals humans can pursue.

But—and this is the key—given the current state of civilization, you can’t advance that goal by forcing different nationalities of roughly equal strength to live under one government. In practice, that tends to make things worse.

In very primitive societies, the dynamic can differ: a ruler may have an interest in calming ethnic hostility just to keep the peace and make the territory easier to govern. But once any of the groups has free institutions—or is striving for them—the incentives flip. A government that rules multiple nationalities often benefits from keeping those nationalities suspicious and hostile toward one another. That prevents them from uniting against the state and lets the government use one group as an instrument to dominate another.

The Austrian monarchy, the author argues, spent decades relying on this approach, with catastrophic results visible in events like the Vienna uprising and the conflict in Hungary. Still, he sees signs that society has advanced too far for such tactics to succeed indefinitely.

The General Rule—and Why It’s Hard To Apply

For these reasons, the author’s general conclusion is blunt: free institutions usually require that political borders align, roughly, with national borders.

But real life complicates that.

1) Geography Can Make Separation Impossible

Even in Europe, there are regions where nationalities are so thoroughly interwoven that separate governments aren’t workable.

Hungary is the author’s main example. Its population includes Magyars, Slovaks, Croats, Serbs, Romanians, and—in some areas—Germans, mixed so tightly that you can’t cleanly divide them by territory. In such cases, there may be no choice but to accept living together and to do it on the only tolerable basis: equal rights and equal laws.

He suggests that a shared experience of subjection—especially since the loss of Hungarian independence in 1849—was beginning to push these groups toward accepting such an equal union.

Other geographic puzzles make the same point. A German community in East Prussia was separated from Germany by Polish territory. Too weak to stand alone, it faced a choice: either live under a non-German state, or else see the intervening Polish land absorbed into a German one. Likewise, German-dominated Baltic provinces such as Courland, Estonia, and Livonia were geographically placed to belong to a Slavic state.

Even within “German” areas, there were substantial Slavic populations: Bohemia mostly Slavic, and Silesia and other districts partly so.

And even France—often seen as Europe’s most unified country—wasn’t truly homogeneous. Beyond small remnants of foreign groups at its edges, France contained two broad historical and linguistic layers: one largely Gallo-Roman, the other significantly shaped by Frankish, Burgundian, and other Germanic elements.

2) Absorption Can Be Real—and Sometimes Beneficial

After accounting for geography, a more social and moral question appears. History shows that one nationality can gradually merge into another. And when the absorbed group is genuinely less developed (in the sense of education, institutions, and culture), the author argues that the change can be a net benefit for them.

He uses examples like Bretons or Basques in French Navarre, and also Welsh people or Scottish Highlanders within Britain. In his view, it is better for members of these smaller groups to enter the mainstream of a more advanced civilization—sharing in its ideas, its opportunities, its protections, and the status that comes with its power—than to remain isolated in a narrow cultural world, cut off from the broader movement of human life.

More generally, anything that truly encourages the mixing of nationalities and the blending of their distinctive qualities can benefit humanity. Not because it should erase all differences—there will still be enough examples of each type—but because it can soften extremes and reduce sharp divisions. A blended people, like a cross-bred line in animals (and even more so, since moral and cultural forces matter as well as biology), can inherit strengths from multiple sources, with the mixture helping to keep those strengths from hardening into neighboring vices.

But this kind of blending doesn’t happen automatically. It depends on conditions, and the outcomes vary.

Power Balance: When Blending Works—and When It Turns Toxic

Nationalities brought under one government might be close to equal in numbers and power, or they might be very unequal. If they’re unequal, the smaller group might be more civilized or less civilized than the larger.

If the smaller group is more advanced, two main paths are possible:

  • It might use its superiority to gain dominance.
  • Or it might be overwhelmed by sheer numbers and force, reduced to subjection.

That last case—where the more advanced group is crushed by a less advanced one—is, in the author’s view, a pure loss for humanity, something civilized people should unite to prevent. He calls Greece’s absorption by Macedonia one of the world’s great misfortunes and suggests that the absorption of major European nations by Russia would be a similar disaster.

If the smaller but more advanced nationality conquers the larger—he cites the Macedonians (with Greeks) in Asia and the English in India—civilization may gain. But there’s a catch: conquerors and conquered generally cannot share the same free institutions. If the conquerors are absorbed into the less advanced population, that would be a bad outcome (civilizationally, in his framework). So the conquered must be ruled as subjects. Whether this is ultimately good or bad depends on two questions:

  • Have the conquered reached the point where being denied free government is itself a serious injury?
  • Do the conquerors use their advantage in a way that helps the conquered move toward higher development?

He flags this as a topic he will treat later.

Finally, consider the case where the conquering nationality is both more numerous and more advanced, especially when the subdued nationality is small and has little realistic chance of restoring independence. In that situation, if the state governs with tolerable justice and doesn’t make the dominant nationality hated by granting it exclusive privileges, the smaller group may gradually reconcile itself to its new position and blend into the larger nation.

He points to Brittany and Alsace in France as examples of regions with little desire to separate.

Ireland, he says, hadn’t fully reached that point with England—not only because Ireland was large enough to sustain a national identity of its own, but mainly because England had governed Ireland so brutally for so long that resentment became inevitable. He claims that this disgraceful pattern had largely ended for nearly a generation: Irish people were no less free than Anglo-Saxons and had no less share in the benefits of citizenship. The chief remaining grievance, in his account, was the established church—an issue that many in Great Britain also faced. In his view, what still kept the two peoples apart was mostly memory and religious difference, even though he believed the two were unusually well suited to complement each other. He argues that equal treatment and equal respect were rapidly reducing hostility, making it harder for Irish people to ignore the advantages that come from being fellow citizens, not foreigners, to a neighboring nation that was wealthier, freer, and among the most civilized and powerful on earth.

The Hardest Case: Roughly Equal Nationalities

The greatest practical obstacles to blending arise when the nationalities tied together are close to equal in numbers and in the other ingredients of power. In that situation, each group:

  • Trusts its ability to hold its own in a struggle
  • Refuses to be absorbed into the others
  • Clings stubbornly to its distinct identity
  • Revives old customs, and even dying languages, to make the boundary sharper
  • Treats any official authority exercised by “the other side” as oppression
  • Reads any concession to one group as a loss for all the rest

If such divided nations live under a despotic government that is foreign to all of them—or even one that comes from one group but cares more about its own power than national loyalty—and if it distributes no special privileges to any group and recruits officials from all sides without preference, then over generations something interesting can happen. Shared circumstances can gradually create shared feelings, especially when the groups are geographically mixed across the same territory. People begin, slowly, to treat one another as fellow countrymen.

But if the push for free government arrives before that long fusion has taken place, the author argues that the window closes. The chance to blend the nationalities peacefully has been missed.

When the clashing national groups remain at odds and also live in different territories, the case for separation gets very strong—especially when there’s no practical reason they should be ruled together in the first place. Think of an Italian province forced under a French or German government: the arrangement isn’t just awkward, it’s fundamentally misfit. And if you care about either liberty or peaceful coexistence, breaking the political link isn’t merely sensible—it’s often necessary.

To be fair, separation doesn’t always mean total isolation. Sometimes two provinces that go their own ways can still benefit from staying connected through a federal arrangement. But in practice, that’s rare. When people are willing to give up full independence and join a federation, it usually isn’t with the same partner they just fought to leave. More often, each province has other nearby states it would rather link up with—neighbors it actually feels aligned with, sharing stronger sympathies and, often, more overlapping interests.

XVII — Of Federal Representative Governments

Sometimes groups of people simply aren’t willing—or able—to live under one shared domestic government. They may have different histories, identities, and local needs, and forcing them into a single internal system can create more conflict than it solves. But those same groups can still benefit enormously from acting together on the world stage. A federal union can help them do two big things at once:

  • Prevent wars among themselves by tying their fates together in foreign affairs
  • Protect themselves more effectively against larger, more aggressive powers

That said, a federation isn’t a magic wand. It’s a political machine, and it only runs well under certain conditions.

When a Federation Can Actually Hold Together

1) There has to be real mutual goodwill.
A federation effectively says, “When the outside world pushes, we push back together.” That means its members are committed to fighting on the same side. If the populations feel so little sympathy for each other—or feel such different loyalties toward neighboring countries—that they’d typically want to take opposite sides in wars, the union won’t last. And even while it exists, it won’t be reliably obeyed.

The kinds of shared bonds that tend to support a federation include:

  • Race or ethnic kinship
  • Language
  • Religion
  • And most importantly, political institutions—because similar institutions create a shared sense of political interest and identity

This matters most in a very specific situation: when several small free states are surrounded by military or feudal monarchies that distrust freedom and would gladly crush it, even next door. In that environment, those free states often have only one realistic path to preserving liberty: federate.

Switzerland is a long-running demonstration. For centuries, the common need to defend independence and liberty kept the federal bond strong—strong enough to survive not only religious differences at a time when religion tore Europe apart, but also a constitutional design that was, frankly, weak.

The United States shows the other side of the story. Almost everything favored unity—except one colossal difference: slavery. That single institutional divide poisoned mutual sympathy so deeply that the fate of a union valuable to both sides came to depend on a bitter civil war.

2) The member states can’t be strong enough to feel they don’t need each other.
A federation is always a trade: each state gives up some freedom of action in exchange for shared strength. If individual states are powerful enough to defend themselves alone, they’ll tend to resent that sacrifice. And whenever federal policy—on matters within federal authority—clashes with what a state would prefer to do by itself, the resulting friction can spiral into a sectional rupture. Without a strong sense that the union is necessary, disagreement can slide into dissolution.

3) The states can’t be wildly unequal in power.
Perfect equality is impossible. Any federation will have a ladder of influence: some members will be larger, richer, and more developed than others. Think of the difference between New York and Rhode Island, or between Bern and smaller Swiss cantons like Zug or Glarus.

But there’s a crucial limit: there must not be a single state so dominant that it can rival several others combined.

  • If there’s one overwhelmingly strong state, it will demand to control shared decisions.
  • If there are two superpowers, they’ll dominate whenever they agree—and when they disagree, the federation becomes a battleground for supremacy.

This problem alone nearly emptied the German Bund of substance. Even setting aside its miserable internal design, it failed at the basic tasks a confederation should accomplish. It never gave Germany a unified customs system, not even a uniform coinage. Instead, it mainly served as a legal mechanism for Austria and Prussia to send troops into smaller territories to help local rulers keep their people obedient under despotism.

And externally, the Bund created a different kind of dependence: without Austria, Germany would effectively have been a dependency of Prussia; without Prussia, a dependency of Austria. Meanwhile, each small prince was pushed into choosing sides—backing one great power, backing the other, or scheming with foreign governments against both.

Two Models of Federal Union (Only One Really Works)

There are two basic ways to build a federation:

Model A: The federal authority speaks only to governments.
Under this design, federal decisions bind member governments, not individual citizens. This was the approach of the German Confederation and Switzerland before 1847. The United States also tried it briefly right after independence.

Model B: The federal authority can legislate directly over citizens.
Here, within its assigned sphere, the federal government makes laws that bind individual people, enforces those laws through its own officers, and adjudicates through its own courts. This is the model of the current U.S. Constitution and the Swiss Confederacy after its reforms.

Only the second model produces an effective federal government. The first is essentially an alliance with nicer stationery—and alliances are fragile by nature.

Here’s why. If federal decisions bind only the governments of New York, Virginia, or Pennsylvania, then enforcement depends on those state governments issuing orders to their own officials, under their own courts, with their own political incentives. In practice, any federal mandate disliked by a local majority would simply be ignored.

And what’s the enforcement mechanism when a member government refuses? Ultimately, war. The federation would need a standing readiness to force obedience with federal troops. But that creates a new problem: other states might sympathize with the resisting state, especially if they share its view on the disputed issue. They might withhold troops—or even fight alongside the disobedient state.

So a “government-only” federation can easily become a cause of internal war rather than a barrier against it. Switzerland avoided that outcome for a long time mainly because the federal authority understood its own weakness and rarely tried to use real power. In the United States, the early experiment collapsed within a few years—fortunately while the leaders who had won independence were still alive to steer the country through the transition. Their defense of the new constitution, collected in The Federalist, remains one of the most instructive works ever written on federal government.

Germany’s loose model didn’t even succeed at the modest goal of maintaining a reliable alliance. In European wars, it repeatedly failed to stop individual member states from aligning with foreign powers against the rest.

And there’s a deeper obstacle: among monarchies, this weaker kind of federation may be the only one that’s politically feasible. A hereditary king, who doesn’t hold power by delegation and can’t be removed or held accountable, isn’t likely to accept a system where another authority governs his subjects directly—or where he must give up his separate army.

If monarchies want an effective confederation, the only realistic route may be a shared monarch. England and Scotland looked like a federation of that sort for about a century, between the union of the crowns and the later union of the parliaments. It worked—not because federal institutions existed (they didn’t), but because royal power in both systems was close enough to absolute that foreign policy could be driven by a single will.

Why a Supreme Court Becomes the Keystone

In the stronger model of federation—where each citizen owes obedience to two governments, their state and the federal authority—clarity is non-negotiable. You need:

  • sharply defined constitutional boundaries between state and federal power, and
  • an independent decision-maker when those boundaries are disputed

That umpire cannot be either government, or any official answerable to either. It must be an independent judiciary: a Supreme Court, supported by subordinate courts across the states, with final authority on these questions.

This means something striking: both the state governments and the federal government, and their officials, must be suable in these courts for exceeding their powers or failing their federal duties. And they generally must use the courts to enforce federal rights.

The result—realized in the United States—is that the highest federal court becomes, in a real sense, supreme over governments. It can declare that a law or official act exceeds constitutional authority and therefore has no legal validity.

Before anyone had seen this system work, reasonable people worried about it. Would judges actually dare to use such power? If they did, would they use it wisely? And would governments accept the decision peacefully?

The debates over the American Constitution show how intense these fears were. But experience has largely quieted them. Over more than two generations, nothing happened that confirmed the worst anxieties, even though there have been fierce disputes—sometimes hardening into party identities—over the boundary between state and federal authority.

Why does this arrangement work as well as it has? A key reason, as Tocqueville observed, is how courts operate. A court doesn’t announce “the law” in the abstract. It waits until a concrete case arrives—one person against another, or one authority against another—where the disputed point must be decided to do justice.

That has several stabilizing effects:

  • The court’s pronouncements usually come after public debate has matured, not at the first spark of controversy.
  • The issue is argued fully by capable lawyers on both sides.
  • The court decides only what the case requires, not everything that could be said in theory.
  • The decision isn’t offered as a political weapon; it’s compelled by the court’s duty to judge impartially.

Still, even these safeguards wouldn’t be enough to produce broad, respectful submission if people didn’t believe the judges were intellectually strong and above private or sectional bias. That trust has generally been deserved—but it’s also fragile. Nothing matters more for Americans than guarding this institution from any decline in quality or independence.

That confidence suffered a serious blow when the Supreme Court declared that slavery was a common right, and therefore lawful in the territories even against the will of the majority living there. That decision likely did as much as anything to push sectional conflict to the breaking point and toward civil war. The main pillar of the American constitutional system can’t take many shocks of that magnitude.

Courts as a Substitute for War Between States

Once states are bound into a federation, they can’t treat disputes with each other like disputes between nations. War and diplomacy are effectively off the table. But disputes still happen—between state and state, and between a citizen of one state and the government of another. A judicial remedy must replace the old “international” remedies.

So the federal Supreme Court naturally becomes the arbiter of these conflicts. In doing so, it applies the law that would otherwise be handled by international relations. It becomes a major example of something civilized societies increasingly need: a genuine international tribunal—a real court for conflicts that would otherwise be settled by force.

What Powers a Federal Government Tends to Need

A federal government obviously handles peace and war and foreign relations. But it also tends to take on whatever additional powers the member states judge necessary to fully enjoy the benefits of union.

For example:

  • Free internal trade: It’s a major advantage when commerce between states is open—no border duties, no internal customs houses.
    • But that freedom collapses if each state can set its own import duties on foreign goods. If one state lets a foreign product in, it effectively enters all states.
    • So in the United States, tariffs and trade regulations belong exclusively to the federal government.
  • A single money and measurement system: One coinage and one system of weights and measures are far easier for commerce and daily life. That uniformity requires federal control.
  • A unified postal system: Mail becomes slower and more expensive if a letter must pass through multiple separate post offices, each under different top-level authority. It’s more efficient when post offices are federal.

But these decisions aren’t purely technical; they often tap into local identity and political fear. Different communities can feel very differently about how far federal power should reach.

One American state, led by a thinker of theoretical power rare among Americans since the authors of The Federalist, claimed that each state should have a veto over federal tariff laws. In a posthumous work—later printed and widely circulated by South Carolina’s legislature—he defended this on a general principle: limiting majority tyranny by giving minorities a real share of power.

Another major controversy in early nineteenth-century America was whether the federal government should—or constitutionally could—build roads and canals at national expense.

In truth, only foreign relations demand complete federal authority by necessity. On most other issues, it comes down to political choice: How close a union do people want? How much local freedom are they willing to give up to gain the advantages of acting as one nation?

How to Structure the Federal Government Itself

Internally, a federal government looks like representative government in general: it needs a legislature and an executive, each designed according to the usual principles of representative rule.

The more specific question is how to tailor those principles to a union of states. On that front, the American Constitution made an especially smart choice: a two-house Congress.

  • One house represents citizens by population, with each state’s representation proportional to its number of inhabitants.
  • The other represents states as states, with every state—large or small—sending the same number of members.

This setup blocks any one big state from bullying the rest. It also protects the powers the state governments keep for themselves, because it makes it hard—so far as representation can make anything hard—for Congress to pass a law unless it’s backed by two majorities at once: a majority of citizens and a majority of states.

I’ve also mentioned another side benefit: this arrangement tends to raise the bar for one of the two houses. In the United States, senators are chosen (in the system being described here) by the state legislatures—smaller, more selective bodies than the mass electorate. For reasons already discussed, those legislatures are more likely than the general electorate to pick people with proven stature. And they have every reason to do it: a state’s influence in national debates depends heavily on the skill, credibility, and personal weight of the people speaking for it. The result has been predictable. The U.S. Senate, chosen this way, has almost always included most of the Union’s established, high-reputation political figures. Meanwhile, careful observers have often judged the House of Representatives notable for the opposite: not that it contains no talent, but that it more often lacks the standout personal distinction the Senate tends to attract.


When the conditions are right for a federal union that’s both effective and built to last, creating more of them is a net gain for the world. It’s the political version of the basic advantage of cooperation: the weak, by joining together, can deal with the strong on something closer to equal terms.

Federal unions also reduce the number of tiny states that can’t reliably defend themselves. And that matters because powerless mini-states invite trouble. They tempt stronger neighbors into aggression—sometimes outright military conquest, sometimes the subtler kind driven by the prestige and intimidation of superior power.

Inside the union, a federation typically:

  • ends wars and diplomatic feuds between member states
  • usually removes trade barriers among them

And outward-facing effects matter too. A union’s combined military strength is mostly useful for defense, not conquest. A federal government generally lacks the tightly concentrated authority needed to run an ambitious offensive war efficiently. It can fight well when the cause is clearly self-defense—when it can count on willing cooperation from citizens across all member states. But conquest offers little emotional payoff for national pride, because even a “successful” federal war doesn’t usually produce subjects, or even straightforward new fellow citizens. At best it produces new—and possibly troublesome—independent members of the confederation.

The United States’ war with Mexico was an exception, and even that exception proves the point. It was carried on largely by volunteers, fueled by a strong culture of migration and land-seeking—people pushing toward what they saw as available territory. And if it had any “public” motive beyond that, it wasn’t national glory so much as a sectional project: expanding slavery. In general, there’s little evidence—whether you look at Americans as a nation or as individuals—that simple territorial acquisition for the country’s sake has been a dominant passion. Even the recurring desire for Cuba followed the same pattern: it was sectional, not national, and the northern states opposed to slavery never really supported it.


A related problem comes up when a country is set on unification—as Italy was during its uprising at the time being discussed here. Should it become a single, fully unified state, or should it unite in a looser federal form?

Sometimes geography decides the question for you. There’s a real limit to how much territory can be governed well—or even supervised conveniently—from one center. Yes, some huge countries have been run that way. But the typical result is miserable administration, especially in distant provinces. Often those far-off regions would govern themselves better if they were separate—unless the population is so politically undeveloped that almost any local self-management would be worse.

That size-based obstacle doesn’t apply to Italy. Italy isn’t larger than several single states that have been governed efficiently, both in the past and in more recent times. So the real question becomes practical and political: Do the different parts of the country need fundamentally different kinds of government? Are their conditions and preferences so unlike each other that it’s unlikely one legislature and one central administration will satisfy them all?

If not—if the differences aren’t that deep—then full union is better.

And it’s worth noticing what full union can tolerate. Two major parts of a country can live under very different legal systems and administrative traditions without that preventing one national legislature. England and Scotland are the obvious example. A single parliament can make different laws for different regions, tailoring new legislation to long-standing differences.

Still, that arrangement depends on political culture. In a country where lawmakers are gripped by a “one-size-fits-all” obsession—a mania for uniformity, which the author suggests is more common on the Continent—people may not trust that distinct legal systems will be left alone. Britain’s unusually broad tolerance for odd, irregular arrangements—so long as the people affected don’t feel wronged—made it a particularly favorable place to try this difficult experiment. In many countries, if keeping genuinely different legal systems is a priority, it might be necessary to keep separate regional legislatures as their guardians. And that can still fit alongside a national parliament and a king—or a national parliament without a king—that controls foreign policy and other external relations for the whole.


Even when provinces don’t need permanently different legal foundations—different basic institutions built on different principles—it’s still possible to keep plenty of local variety inside a unified state. The key is simple: give local authorities a wide enough field to operate in.

You can have one central government and still maintain:

  • local governors
  • provincial assemblies focused on local needs

For example, different provinces might reasonably prefer different tax systems. If you can’t trust the national legislature to fine-tune the general tax rules for each province—by listening to members from that province—then a constitution could solve the problem structurally:

  • Make as many expenses as possible local, paid by local taxes set by provincial assemblies.
  • Treat unavoidable national expenses—like an army and navy—as general costs.
  • Each year, apportion those general costs across provinces using a broad estimate of their resources.
  • Let each province raise its assigned share in whatever way best fits local preferences, then pay the total as a lump sum into the national treasury.

Something close to this existed even under the old French monarchy in the pays d’états. Each of those regions, once it agreed (or was required) to provide a fixed amount, could assess and collect it through its own officials. That autonomy helped them escape the harsh, grinding rule of the royal administrators. And contemporaries often cited this privilege as one reason these regions—some of them, at least—became among the most prosperous parts of France.


A single central government can coexist with many degrees of centralization—not just administratively, but even legislatively. A people might want, and be capable of, a tighter union than a mere federation, while still needing real diversity in governmental details because of local history and local character. The good news is that if everyone genuinely wants the experiment to work, it’s usually not hard to preserve those differences—and even to protect them with constitutional rules that block forced “assimilation,” unless the people who would be changed consent to it voluntarily.

XVIII — Of The Government Of Dependencies By A Free State

Free states, like any other kind of state, can end up ruling dependencies—territories they control but don’t treat as fully equal partners. Those dependencies might be gained through conquest or built through colonization. Britain, in the modern era, is the standout example. So the real question isn’t whether a free country can have dependencies. It’s how it can govern them without betraying its own principles.

Small Strategic Outposts

We don’t need to spend much time on tiny holdings that exist mainly as strategic assets—places like Gibraltar, Aden, or Heligoland. When a territory is held primarily as a naval or military position, security comes first. That usually means the local population can’t be fully included in running the place, because the governing state won’t risk losing control of the fort, harbor, or chokepoint that justified taking it in the first place.

Still, even in these cases, the people who live there should get every freedom that doesn’t conflict with the military purpose, including:

  • Local civil liberties as far as security allows
  • Control of municipal affairs (town administration, local services, day-to-day governance)

And because they’re effectively being asked to accept local sacrifice for the empire’s convenience, they should be compensated in the only way a free state can credibly compensate: by granting them full equality with native citizens everywhere else in the empire.

Two Kinds of Larger Dependencies

Once you move beyond small outposts, dependencies come in two broad types—both defined by one key fact: the “paramount” country can exercise sovereign power over them, but they are not equally represented (if represented at all) in the ruling country’s legislature.

  1. Dependencies ready for representative government
    These are societies at roughly the same level of political development as the ruling country—capable of self-rule through representative institutions. Mill’s examples are the British possessions of his day in America and Australia.

  2. Dependencies not yet ready
    These are territories whose social and political conditions are, in his view, still far from that stage—India being the main example he gives.

What follows is mostly about the first category: places that can govern themselves and should be allowed to.

The Break from Old-Style Colonial Control

For a long time, England did grant representative institutions to colonies populated by people of English language and culture (and sometimes others), but she didn’t truly let those institutions govern. The mother country insisted on acting as the final judge—even over purely internal matters—and insisted on doing so by England’s ideas of what was best, not the colony’s.

That behavior grew out of a poisonous colonial theory that used to be common across Europe: the belief that colonies mattered because they could be turned into captive markets. The plan was simple in concept and absurd in practice:

  • The colony is forced to buy the mother country’s goods.
  • In exchange, the mother country pretends to be generous by giving the colony a protected monopoly in the mother country’s market.

The result wasn’t real shared prosperity. It was two societies made to overpay each other through restrictions and middlemen, with much of the value “lost on the road” in waste, inefficiency, and distortion. England eventually abandoned this approach.

But even after England stopped trying to profit directly from controlling colonial policy, she didn’t immediately stop interfering in colonial internal affairs. The meddling continued—often not for Britain’s benefit, but to serve some local faction within the colony. That domineering habit helped provoke the Canadian rebellion, and only then did Britain finally have the good sense to stop acting like the colony’s parent and start treating it like a political adult.

Mill’s analogy is blunt: England behaved like a badly raised older brother who bullies the younger ones out of habit—until one younger sibling finally pushes back hard enough to make him quit. The important point is that England learned after the first warning.

A new era, Mill argues, began with Lord Durham’s Report, which he treats as a landmark of courageous and liberal political judgment (and he credits Durham’s collaborators for the practical intelligence behind it as well).

A Working Model: Full Internal Self-Government

Britain’s settled policy, by Mill’s account, is now clear and—crucially—actually followed: colonies of European origin are entitled to the fullest possible internal self-government, essentially on the same footing as the parent country.

In practice, that means:

  • Colonies can reshape their own representative constitutions, modifying the already popular constitutions Britain originally granted them.
  • Each colony has its own legislature and executive, built on highly democratic principles.
  • The Crown and Parliament technically keep a veto, but it is used rarely, and only for issues that genuinely concern the empire as a whole—not purely local matters.

How broad this “local matters” category has become is illustrated by a striking concession: Britain has handed over the unappropriated public lands behind the American and Australian colonies to the colonies’ own control. Britain could have kept those lands and administered them for the benefit of future emigrants from across the empire without obvious injustice. Instead, it yielded them.

The result is that each colony has, for practical purposes, as much control over its internal life as if it were part of the loosest federation—and in some ways even more autonomy than a state would have under the U.S. Constitution. Mill highlights one especially telling example: a British colony can even impose taxes on imports from Britain as it sees fit.

So yes, there is still a bond with Great Britain. But it is the lightest kind of federal connection—and not an equal one.

The Inequality That Remains

The arrangement is “federal” in form, but unequal in reality. Britain keeps what you might call the core powers of the central government—though, in practice, it tries to use them as little as possible.

This inequality has a very specific cost for the colonies:

  • They have no real voice in foreign policy.
  • They are still bound by Britain’s choices.
  • In particular, they can be pulled into war because Britain decides to go to war.

In other words, colonies may govern themselves internally, but they can still be forced to fight externally.

Why “Equal Representation” Doesn’t Actually Work

Some people—Mill respects their moral instinct—argue that justice binds communities the way it binds individuals. If you wouldn’t be justified in doing something to another person for your own benefit, you aren’t justified in doing it to another country, either. From that perspective, even this limited constitutional subordination looks wrong. So reformers have proposed ways to remove the inequality.

Two main proposals show up:

  • Give the colonies representation in the British Parliament.
  • Split powers so that both Britain’s Parliament and colonial Parliaments handle only internal affairs, while a new, shared representative body governs foreign and imperial policy—making the whole arrangement an equal federation, with colonies no longer “dependencies.”

Mill praises the ethical impulse behind these ideas—but then rejects them as practically unworkable. Countries separated by half the globe, he argues, simply don’t have the conditions needed for one government, or even a functional federation:

  • Even if they share interests, they don’t—and can’t—share enough habit of consultation.
  • They aren’t part of the same public sphere. They don’t debate in the same arena.
  • Each side knows only dimly what the other thinks, wants, or fears.
  • They don’t understand each other’s goals well, and they can’t build real confidence in each other’s political conduct.

Mill then forces the reader to picture what “fair representation” would mean. Suppose one-third of the imperial legislature were British American, another third South African and Australian. Would an English voter be comfortable having their future hinge on such an assembly? And would anyone believe that Canadian and Australian representatives—even on “imperial” questions—could truly know or care enough about the opinions and needs of English, Irish, and Scottish voters?

Even for a federation, Mill says, these are exactly the missing ingredients. In his earlier terms, the essential conditions for a stable federation aren’t present.

He also adds a blunt strategic point: England can defend itself without the colonies, and would be stronger—and more dignified—standing alone than reduced to one member of a sprawling American-African-Australasian confederation. Beyond the trade England could likely keep even after separation, the empire gives England little besides prestige, while imposing heavy costs:

  • the expense of maintaining far-flung territories
  • and the forced dispersal of naval and military power, which in wartime means England must maintain forces double or triple what would be needed for defense at home alone

Why Keep Any Connection at All?

If Britain can do without colonies—and if justice would require Britain to accept separation whenever a colony, after honest experience with the best available union, deliberately wants to leave—why maintain any tie?

Mill’s answer is that there are still strong reasons to keep this light bond, as long as both sides find it acceptable.

He sees it as a small but real step toward:

  • universal peace
  • and broad, friendly cooperation among nations

A shared connection makes war among these otherwise independent communities impossible. It also reduces the chance that any one of them is absorbed by a foreign state—especially a nearby rival power that might be more despotic, more aggressive, or simply less restrained than Britain. In that sense, the connection acts like a geopolitical “do not swallow” label.

The bond also helps keep markets open among these communities and discourages the kind of mutual exclusion through hostile tariffs that, in Mill’s view, most major societies still haven’t outgrown (England being the notable exception).

Finally, he argues that for British possessions there is a distinctive advantage in the present moment: the connection increases the moral influence and global weight of the power that, more than any other existing great nation, understands liberty—and that, whatever its past errors, has developed more conscience and moral principle in dealing with foreigners than its rivals even consider possible or desirable.

If the bond must remain an unequal federation, then the practical problem becomes: how do you keep that inequality from feeling oppressive or humiliating to the smaller communities?

Fair Terms for an Unequal Federation

Mill insists that only one inferiority is built into the arrangement: the mother country decides questions of peace and war for everyone.

The colonies get something in return: Britain is obligated to defend them against aggression. But unless a colony is so weak that it cannot survive without protection, that “reciprocal obligation” still doesn’t fully compensate for having no voice in decisions that can cost lives and money.

So, at minimum, fairness requires limits on what Britain can demand from the colonies:

  • In wars not undertaken for the colony’s own sake, colonists should not be forced (unless they voluntarily ask) to contribute to the general cost of war—beyond what’s needed for the local defense of their own ports, coasts, and borders.
  • Because Britain alone chooses policies that might expose colonies to attack, Britain should shoulder a significant part of the cost of colonial defense even in peacetime—especially the portion that depends on maintaining a standing army.

The Stronger Solution: Equal Access to Imperial Public Service

But Mill argues there’s a deeper, more effective way to compensate a smaller community for giving up full independent standing in world affairs. In fact, he suggests it is usually the only way to offer a true equivalent.

The indispensable—and, at the same time, sufficient—expedient is this:

Open every part of government service, in every department and across the empire, on perfectly equal terms to colonial inhabitants.

If a colony can’t be an equal partner in foreign policy, then its people must at least be treated as full members of the wider political community when it comes to careers, honors, and leadership.

Mill points to a telling case: the Channel Islands. By race, religion, and geography, he says they belong less to England than to France. Yet nobody hears “disloyalty” from them. Why? Because while they control their internal affairs and taxation—much like Canada or New South Wales—every office and dignity controlled by the Crown is open to them. People from Guernsey or Jersey can become generals, admirals, peers of the United Kingdom, and there is nothing in principle preventing them from becoming prime minister.

A similar approach began, he notes, when Sir William Molesworth appointed Mr. Hincks, a prominent Canadian politician, to govern in the West Indies.

Mill anticipates the sneer: “So what? Only a few people will ever actually benefit.” He calls that a shallow understanding of politics. The handful who can benefit are exactly the people with the most moral and social influence over everyone else. And communities aren’t as numb to collective insult as cynics assume. If you deny a single person an opportunity solely because of a shared trait—“he’s from the colony”—everyone who shares that trait feels the degradation.

If you refuse to let a community’s leading figures step onto the world stage as its representatives in the broader councils of humanity, then you owe them something substantial in return: an equal chance to reach that same prominence within a larger, more powerful nation.

Dependencies Not Ready for Self-Government

So far, Mill has been talking about dependencies whose people are politically advanced enough for representative government. But there are others—societies that have not reached that stage and that, if held at all, must be governed by the dominant country (or by officials appointed by it).

He argues that this can be legitimate—no more illegitimate than any other form of rule—if it is the form best suited, given the people’s current state of civilization, to help them move toward a higher level of improvement.

In some social conditions, he says, what the society needs most is training in the habits and disciplines that make higher civilization possible. In those cases, a vigorous despotism can be the best available training regime.

In other conditions, despotism itself doesn’t teach anything useful—because the population has already learned its lessons far too well—but the society still lacks any internal engine of improvement. Then the only realistic hope of progress may be the accidental appearance of a good despot.

Under native despotism, a good despot is rare and short-lived. But if the rulers come from a more advanced society, Mill believes that society ought to be able to supply competent, improving rule continuously. The dominant country should be able to do, for its subjects, what a long succession of capable absolute monarchs could do:

  • protected by overwhelming force from the instability typical of “barbarous” despotisms
  • and informed by the accumulated experience of a more developed nation, so that it can anticipate what slower trial-and-error would eventually teach

That, he says, is the ideal of a free people ruling a barbarous or semi-barbarous people.

He doesn’t pretend the ideal will be fully realized. But he insists on a hard moral line: unless the rulers make some serious approach to that standard, they are guilty of abandoning one of the highest moral trusts a nation can hold. And if they don’t even aim at it, then they are simply selfish usurpers—morally no better than the empires and conquerors who have, century after century, toyed with the fate of whole populations for ambition and greed.

The Big Problem of the Age

Mill ends by zooming out. He thinks it is already common—and rapidly becoming universal—for “backward” populations to be either directly ruled by more advanced nations or brought under their complete political ascendancy. Given that trend, few problems matter more than designing that rule so it becomes a benefit rather than a curse to the subject people—giving them the best government realistically possible now, while also building the conditions for durable, long-term improvement.

Governing a dependency—a territory that can’t realistically govern itself yet—sounds like it should be straightforward. In practice, it’s one of the least-understood problems in political design. You could even argue we barely understand it at all.

To a casual observer, it looks easy. Take India, for example: if it can’t govern itself, then just appoint a minister to govern it, and make that minister answerable to the British Parliament like any other British official. Clean, simple, and completely wrong.

Here’s the key distinction: ruling a country responsible to its own people is one thing; ruling a country responsible to someone else’s people is another. The first is what we usually mean by free government—because the governed can reward, punish, replace, and correct their rulers. The second is not “free government exported.” It’s still despotism. The only real question is which despotism you get.

And it’s not obvious that “the despotism of twenty million” is automatically better than “the despotism of a few,” or even of one. But one thing is obvious: rule by people who don’t hear, see, or know their subjects can go wrong in a great many ways. We don’t normally assume a local official governs better just because they’re acting in the name of an absent master—especially one distracted by a thousand other priorities. Yes, the master can threaten penalties. But it’s doubtful those penalties will reliably land on the right people, for the right reasons, at the right time.


Why Foreign Rule Is So Hard

A country is always governed imperfectly when foreigners run it—even if the gap in customs and ideas isn’t extreme. Foreign rulers don’t feel what the people feel. They can’t look at an issue through the same emotional and cultural lens, so they routinely misjudge how policies will land.

What a reasonably capable local knows almost by instinct, outsiders have to learn slowly—through study, experience, and trial-and-error—and even then they never fully catch up. The laws, customs, and everyday social relationships that a native absorbs from childhood are, to the foreign administrator, unfamiliar terrain.

Worse, foreigners often have to rely on locals for detailed information, and it’s hard to know who is trustworthy. The population fears them, suspects them, and often dislikes them. People approach them mainly when there’s something to gain, and that makes rulers especially vulnerable to a common mistake: treating the most obedient and flattering as the most reliable.

So both sides develop predictable risks:

  • Foreign rulers are tempted to despise the people they govern.
  • The governed are tempted to assume the foreigners’ actions can’t possibly be meant for their good.

Any foreign administration that honestly tries to govern well faces heavy friction like this. Overcoming it takes real work and unusually high competence—especially at the top, and with a strong average among the ranks below. The best system, then, is the one that forces that work to happen, develops real administrative skill, and puts the most capable people in the positions of greatest trust.

That is exactly what you don’t get when the rulers are “responsible” to an authority that has done none of the work, gained none of the understanding, and often doesn’t even realize how difficult the job is.


“Government By Another People” Isn’t Real Government

Self-government has a real meaning. “One people governing another” doesn’t—at least not in the sense of governing for the good of the governed.

A dominant nation can keep another country as a kind of preserve: a place to extract money, a market to exploit, even a “human cattle farm” worked for the profit of the ruling population. But if the purpose of government is the welfare of those being governed, it’s impossible for a whole foreign people to attend to that directly.

At best, they can appoint some of their ablest individuals to manage the dependency on their behalf. But then the home public’s opinion is a poor guide and an even worse judge. The home population doesn’t live with the consequences, doesn’t understand the tradeoffs, and can’t realistically evaluate whether the job is being done well.

Think about what English government would look like if English citizens knew and cared as little about English affairs as they know and care about the affairs of Indians. Even that comparison doesn’t go far enough. A public that’s indifferent to politics might simply stop meddling and let administrators work. But in India you have something different: a politically active nation at home that mostly acquiesces—yet periodically intervenes, and usually in the wrong place.

The true causes of Indian prosperity or misery are too distant for most English people to see. They often don’t even know what questions to ask, much less how to judge the answers. Essential interests can be managed well without their noticing, and serious mismanagement can continue without drawing their attention.

When public opinion in the ruling country does push hard, it tends to do so for two main reasons—and both are dangerous.


Harmful Pressure #1: Forcing “Our Ideas” Onto Them

One temptation is to shove English ideas down the throats of the local population—especially in religion. A revealing example is the popular demand in England that the Bible be taught in government schools, with the Bible lessons left “optional” for pupils or their parents.

From a European point of view, this can look fair. It seems compatible with religious liberty: nobody is forced; it’s simply available.

But to many Asian societies, that’s not how governments work. No Asiatic people expects a government to mobilize paid officials and official machinery unless it is pursuing a goal. And if it is pursuing a goal, they don’t believe a competent government pursues it halfway.

So if government schools and government teachers teach Christianity, then however many promises are made—however sincerely—parents will assume pressure is being applied behind the scenes to turn children into Christians, or at least into outsiders to Hinduism. And if they are ever persuaded otherwise, it will likely be only because the schools fail so completely at producing converts that the suspicion becomes unsustainable.

The grim irony is that if the teaching did succeed even a little, it could endanger not only the usefulness—and perhaps the existence—of government education, but even the stability of the government itself.

A parallel is easy to see. An English Protestant won’t readily place his children in a Roman Catholic seminary just because the school claims it won’t proselytize. Irish Catholics won’t send their children to schools where they might be made Protestants. Yet England expects Hindus—who believe the privileges of Hinduism can be lost through what seems, to outsiders, a merely physical act—to risk their children being made Christians.


Harmful Pressure #2: Favoring Settlers Over Natives

The second pressure point is even more persistent: intervention on behalf of English settlers.

Settlers have friends at home. They have access to newspapers, Parliament, and public opinion. They speak the same language and share the same assumptions as the home audience. So when an Englishman complains, he gets a sympathetic hearing almost automatically—even if nobody intends to be unfair.

And experience says something blunt: when one country rules another, the members of the ruling nation who move there to make their fortunes are often the people who most need firm restraint. They become one of the government’s hardest problems to manage.

They arrive with the prestige of a conquering nation and the swagger that comes from power. But they don’t carry the corresponding sense of responsibility. In a society like India, even energetic public authority struggles to protect the weak against the strong—and among the strong, European settlers are the strongest.

Unless their personal character strongly counteracts it, many come to view the locals as dirt under their feet. They find it outrageous that native rights should block even their smallest demands. And when the government tries to protect inhabitants against abuses that settlers consider useful to their business, settlers denounce that protection as an injury—often sincerely.

This spirit is so natural to the position that, even when the authorities discourage it, it still keeps breaking out. And even when the government itself rejects the attitude, it can’t fully prevent it from shaping the behavior of its own young and inexperienced civil and military officers—over whom it has far more control than it has over independent residents.

The pattern repeats across empires and conquests: English in India; French in Algiers; Americans in territories taken from Mexico; Europeans in China, and already in Japan; and, famously, Spaniards in South America. In these cases, the formal government is often better than the adventurers and does what it can to protect natives against them. Even the Spanish government, though often ineffectively, made sincere efforts to restrain abuses.

But notice the political point: if the Spanish government had been directly accountable to Spanish public opinion, would it have tried as hard? The public at home would likely have sided with their Christian friends and relatives rather than with “pagans.” Settlers, not natives, have the ear of the home public. Settlers’ stories pass as truth because settlers have both the means and the motive to repeat them relentlessly to an inattentive audience.

Meanwhile, a distinctive habit in English political culture makes this worse: English people are unusually critical of their country’s dealings with foreigners—but they often reserve that criticism for the official authorities. In disputes between a government and an individual, many English minds start with the presumption that the government is at fault.

So when resident English settlers use the full machinery of English political pressure against safeguards meant to protect natives, executives—however genuine their faint desire to do better—usually find it safer, and certainly easier, to surrender the disputed safeguard than to defend it.


Even “Philanthropic” Pressure Often Misses the Target

There’s an additional twist. When English public opinion is stirred in the name of justice and humanity on behalf of the subject population, it can still misfire.

Why? Because within the subject society there are also oppressors and oppressed—powerful individuals or classes, and masses crushed beneath them. And once again, it is the powerful, not the helpless, who can reach the English public.

A deposed tyrant or sensualist—stripped of abusive power but maintained in wealth and splendor—can easily gather sentimental defenders. A group of privileged landlords can demand that the state surrender its reserved right to collect rent, or frame protections for the poor as “wrongs” against property. These voices find advocates in Parliament and the press. The silent millions do not.


The Principle: Accountability Works Only When It’s To The Governed

All of this illustrates a principle that would be “obvious” if people actually noticed it:

  • Responsibility to the governed is the strongest security for good government.
  • Responsibility to someone else is not only weaker—it can just as easily produce harm as help.

The fact that British rulers in India were answerable to the British nation did have one real benefit: if an act of government was challenged, it could be dragged into the light and debated publicly. That publicity matters even if the general public doesn’t understand the technical issue, so long as some people do. Moral accountability isn’t really accountability to “the collective people,” but to each individual who forms a judgment.

And crucially, judgments can be weighed, not just counted: one informed person’s approval or disapproval can matter more than the noise of thousands who know nothing about the subject. The value, then, is that rulers may be forced to defend themselves, and a few members of the “jury” will form an opinion worth having—even if the rest will likely form opinions worse than none at all.

This, such as it is, is the main benefit India gets from British parliamentary and public control.


What A Free Country Can Do—and What It Can’t

If England wants to meet its duty toward a place like India, it shouldn’t try to rule India directly. It should aim to provide India with good rulers.

And it can scarcely choose a worse method than putting India under an English Cabinet minister:

  • A minister focused on English, not Indian, politics.
  • A minister who rarely stays in office long enough to develop a serious, informed interest in a complicated subject.
  • A minister pushed around by a manufactured “public opinion” in Parliament—two or three fluent speakers—treated as though it were the nation’s settled judgment.
  • A minister lacking the training and the institutional position that would encourage, or even enable, independent and honest judgment.

A free country that tries to govern a distant dependency—inhabited by a very different people—through a branch of its own executive will almost inevitably fail.

The only arrangement with a real chance of tolerable success is government through a delegated body of relatively permanent character, while giving the shifting administration at home only:

  • a right of inspection, and
  • a veto—what you might call a negative voice—over major changes.

India once had such an intermediate instrument of government. And it’s hard not to fear that both India and England will pay a heavy price for the short-sighted policy that abolished it.


Choosing The Least Bad Option

It’s pointless to object that a delegated body can’t have all the features of ideal government—especially not the complete, constant identity of interest with the governed that is difficult to secure even when people are somewhat capable of managing their own affairs.

Perfect government is not possible under these conditions. The choice is between imperfect arrangements. The real problem is design: build the governing body so that, within the constraints of the situation, it has as much incentive as possible to govern well—and as little incentive as possible to govern badly.

Those incentives are best approximated by an intermediate body. Compared with direct rule by the home executive, a delegated administration has at least this advantage: it has no practical “job” except to govern for the people in the dependency. It has no competing domestic agenda to serve.

Its ability to profit from misgovernment can be reduced to almost nothing—as it eventually was under the last constitution of the East India Company—and it can be kept free from the bias of any particular individual or class interest at home.

And when the home government and Parliament, in their ultimate reserved power, are swayed by partial pressures, this intermediate body becomes the dependency’s steady advocate and defender before the imperial tribunal.

The “intermediate body,” in the ordinary run of things, ends up being made mostly of people who actually know this corner of government. They learned the job on the ground, in the place itself, and then spent their working lives running it.

That matters, because when you have that kind of expertise—and you’re not constantly at risk of being fired because politics back home shifted—you start to tie your reputation to the work. You want your administration to succeed, and you have a steady, long-term stake in the prosperity of the country you’re administering. A cabinet minister in a representative system rarely has that kind of sustained investment in any country other than the one he serves at home.

And when this intermediate body gets to choose the officials who do the day-to-day work locally, it shields those appointments from the worst habits of party politics: backroom deals, parliamentary “jobbing,” and the temptation to use public jobs as a currency—paying off supporters or buying off enemies. Even politicians with average levels of honesty often feel those pressures more strongly than they feel a clean duty to appoint the most qualified person.

Keeping this class of appointments as far as possible out of that mess is more important than preventing almost any other abuse in the state. Here’s why: in most domestic departments, even if you appoint someone mediocre, the general public opinion and the surrounding institutions push him toward a roughly acceptable course. But in a dependency—especially one where the population is not yet in a position to control government itself—the quality of rule depends overwhelmingly on the moral and intellectual character of the individual administrators. There isn’t a strong corrective force beneath them.

In a country like India, you can’t repeat this too often: everything hinges on the personal qualities and capabilities of government agents. That isn’t a minor detail—it’s the central rule of Indian administration. The moment people start believing that you can fill offices of trust for mere convenience—something already scandalous in England—and “get away with it” in India, that will mark the beginning of the decline of British power there.

Even if your intention is sincere—you really do mean to pick the best person—you still can’t rely on luck to produce a steady supply of capable administrators. The system has to be built to create them.

Until now, it largely has. And because it has, British rule in India has lasted—and has brought steady, if not always rapid, improvements in prosperity and administration.

What’s striking, then, is the bitterness now aimed at this system, and the eagerness to tear it down—as if educating and training public officers for their work were some outrageous scheme, an intolerable interference with the “rights” of ignorance and inexperience.

In reality, there’s a quiet alliance at work:

  • People at home who want to place well-connected friends and relatives into top Indian posts.
  • People already in India who want promotions straight from an indigo factory or a lawyer’s office into roles like administering justice or setting the government payments owed by millions.

They rail against what they call the Civil Service “monopoly.” But it’s the same kind of “monopoly” as the legal profession’s effective hold on judgeships: it’s a way of ensuring that specialized positions go to people trained for them. Abolishing it would be like letting anyone become a judge at Westminster Hall because their friends swear they’ve skimmed a bit of Blackstone.

If Britain ever adopted the practice of sending men out—or encouraging men to go out—into high office without learning the work by moving through the lower levels first, the most important posts would quickly become prizes for cousins, clients, and adventurers. These would be people with no professional bond to the country or its administration, no tested competence, and one overriding plan: make money fast, then go home.

The real protection is that administrators are sent out young, not as instant rulers but as candidates. They start at the bottom, and only rise if, after a suitable period, they prove themselves qualified.

The East India Company’s old system had one serious flaw. It tried hard to find the best men for the highest posts—but if an officer stayed in the service long enough, promotion would usually arrive eventually, in one form or another, for the least competent as well as for the most capable.

To be fair, even the weaker members of that corps were still men who had been trained for their duties and had performed them for years, at minimum without disgrace, under supervision. That softened the harm. Still, the harm was real. Someone who never becomes fit for more than an assistant’s work should remain an assistant for life, while younger men who can do more should be promoted over him.

With that one exception, there was little truly wrong with the old appointment system. In fact, it had already taken on the greatest improvement it could reasonably receive: selecting original candidates through competitive examination. That change did two important things:

  • It drew recruits from a higher level of ability and hard work.
  • It reduced personal favoritism, because—except by accident—there were no personal ties between candidates and the people who helped decide their appointments.

It isn’t unjust for officers selected and trained in this way to have exclusive eligibility for posts that demand specifically Indian knowledge and experience. In fact, if you open even a small side door into the higher appointments—letting people skip the lower ranks “just occasionally”—that door won’t stay small for long. Influential people will pound on it without pause, until it becomes impossible to close.

There should be only one exception: the very highest office. The Viceroy of British India should be chosen from the whole pool of Englishmen for his broad capacity to govern. If he has that, he can recognize, rely on, and direct the local expertise he doesn’t personally possess.

There are also good reasons why, except in unusual cases, the Viceroy should not come from the regular service. Every service develops some degree of class prejudice, and the supreme ruler should be as free of that as possible. Also, even very able men who have spent their lives in Asia are less likely to carry the most up-to-date European ideas about general statesmanship—ideas the chief ruler should bring with him and blend with the hard lessons of Indian administration.

And there’s another practical advantage: if the Viceroy comes from a different class, and especially if he is appointed by a different authority, he is less likely to have personal favorites whose careers he will quietly advance.

Under the old mixed arrangement between the Crown and the East India Company, this safeguard against biased patronage existed in rare strength. The top distributors of office—the Governor-General and the Governors—were appointed, in practice if not formally, by the Crown (that is, by the general government), not by the intermediate body. A great officer of the Crown was unlikely to have deep personal or political ties inside the local service, while the delegated body—full of people who had served in the country—often did.

That built-in guarantee of impartiality would be seriously weakened if the Indian civil service—even if its members still began young as mere candidates—came to be filled, in any significant proportion, from the same social class that supplies viceroys and governors. Even competitive examinations would no longer protect you enough. Examinations can keep out sheer ignorance and inability; they can force privileged youths to begin with the same basic educational preparation as everyone else. The very dullest son can’t be shoved into the Indian service quite as easily as a family can shove him into certain comfortable institutions at home.

But after entry, nothing would prevent favoritism.

Instead of being equally unknown to the person who controls their fate, part of the service would stand in close personal relationship to him, and an even larger part in close political relationship. Members of certain families, and of the broader web of influential connections, would rise faster than others. They would often be kept in posts they weren’t fit for, or placed into jobs where someone else was clearly better suited.

In short, the same forces that distort promotions in the army would begin distorting promotions in India. And only those—if such miracles of simplicity exist—who believe the army’s promotions are truly impartial would expect impartiality in India under those conditions.

This danger, I’m afraid, can’t be cured by any general measure available under the present system. Nothing now on offer provides security comparable to what the so-called double government once produced almost automatically.

One last irony runs through all of this. People often praise the English system at home because it developed organically—piecemeal, by practical fixes, not by rigid design. But that very quality has been a misfortune in India. The Indian machinery also grew by successive expedients, adapting tools originally built for other purposes. And because it wasn’t created to meet the felt needs of the country whose support it depended on—Britain—its benefits didn’t register clearly in the British public mind. To gain acceptance, it would have needed a tidy theoretical justification.

Unfortunately, it looked like it had none. The standard theories of government didn’t seem to fit it, because those theories were built for situations that differ from India’s in the most important ways.

Yet in government, as in other human endeavors, the principles that last are usually discovered the same way: by observing a particular case where familiar forces combine in a new, revealing pattern. British and American institutions have famously inspired many of the political theories that are now, generation by generation, reawakening political life across Europe—for better and for worse.

It has been the strange destiny of the East India Company’s government to point toward the true theory of how a “civilized” country governs a less-developed dependency—and then, after providing that lesson, to die.

It would be a bitter outcome if, two or three generations from now, that abstract lesson were the only remaining “benefit” of British rule in India—if posterity concluded that Britain stumbled into better arrangements than it ever would have designed on purpose, and then, the moment it began to reason about them, smashed them—letting real, ongoing improvements collapse because it didn’t understand the principles that made them possible.

I

This essay has one main job: to explain—as plainly as I can—the reasons behind a belief I’ve held since I was first old enough to have opinions about society and politics. And time hasn’t softened that belief. If anything, thinking about it and watching the world has only made it stronger.

Here it is: the basic principle that shapes today’s relationship between the sexes—the law-backed subordination of women to men—is wrong in itself. Worse, it’s now one of the biggest obstacles to human progress. It should be replaced with full equality: no special power or privilege for one side, and no legal handicap or exclusion for the other.

Just saying that out loud hints at how difficult the project is. But the difficulty isn’t that the reasons are vague or weak. The real problem is something else entirely: this is a subject wrapped up in powerful feelings. And whenever feelings are deeply invested, argument behaves strangely. If a belief rests on logic, then strong counterarguments can shake it. But if a belief rests on feeling, the more it loses in debate, the more its defenders insist there must be some deeper truth that the arguments “aren’t touching.” As long as the feeling remains, it keeps inventing new lines of defense—fresh arguments to patch whatever holes reason manages to punch in the old ones.

And the feelings tied to this question run especially deep. They’re among the most intense emotions people have for protecting long-standing institutions and customs. So it shouldn’t surprise us that, even as modern life has upended so many traditions, this one has been shaken less than almost any other. The cruel habits people cling to the longest aren’t necessarily the least cruel; they’re often just the most emotionally guarded.

That’s the burden anyone faces when challenging an almost universal belief. You don’t just have to be right—you have to be unusually capable, and unusually lucky, to be heard at all. And even if you do get a hearing, you’ll be judged by standards no one applies in ordinary disputes.

In most situations, we expect the burden of proof to sit with the person making the accusation or pushing the claim. If someone is accused of murder, the accuser must prove guilt; the accused doesn’t have to prove innocence. If people disagree about some historical event that doesn’t stir strong emotions—say, whether the Siege of Troy really happened—those who claim it did happen are expected to bring evidence, and the skeptics only need to show that the evidence isn’t convincing.

Likewise, in practical politics and law, we usually assume the burden lies with the people who want to restrict freedom. If you argue for prohibitions, limitations on personal action, or special disabilities and unequal privileges—treating one person or group worse than another—you’re expected to justify it. The default presumption is:

  • Freedom should be the starting point.
  • Impartiality should be the rule.
  • Restraints are acceptable only when the general good clearly requires them.
  • Unequal treatment is acceptable only when there are strong, positive reasons—reasons of justice or public policy—that actually demand it.

But anyone arguing for equality between the sexes doesn’t get the benefit of those ordinary rules. I can point out—perfectly reasonably—that the people who claim “men have the right to command and women must obey,” or “men are fit to govern and women are unfit,” are the ones making the affirmative claim and should have to prove it. I can also point out—again, reasonably—that if someone denies women a freedom or privilege rightly given to men, they’re fighting two presumptions at once: against freedom and in favor of unfair partiality. In any other case, we’d demand extremely strong proof, and if doubts remained, we’d rule against restriction and inequality.

But on this subject, those arguments won’t be accepted. To have any chance of persuading people, I’m expected to do something absurd: not only answer everything that has ever been said against my view, but also anticipate everything that could be said—supplying my opponents with arguments they haven’t thought of yet, and then refuting those too. And on top of that, I’m expected to offer airtight, “positive” proof for what is treated as a negative: the claim that there is no legitimate basis for male authority over women.

Even if I managed this miracle—if I answered every argument on their side, and they still stood there with a pile of unanswered objections against them and none left for themselves—I would still be told I’d done little. Why? Because a practice backed by universal custom and overwhelming popular sentiment is assumed to carry a presumption stronger than anything reason can produce—except, perhaps, in a small number of unusually independent minds.

I’m not listing these obstacles to complain. Complaining would be pointless, and the obstacles come with the territory: you’re trying to reach people’s minds while their feelings—and their everyday habits—are actively resisting you. Most people’s reasoning powers aren’t cultivated enough for them to feel confident abandoning principles they grew up with, principles woven into the existing order of society, just because those principles can’t withstand the first serious argument aimed at them. So I don’t blame them for distrusting argument. I blame them for trusting custom and public feeling far too much.

In fact, one of the defining prejudices of the nineteenth-century backlash against the eighteenth century is that we’ve started treating the unreasoning parts of human nature as if they were infallible. Where an earlier age may have overpraised Reason, ours has begun to worship Instinct instead. And we slap the label “instinct” on anything we feel strongly but can’t justify.

This worship is not an improvement; it’s more degrading, and it props up many of the most harmful delusions of our time. It will likely keep its hold until better psychology exposes what’s really going on—until we learn to recognize how much of what people bow down to as “Nature’s intention” or “God’s ordinance” is actually rooted in far less noble sources.

As for my subject, I’m willing to accept the unfair terms prejudice sets for me. Fine: let established custom and general feeling count as decisive against me—unless it can be shown that, across the ages, that custom and feeling exist for reasons other than their truth or goodness, and that their force comes more from the worse parts of human nature than the better. In other words, I’m willing to lose unless I can show the judge has been bribed. And that concession sounds bigger than it is, because proving that is actually the easiest part of what I have to do.

Sometimes, widespread practice really is evidence for something. A custom can be a strong sign that it is—or once was—useful for good ends, when it was adopted deliberately as a means to those ends and supported by experience about what works best.

So imagine, for a moment, that male authority over women had been established after a serious and conscientious comparison of different ways to organize society. Imagine that people had actually tried alternatives—women ruling men, full equality, or mixed and shared arrangements—and then, on the evidence of experience, decided that the best route to everyone’s happiness was a system where women have no public role at all and, in private life, are legally required to obey the man to whom they have tied their fate. If that were the history, the near-universal adoption of the arrangement could be taken as some evidence that, at the time it was chosen, it seemed the best—though even then, the conditions that once recommended it might later have disappeared, as so often happens with ancient social arrangements.

But reality is the opposite.

First, the modern opinion defending this system—where the physically weaker sex is placed entirely under the stronger—rests on theory alone. No one has ever seriously tried other arrangements, so “experience,” in the usual sense people pit against theory, can’t honestly claim to have delivered any verdict.

Second, the system wasn’t adopted because anyone sat down, weighed justice and social benefit, and chose it. It wasn’t the product of foresight, moral theory, or any idea about what would best serve humanity. It arose from a blunt fact of early human life: from the earliest dim beginnings of society, every woman—because men valued her, and because she generally had less physical strength—was in bondage to some man.

Legal systems don’t start by inventing relationships from scratch. They begin by recognizing whatever relationships already exist in brute reality, and then turning those facts into rights. They replace irregular struggles of strength with organized, public enforcement. What had been physical compulsion becomes legal obligation.

That’s how slavery in general became “law.” What began as raw force between master and slave was gradually regularized into a kind of pact among the masters: they bound themselves together for mutual protection, and by their combined strength guaranteed each man’s possessions—including his slaves. And in early times, slavery wasn’t confined to women. A large majority of men were slaves too, alongside virtually all women.

For many centuries—even in periods of high culture—hardly any thinker dared to question whether slavery was right, or whether it was socially “necessary.” Over time, some did. With the broader progress of society, the slavery of men has, in Christian Europe at least (though in one country only within the last few years), finally been abolished. The slavery of women has been softened into a milder dependence.

But let’s not misunderstand what that means. The dependence women live under today isn’t some new institution built from fresh ideas about justice and social good. It’s the original slavery, still alive, simply moderated over time by the same forces that have made manners gentler and pushed human relationships closer to justice and humanity. It still carries the stain of its brutal origin. So the mere fact that it exists gives it no moral presumption in its favor.

At most, any presumption would have to come from this: the arrangement has survived into the present even as so many other practices with the same ugly ancestry have been abolished. And that is precisely why it sounds so shocking to many ears to hear someone say that the unequal rights of men and women come down, in the end, to nothing more than the law of the strongest.

That this claim strikes people as a paradox is, in one way, a compliment to civilization. In the most advanced nations, at least, we now live in a world where people say they reject the rule of brute force as the guiding principle of human affairs. No one openly defends it, and in most human relationships, no one is even allowed to practice it. When someone does manage to impose their will by sheer power, it usually happens behind a respectable pretext—something that makes it look like the action serves society rather than mere domination.

Because that’s the public story we tell ourselves, many people assume the reign of force is over. They assume that anything still thriving in a modern age must be sustained by a sound sense of its usefulness, its fit with human nature, and its contribution to the common good.

They underestimate how stubborn institutions are when they place “right” on the side of “might.” They don’t see how fiercely such systems are defended, or how easily the good impulses as well as the bad impulses of the powerful get tangled up with keeping power. They don’t recognize how slowly these institutions give way—usually one at a time, weakest first, beginning with those least woven into daily life. And they overlook a brutal historical pattern: people who gained legal power because they first had physical power almost never lose it until the physical balance shifts to the other side.

That shift hasn’t happened between men and women. And given the special features of this case, it was predictable from the start that this branch of “right founded on might”—even if it was softened earlier than some other brutal institutions—would be the last to disappear. It was almost inevitable that this one force-based relationship would survive long after many others had been replaced by equal justice, standing as a nearly solitary exception within otherwise more civilized laws and customs. And as long as it doesn’t openly advertise its origin, and as long as discussion hasn’t exposed its true character, it doesn’t feel to most people like an insult to modern civilization—any more than domestic slavery felt inconsistent to the Greeks with their self-image as a free people.

The deeper truth is that people today—and for the last few generations—have mostly lost any practical sense of what early humanity was actually like. Only those who have studied history carefully, or spent time among societies that preserve ways of life resembling the distant past, can form a real picture of it.

Most people don’t realize how completely, in earlier ages, superior strength ruled life. It wasn’t even hidden. It was openly accepted—not with cynicism or shame, because those words assume the presence of a moral discomfort that barely existed then, except perhaps in rare figures like saints or philosophers. History offers a harsh lesson: the amount of respect given to any class’s life, property, and happiness was measured almost exactly by how much force they could bring to bear. Anyone who resisted an armed authority—no matter how outrageous the provocation—found not only force but “law,” and the whole idea of social obligation, lined up against them. In the eyes of those they resisted, they weren’t just criminals; they were the worst kind of criminals, deserving the cruellest punishment human beings could deliver.

The first faint beginning of an obligation—of a superior recognizing any right in an inferior—appeared when it became convenient for the superior to make a promise. Even then, for many centuries, promises were casually broken at the smallest temptation, even when sealed by solemn oaths. Still, it’s likely that—except among people with especially low morals—this was rarely done without at least a prick of conscience.

The earliest republics were often built on some kind of mutual agreement—or at least on an alliance of people who weren’t wildly unequal in strength. That mattered because it created, for the first time, a little protected zone of human life where the default rule wasn’t simply might makes right. Inside that narrow space, something like law—something other than brute force—could take over.

To be clear, force didn’t disappear. It still governed:

  • the relationship between free citizens and their slaves,
  • the relationship between the state and its subjects (except where explicit agreements limited the state), and
  • the relationship between one independent community and another.

Still, even banning the “law of force” from a small corner of social life was a turning point. It began what you could call a slow renovation of human nature, because it gave rise to moral feelings that turned out to be valuable even in the most practical, self-interested sense. Once those feelings existed, the hard part was no longer inventing them—it was expanding their reach.

And here’s a striking irony: although slaves weren’t considered part of the political community, it was in free states that people first began to sense that slaves had rights as human beings. The Stoics were, as far as I know (with the Jewish law as a partial exception), the first to teach plainly that morality includes obligations to one’s slaves.

After Christianity rose to dominance, hardly anyone could be entirely ignorant of that idea, at least in theory. And once the Catholic Church became a major institution, there were always some people willing to defend it out loud. But turning that belief into reality—actually restraining how people treated those beneath them—was one of the hardest tasks Christianity ever took on.

For over a thousand years the Church fought that battle with barely any visible success. It wasn’t because it lacked influence. Its power over minds was enormous. It could persuade kings and nobles to give up treasured property to enrich the Church. It could persuade thousands—healthy, young, well-positioned people—to lock themselves into convents and monasteries for lives of poverty, fasting, and prayer. It could mobilize hundreds of thousands to cross continents and seas to die for the Holy Sepulchre. It could even force kings to abandon wives they loved, because the Church declared the marriage invalid on the grounds of kinship.

The Church could do all that—and yet it couldn’t get people to stop fighting each other, or to stop crushing the serfs, or (when they got the chance) to stop bullying the town-dwellers. It couldn’t make them give up either kind of force:

  • force in motion (war and violence), and
  • force in victory (the use of power to dominate those already beaten down).

People didn’t abandon those habits because they became gentler. They did it when stronger force left them no choice. Private warfare faded only as kings grew powerful enough to end fighting among nobles—except when kings fought other kings, or rivals fought for the crown. The worst noble tyranny over peasants and townspeople was checked only when new counterweights appeared: fortified towns filled with a wealthy, armed bourgeoisie, and a plebeian infantry that proved more effective on the battlefield than undisciplined chivalric cavalry.

Even then, the oppression didn’t end the moment the oppressed gained leverage. It continued long after—long after those below had enough power to take dramatic revenge when opportunities arose. On the Continent, much of it lasted until the French Revolution. England curbed it earlier, largely because the democratic classes organized sooner and more effectively, and pressed their advantage into equal laws and free national institutions.

Custom outlives the violence that created it

People today often don’t realize how completely, through most of human history, force was openly treated as the normal rule of behavior—while anything like moral restraint was the rare exception, confined to special relationships. And people are just as likely to forget something equally important: institutions and customs born from force can survive for centuries into eras whose general values would never have allowed those institutions to be created in the first place.

Take the most extreme example: slavery. Less than forty years ago, English law still allowed people to own human beings as saleable property. Even within this century, people could kidnap others, ship them away, and work them literally to death. This wasn’t some barbaric outlier—it existed in “civilized,” Christian England within the living memory of people still alive at the time of writing. And in half of Anglo-Saxon America just three or four years earlier, slavery wasn’t merely legal; the slave trade itself flourished, including the deliberate “breeding” of enslaved people for sale between slave states.

This case is so shocking that it almost makes other examples unnecessary. It’s condemned even by many who tolerate almost every other kind of arbitrary power, because its ugliness is so plain to anyone trying to look without prejudice.

Yet notice something else: there was stronger public disgust toward slavery—and at least in England, less emotional or material support for it—than for almost any other long-running abuse of force. Why? Because the motive was bare and undisguised profit, and the people who profited were a small fraction of the country. For everyone else, the spontaneous feeling—when not dulled by self-interest—was close to pure abhorrence.

Or consider the long life of absolute monarchy. In England, at the time of writing, almost everyone saw military despotism as a straightforward case of rule by force, with no legitimate origin. And yet across Europe (England mostly excepted) it either still existed, had only recently ended, or still had a serious party in its favor—among all social ranks, especially those with status and influence.

That’s the grip of an established system: it can persist even when it isn’t universally loved; even when history offers famous counterexamples; even when the most admired and prosperous societies have often been the ones organized on opposite principles. And notice the imbalance here. In monarchy, the person with the undue power is literally one individual, while those subjected to it are everyone else. The yoke is naturally humiliating to nearly all—except the monarch, and perhaps the heir.

Why male power over women lasts longer than other injustices

Now compare all of that to the power of men over women. I’m not yet arguing here whether that power is justified. I’m making a different point: even if it were unjustified, it has structural reasons to be far more durable than slavery, despotism, or aristocratic privilege—systems that nevertheless survived into the modern era.

In other unjust dominations, the personal benefits of power are concentrated in a narrow class. With male power over women, the “profits” of power—both pride and practical advantage—aren’t confined to a few leaders. They’re shared, in some degree, by the entire male sex.

And unlike political struggles where most supporters care only in the abstract (and where the real private stakes often matter mainly to the leaders), this power reaches directly into everyday life. It sits at the center of home and family. Every male head of a household, and every boy expecting to become one, is personally involved. The farm laborer claims his portion of this authority as much as the aristocrat does.

It also taps into the strongest form of the desire for power: power over the people closest to you. If someone wants to rule, they usually want it most over those they live with, depend on, and share daily concerns with—because those are the people whose independence is most likely to clash with their preferences.

So if societies are slow and reluctant to dismantle other powers that clearly rest on force and have less support behind them, they will be even slower to dismantle this one—even if it rests on nothing better.

There’s another factor, even more decisive: the holders of this power have unique advantages in preventing resistance. Each person in the subjected class lives under the direct eye—and in a sense, in the hands—of one of the “masters,” in closer intimacy than with any fellow-subject. She has little ability to join forces with others, no local strength to overpower him, and, on the other side, powerful incentives to win his favor and avoid provoking him.

In political struggles, we all know how often reformers are bought off with rewards or terrified into silence. For women, each individual lives under a chronic mixture of bribery and intimidation. To resist openly, many leaders—and even more followers—would have to give up most of what makes their personal lives bearable, not just the “extras,” but often the basic comforts and protections of their situation.

If any system of privilege ever fastened its yoke tightly onto those beneath it, this one has. Again, I haven’t yet proved it is wrong. But anyone thinking seriously can see that if it is wrong, it was almost guaranteed to outlast other kinds of unjust authority. And since some of the grossest of those other injustices still exist in many “civilized” countries, and have only recently ended in others, it would be surprising if the deepest-rooted domination of all had already been visibly shaken everywhere. In that sense, the surprising thing is not how resilient it is, but how many serious, influential protests against it have existed at all.

“Natural” usually means “customary”

Some people will object that you can’t fairly compare male rule to other injustices because those are arbitrary usurpations, while male rule is “natural.” But has any domination ever failed to look natural to the people who benefited from it?

There was a time when dividing humanity into a small class of masters and a huge class of slaves seemed not merely acceptable, but the only natural arrangement—even to highly educated minds. Aristotle, a giant in the history of thought, believed it without hesitation. He argued from premises that sound very familiar in modern defenses of male dominance: that humans come in different “natures,” some fit for freedom and others fit for servitude. In his case, he claimed Greeks were naturally free, while certain “barbarian” peoples—Thracians and Asians—were naturally slave-like.

But we don’t even need to go back to Aristotle. Didn’t slaveholders in the southern United States defend slavery with the same fierce certainty people bring to theories that excuse their passions and validate their interests? Didn’t they insist, with religious intensity, that white dominion over Black people was “natural,” that Black people were naturally unfit for freedom and destined for slavery—some even claiming that free manual labor anywhere was an unnatural disorder?

Theorists of absolute monarchy have likewise long insisted that monarchy is the only natural government: an outgrowth of the patriarchal family, the “primitive” social form, modeled on paternal authority, which they treat as older than society itself and therefore the most natural authority of all. And if someone had no argument beyond raw conquest, even the law of force itself has always seemed “natural” to those who relied on it. Conquering peoples routinely call it Nature’s decree that the conquered must obey. Or, put more prettily: the weaker and less warlike should submit to the brave and manly.

A quick look at medieval life shows how “natural” feudal domination felt to nobles—and how “unnatural” it seemed to imagine an inferior demanding equality or authority. It scarcely felt less unnatural to the subordinate class. Even when freed serfs and burgesses fought hardest, they rarely claimed a share of rule. They demanded limits on oppression, not participation in authority.

That tells you something important: “unnatural” usually just means unfamiliar. What’s customary feels natural. Since the subordination of women to men has been nearly universal, any deviation naturally strikes many people as unnatural. But how much that feeling depends on custom becomes obvious the moment you look around the world.

People in distant societies, on first hearing about England, are often astonished to learn it is ruled by a queen; it sounds so “unnatural” to them as to be almost unbelievable. To English people, it doesn’t feel unnatural at all, because they’re used to it. Yet many of them do find it unnatural for women to be soldiers or members of Parliament.

In the feudal world, by contrast, war and politics didn’t seem unnatural for women—because they weren’t unheard of. It seemed natural that high-born women should be “manly” in character, inferior only in physical strength to their fathers and husbands. Among the Greeks, women’s independence seemed less unbelievable than among other ancient peoples, partly because of the stories of the Amazons (which they took as history), and partly because Spartan women offered a partial real-world example. Spartan women were legally subordinate like other Greek women, but in practice they were freer; trained in physical exercise like men, they provided vivid evidence that women weren’t “naturally” incapable. It’s hard not to suspect that Sparta helped inspire Plato’s arguments for the social and political equality of the sexes.

“Women consent” isn’t the whole story

“But,” someone will say, “male rule isn’t force. Women accept it voluntarily. They don’t complain. They consent.”

First: many women do not accept it. Ever since women have been able to make their views public through writing—the main kind of publicity society has allowed them—more and more have recorded protests against their condition. More recently, many thousands, led by some of the most publicly prominent women of their time, have petitioned Parliament for the vote. The push for women to receive a serious education—in the same subjects and with the same rigor as men—has grown stronger and looks likely to succeed. And the demand for entry into professions and occupations long closed to women becomes more urgent each year.

In this country there may not be, as in the United States, regular conventions and a formally organized “Rights of Women” party, but there is a large and active society—organized and run by women—focused on the more limited goal of winning the political franchise. And this isn’t confined to England and America. France, Italy, Switzerland, and Russia also show women beginning, in one way or another, to protest collectively against the disabilities imposed on them.

How many more women privately want the same changes is impossible to know. But there are plenty of signs that many would want them, if they weren’t trained so relentlessly to crush such wishes as “improper” for their sex.

Second: no oppressed class ever demands full freedom all at once. When Simon de Montfort first called representatives of the commons to Parliament, did any of them imagine demanding the power to make and unmake ministries, or to dictate national policy to the king? Not remotely. Those ambitions already belonged to the nobility. The commons asked for far less: protection from arbitrary taxation and from the crude, personal abuses of royal officials.

There’s a general political pattern here: when a power has ancient roots, those living under it almost never begin by attacking the power itself. They begin by complaining about how harshly it is used. That’s exactly what we see with women. There are always women who complain of mistreatment by their husbands. There would be far more, if speaking up weren’t often the surest way to trigger a repeat—and an escalation—of the abuse.

That is why every attempt to keep the power but merely restrain its worst uses tends to fail. In no other case—except that of a child—does the law take someone who has been legally proven to have been harmed and place them back under the physical power of the person who harmed them.

Accordingly, even when a husband’s violence is extreme and goes on for years, wives almost never feel safe using the laws that supposedly protect them. And if, in a flash of anger they can’t hold back—or because neighbors step in—they do go to the authorities, they usually spend the rest of the process trying to reveal as little as possible and to plead for their abuser to be spared the punishment he plainly deserves.

Why collective rebellion is so unlikely

Social pressures and basic human realities line up to make one thing predictable: women, as a group, are unlikely to rise up against men’s power. Women aren’t positioned like other oppressed classes, because their “masters” demand more than labor. Men don’t just want women’s obedience; they want women’s feelings.

Most men—except the truly brutal—don’t want to live with someone who obeys only because she’s forced. They want a partner who seems willing. Not merely a servant, but a favorite. And to get that, societies have spent enormous energy not only controlling women’s actions, but shaping their minds.

With other forms of slavery, rulers mainly rely on fear—fear of the master, or fear reinforced by religion. But with women, simple fear wasn’t enough. So education, morality, and sentiment were all recruited to do the deeper work.

From early childhood, girls are taught that the “ideal” feminine character is the mirror-image of the masculine one. Men are encouraged toward self-direction and self-control; women are trained toward submission and letting others steer their lives. The moral teachings insist it’s women’s duty, and the popular romantic stories insist it’s women’s nature:

  • to live for others
  • to erase themselves
  • to have no real life except through their relationships

And even those relationships are tightly fenced. “Affection,” in practice, means the only attachments women are permitted to center their lives on: the men they’re connected to, and the children who bind them to a man with an extra, unbreakable tie.

Now add three forces together:

  • the natural attraction between the sexes
  • the wife’s near-total dependence on her husband (every comfort, privilege, and pleasure either comes from him or hangs on his permission)
  • the fact that social standing, respect, and ambition are usually reachable for her only through him

Put those in the same room and it would be shocking if “being attractive to men” didn’t become the guiding star of girls’ upbringing and character. Once that lever existed, men—out of plain self-interest—used it hard. They trained women to believe that meekness, submissiveness, and the surrender of their own will into a man’s hands aren’t just virtues, but major parts of what makes a woman desirable.

Ask yourself: would any other oppressive system that humans have managed to shake off have lasted this long if it had possessed this same psychological machinery? If every young plebeian had been raised to make the personal favor of a patrician the great prize of life; if every young serf had been taught to chase the affection of a lord; if living in his household and winning his personal warmth had been held out as the reward—especially for the brightest and most ambitious—and if, once the prize was won, they were sealed off from any interests beyond him, from any feelings or desires except those he shared or approved… would those classes have blurred together over time?

Or would “plebeian” and “patrician,” “serf” and “lord,” still feel like permanent categories—just as “man” and “woman” are treated as permanent categories now? And wouldn’t almost everyone—except a rare thinker here and there—take that division as a basic, unchangeable fact of human nature?

Why “custom” proves nothing here

All of that is more than enough to show why tradition—even universal tradition—doesn’t deserve the usual respect in this case. Custom creates no real presumption for arrangements that place women under men socially and politically.

In fact, we can say something stronger. If you look at history and at the direction modern societies have been moving, you don’t find support for a permanent inequality of rights. You find the opposite: a steady pressure against it. Everything about the broad arc of human improvement suggests that this old leftover doesn’t fit the future and will have to fade out.

So what most clearly defines the modern world—what truly separates modern institutions and social ideas from those of long ago?

It’s this: people are no longer “born to” a fixed place and chained there by an unbreakable link. They’re increasingly free to use their abilities, plus whatever opportunities come their way, to pursue the kind of life that seems best to them.

Older societies were built on a different blueprint. Almost everyone was born into a set rank and kept there—often by law, and always by barriers that made climbing out nearly impossible. As surely as some people are born with different skin colors, in those societies some were born slaves and others free; some citizens and others outsiders; some patricians and others plebeians; some nobles and others commoners. A slave or serf couldn’t simply make himself free. Freedom, if it came at all, came only by the master’s decision.

Even among the privileged, birth ruled. In many European countries, commoners couldn’t be ennobled until late in the Middle Ages, and that change came largely from the growing power of kings. Within noble families, the eldest son was treated as the automatic heir, and it took a long time before people fully accepted that a father could disinherit him.

Work was locked down, too. Among the laboring classes, only people born into a guild—or admitted by its members—could legally practice a trade in a given town. And even when you were allowed to work, you were often required to do it only in officially approved ways. People have literally been punished for improving a manufacturing process without permission.

Modern Europe—especially the parts most changed by “progress”—has moved toward the opposite view. Governments generally no longer claim the right to decide who may perform a given kind of work, or which methods are legal. Those choices are left to individuals. Even apprenticeship requirements have been repealed in this country, on the very practical reasoning that when apprenticeship is genuinely necessary, the need itself will make people adopt it.

Behind this shift is a deep reversal in philosophy. The old assumption was that individuals, left to themselves, would almost certainly choose badly, so the “wisest” authorities should decide as much as possible for them. The modern conviction—earned through centuries of costly experiments—is that whenever people are directly interested in the outcome, things go best when decisions are left to their own judgment. Authority should intervene only to protect other people’s rights. Beyond that, regulation tends to do harm.

This idea didn’t win quickly. It emerged slowly, after almost every imaginable version of the opposite approach had been tried—and had failed disastrously. And now, at least in industry, it’s broadly accepted in the most advanced countries.

Notice what this does not mean. It doesn’t mean every method is equally good, or every person is equally suited to every job. It means that freedom of choice is how better methods actually get adopted, and how work tends to end up in the hands of the people most capable of doing it. Nobody thinks we need a law saying only strong-armed men may be blacksmiths. Open competition does the sorting: people who aren’t built for that work find other work they can do better—and earn more from.

Once you accept that principle, it becomes a clear overreach for authorities to declare in advance—on the basis of broad assumptions—that certain people are unfit for certain roles. Even if a general assumption were true in most cases (and it often isn’t), it will never be true in all cases. There will always be exceptions. And when there are exceptions, barring those individuals is both an injustice to them and a loss to society, because it blocks talent from being used where it could benefit everyone.

Meanwhile, when someone truly is unfit, ordinary human motives usually handle the problem. People tend not to keep pursuing careers where they fail, earn less, and get beaten out by competitors.

So the question becomes stark. If this basic principle of social and economic life is wrong—if individuals, with advice from those who know them, are not better judges of their abilities and calling than the law and the state—then we should drop the whole modern idea and return to the old world of permissions, prohibitions, and fixed ranks.

But if the principle is right, then we should live as though we believe it. We should not decree that being born a girl rather than a boy—any more than being born Black rather than white, or a commoner rather than a noble—decides a person’s entire life; locks them out of higher social positions; and blocks them from all but a few “respectable” occupations.

Even if we granted the strongest claim people make—that men are generally more suited to the functions now reserved for them—the argument against legal exclusion still holds. It’s the same logic that argues against adding restrictive qualifications for members of Parliament. If, even once in twelve years, a rule of eligibility excludes the right person, that’s a real loss. And excluding thousands of wrong people gains you nothing, because if voters are inclined to choose poorly, there will never be a shortage of unfit candidates anyway.

In any demanding, important role, truly capable people are always fewer than the need. Shrinking the pool makes it less likely society will find them, without reliably protecting society from incompetence.

The modern anomaly: women (and royalty)

In the more “improved” countries of the day, women’s legal disabilities are the only case—with one exception—where law and institutions look at a person at birth and declare: you may never compete for certain things, no matter what you do.

The one exception is royalty. People are still born to thrones. No one outside the ruling family can ever occupy them, and even within the family, the throne is reached only by inheritance.

But everyone recognizes royalty as an exception—an oddity inside the modern world, sharply at odds with its usual principles, defended only by special practical arguments. And even then, free nations try to preserve the spirit of modern competition in practice: they limit the monarch’s real power, and the work of governing is done by a responsible minister who reaches office through political competition—competition from which adult male citizens are not legally barred.

Women’s disabilities, by contrast, come purely from being born female, and they stand alone in modern legislation. Apart from this—covering fully half the human race—higher social functions aren’t closed by an unchangeable “birth fate” that no effort or changed circumstance can overcome. Even religious disabilities (and in England and much of Europe these have nearly vanished in practice) don’t permanently shut anyone out, because conversion removes them.

That makes women’s subordination look like a strange fossil embedded in an otherwise modern landscape: a single, stubborn leftover from an older world that has been discarded almost everywhere else—yet kept in the one area of the widest, most personal importance.

It’s as if a massive ancient monument—a dolmen, or a temple to Jupiter—sat on the site of St. Paul’s Cathedral and still drew daily worship, while the surrounding churches were visited only on special days. The mismatch is that dramatic: one social fact sitting in open contradiction to nearly everything around it, and in direct opposition to the progressive movement modern societies celebrate—precisely the movement that has already swept away other institutions built on the same kind of inherited inequality.

For anyone honestly watching the direction of human society, that contradiction should provoke real thought. At the very least, it creates an immediate presumption against the current arrangement—stronger than any presumption custom could create in its favor. It should make the issue, like the debate between republicanism and monarchy, a genuinely open question.

What a real discussion requires

At minimum, we should refuse to treat the matter as already decided by existing practice and popular opinion. It should be argued on its merits—as a question of justice and practical benefit—like any other social arrangement. The decision should rest on what a clear-eyed understanding of consequences shows to be best for humanity as a whole, without regard to sex.

And the discussion can’t be fake. It has to go down to foundations, not float on vague generalities.

For example, it won’t do to claim, in broad strokes, that “the experience of mankind” supports the existing system. Experience can’t choose between two options when humanity has only ever tried one. And if someone says equality is “only theory,” remember: inequality is theory too. The only thing direct experience proves is that humans can survive under inequality—and can reach the level of prosperity we see. But experience can’t tell us whether we would have gotten there faster, or climbed higher, under a different system.

What experience does suggest is something else: across history, improvements in society have almost always gone hand in hand with some rise in women’s social position. This pattern has led historians and philosophers to treat the condition of women—whether they are elevated or degraded—as one of the clearest measures of a civilization’s overall development.

Throughout the long period of human progress, women’s status has generally moved closer toward equality with men. That alone doesn’t logically prove the movement must end in complete equality. But it certainly gives a reason to suspect that it will.

“Nature” is not something we’ve actually tested

Finally, it doesn’t help to insist that the “nature” of the two sexes perfectly fits their current roles and makes those roles appropriate. On plain common sense, I deny that anyone can truly know the natural mental and moral differences between the sexes so long as we have only ever seen them in their present relationship to each other.

If humans had ever found societies of men without women, or women without men—or even a society of men and women in which women were not under men’s control—then we might have learned something solid about what differences, if any, are truly innate.

As things stand, what people call “women’s nature” is, to an extraordinary degree, something manufactured: the product of harsh repression in some directions and unnatural stimulation in others.

It’s hard to name any group of dependents whose character has been so thoroughly bent out of shape by dependence itself. Conquered peoples and enslaved races have often been crushed more brutally, yes—but whatever wasn’t ground down by force was usually left to grow on its own, and when it had any room to develop, it tended to develop by its own internal logic.

With women, society has done something different and, in a way, more insidious. It has run a kind of intensive, artificial “cultivation”—like forcing certain parts of a plant to flourish in a greenhouse—selecting and exaggerating some traits for the comfort and pleasure of men, while starving, freezing, or even burning off others. Then, after producing these results on purpose, men look at the outcome and lazily conclude, “See? That’s just how women naturally are.” They mistake their own handiwork for nature. They treat a tree they’ve trained with heat on one side and ice on the other as if it would collapse without the very machinery that warped it.


The biggest obstacle to clear thinking about life and social arrangements is how shockingly ignorant people are about what shapes human character. Whatever a group of people happens to be like right now, we assume they were born leaning that way—even when the most basic facts about their situation plainly explain why they turned out as they did.

You can see this reflex everywhere:

  • A tenant farmer deep in debt to a landlord isn’t energetic and enterprising, so people conclude “the Irish are naturally lazy.”
  • A constitution collapses when the authorities charged with defending it turn their weapons against it, so people decide “the French can’t handle free government.”
  • Greeks cheat Turks while Turks merely plunder Greeks, and suddenly someone declares “Turks are naturally more honest.”
  • Women are said to care about politics only when it touches their personal lives, so people assume “women just don’t care about the public good the way men do.”

But history—now understood far better than it used to be—teaches the opposite lesson. Human nature is extraordinarily sensitive to external conditions, and even the traits we like to call “universal” are wildly variable. Still, people often read history the way they travel: they mostly notice what they already expected to see. The people who learn the most from history are the ones who bring the most to it—curiosity, discipline, and a willingness to be corrected.


That’s why the question “What are the natural differences between the sexes?” is so difficult—and why it’s so reckless that almost everyone feels qualified to pontificate about it. In our current society, it’s actually impossible to get full, clean evidence. And yet the one workable method for getting even partial insight is the one most people ignore: a serious, analytical study of how circumstances shape character.

Here’s the key point: even if the moral and intellectual differences we think we see between men and women look deep and permanent, that alone doesn’t prove they’re innate. The only differences you could confidently call “natural” would be whatever remains after you subtract everything that can plausibly be traced to education, training, incentives, punishments, expectations, and the surrounding social environment. In other words, you’d be left with a residuum—the part that can’t possibly be explained as an artifact.

But to do that subtraction honestly, you’d need a deep science of character formation. And we don’t have it. In fact, for a subject this important, it has been studied astonishingly little. So at present, no one is entitled to strong, positive claims about what the “real” psychological differences between the sexes are. At best, we can offer conjectures—more or less plausible depending on how well they fit what psychology has managed to establish so far.


Even the simpler, supposedly “preliminary” question—what differences between the sexes exist right now, setting aside how they were produced—is still answered in a crude, incomplete way.

Doctors and psychologists have identified some differences in bodily constitution, and that matters. But most doctors aren’t psychologists, and when they speak about women’s mental characteristics, their observations often carry no more weight than the opinions of ordinary men.

And there’s a deeper problem: you can’t know anything final about women’s inner lives as long as the people best positioned to know them—women themselves—have had so little freedom to testify, and as long as what testimony exists is so heavily shaped by pressure, expectation, and fear.

It’s easy to think you “know” stupid people, because stupidity tends to look similar everywhere: you can predict a dull person’s opinions and feelings from whatever their circle repeats. But that approach fails for people whose beliefs and emotions genuinely come from their own minds. A man may have a passable sense of what women can do (or think he does), but he often has a poor grasp of what the women closest to him actually think and feel day to day.

And even “capabilities” are almost impossible to judge, because so many women’s abilities have never been called into full use—not even by themselves. What’s missing isn’t raw talent; it’s the opportunity that reveals and strengthens it.


Men often mistake a narrow slice of experience for broad understanding. A man may think he understands women because he has had romantic relationships with several—maybe many. If he’s observant and his experience includes depth as well as variety, he may learn something real about one department of a woman’s nature—an important one, no question. But outside that department, he is often unusually ignorant, because so much is carefully kept out of his view.

The most favorable case for a man trying to study a woman’s character is typically his wife. The opportunities are greater, and complete sympathy—though rare—is at least possible. In practice, much of whatever knowledge men have gained about women has come from this relationship.

But even there, most men have only had the chance to study one case. So it becomes almost comically easy to predict what a man’s wife is like from what he confidently proclaims about “women in general.” For even a single marriage to teach anything reliable, two things have to be true:

  • the woman must be worth knowing (in the sense of having an inner life and individuality that can be learned), and
  • the man must be both capable of judging and emotionally suited to her—so sympathetic that he can genuinely read her mind, or at least so safe a person that she has no reason to conceal it.

That combination is extraordinarily rare.

It’s common to see couples with total unity of feeling about external matters—money, family obligations, social life—yet with almost no access to each other’s inner worlds. They share a household but remain strangers in the private territory of thought.

Even when affection is real, authority on one side and subordination on the other block full trust. Nothing may be deliberately hidden, and yet much remains unspoken and unseen.

You can notice a similar pattern between parents and children. How often does a father, despite genuine love, fail to know essential parts of his son’s character—parts the son’s friends recognize instantly? The position of “looking up” to someone makes openness harder. The fear of losing standing in the other person’s eyes pushes people—even decent people—into an almost automatic habit of showing only their best side, or at least the side the authority figure prefers to see.

Real mutual knowledge is rare even among intimates. It usually happens only when the people involved are not just close, but equals.

How much stronger, then, are these distortions when one person is not merely under another’s authority, but has been taught that it is her duty to rank everything below his comfort and pleasure—and to let him see and feel from her only what pleases him?


All of this makes it hard for a man to gain thorough knowledge even of the one woman he most often has access to. Then add the obvious complications:

  • understanding one woman doesn’t mean understanding another;
  • studying many women of one social rank or one country doesn’t automatically translate to women of other ranks or places; and
  • even if it did, it would still only describe women of a single historical moment.

So yes: men’s knowledge of women—even as women have been and are, leaving aside what they could be—is shallow and miserable. And it will remain so until women themselves can speak fully and freely about their own experience.

That time hasn’t come yet. It will come only gradually.

It’s very recent that women have been educated well enough—or allowed socially—to address the public at all. Even now, very few dare to say anything that the men on whom their literary success depends are unwilling to hear. Remember how harshly, until quite recently, even male authors were treated for expressing unfamiliar opinions or “eccentric” feelings—and how that reaction hasn’t entirely disappeared. If that’s what men faced, imagine the obstacles for a woman raised to treat custom and public opinion as absolute law, trying to publish anything drawn from the depths of her own nature.

One of the greatest women writers felt it necessary to place, as a motto on her boldest work, the line: “A man can defy opinion; a woman must submit to it.” That tells you the temperature of the room.

Much of what women have written about women has been flattery aimed at men. For unmarried women, it has often been—consciously or not—a strategy to improve the odds of marriage. Some women go further and preach a servility so extreme that only the crudest men find it appealing. This kind of writing is less common than it used to be, but it hasn’t vanished.

The good news is that women writers are becoming more candid and more willing to state what they actually feel. The bad news is that, especially in this country, many of them are still so thoroughly shaped by artificial social expectations that their “real sentiments” often consist of a small amount of direct personal observation mixed with a large amount of inherited, absorbed associations. That will diminish over time, but it won’t disappear as long as social institutions deny women the same free development of originality that men take for granted.

Only when those institutions change will we be able to see, not just hear, what we truly need to know about women’s nature—and how the rest of society should be fitted to it.


I’ve lingered on the obstacles that keep men from knowing women’s true nature for a reason: in this area, as in many others, the illusion of abundance is one of the greatest causes of scarcity. People think they already understand, so they stop looking. And while that self-confidence lasts, there’s little hope of sane thinking about a subject where most men know almost nothing—and where, for now, it is impossible for any man (or even all men together) to have the kind of knowledge that would justify laying down the law to women about what is or isn’t their proper calling.

Fortunately, society doesn’t need that knowledge to act wisely.

By the principles modern society claims to live by, this question belongs to women themselves. It should be decided by their own experience and their own judgment. There is no way to find out what one person—or many people—can do except by letting them try. And there is no way for someone else to discover, on their behalf, what will make them happy or fulfilled.


One thing, though, we can be confident about: if something truly runs against women’s nature, then giving women freedom won’t make them do it. People often panic and rush to “help” nature along, as if nature might fail unless we police it. That anxiety is misplaced.

  • If women can’t do something by nature, it’s pointless to forbid it.
  • If women can do something but do it less well than men who compete with them, then competition itself will limit how many succeed—without any special barriers.

No one is demanding tariffs and subsidies to protect women from competition. The request is simply to remove the existing subsidies and protections that favor men.

And if women are naturally more drawn to some pursuits than others, you don’t need laws or social preaching to push them that way. Incentives will do it. The tasks society most needs women to do will offer the strongest reasons to do them. And, as the phrase implies, people are most wanted for the things they are most fit for—so that, overall, the combined abilities of women and men can be used where they produce the greatest total benefit.


People often claim that a woman’s natural calling is to be a wife and mother. I say “people claim” because, if you judge by actions—by the structure of society as it actually exists—you could easily conclude that many men believe the opposite.

Society is arranged as if men suspect that marriage and motherhood are, by nature, so unattractive to women that if women had any real alternatives—any other way to live, any other path that could plausibly seem desirable—there wouldn’t be enough women willing to accept the “natural” role at all.

If that is truly what men believe, it would be better for them to say it out loud. I would like to hear someone openly state the doctrine that is already implied in much writing on the topic:

“Society needs women to marry and bear children. They won’t do it unless forced. Therefore, we must force them.”

At least then the issue would be clearly defined. It would be the logic of the slaveholder, plain and undisguised:

“Cotton and sugar must be produced. White men can’t do the labor. Black people won’t do it for the wages we choose to pay. Therefore, they must be compelled.”

An even closer analogy is military impressment—the forced taking of sailors. The argument runs like this: a country must have sailors for defense; sailors often won’t voluntarily enlist; therefore the state must have the power to force them.

That reasoning has been used again and again, and it likely would still succeed—except for one fatal reply:

Pay sailors the honest value of their labor. Make service as worthwhile as other work, and you’ll have no more trouble recruiting than any other employer.

There is no logical answer to that except, “I refuse.” And because people today are generally ashamed—and, more importantly, not eager—to rob workers of their wages, forced recruitment is no longer defended.

The same reply applies to those who try to push women into marriage by shutting every other door. If they mean what they say, they must believe that men do not make the married condition desirable enough to women for women to choose it on its own merits. After all, you don’t offer someone a wonderful gift and then insist on Hobson’s choice: take it or get nothing.

And here, I think, is the real clue to why some men feel a deep hostility to women’s equal freedom. They aren’t truly afraid that women won’t marry—almost no one seriously fears that. What they fear is something else:

  • that women will insist on equal terms in marriage; and
  • that women with spirit and ability will choose almost any other honorable life over a marriage that requires them to hand themselves over to a master—and a master, at that, of all their earthly possessions.

And honestly, if this were an unavoidable consequence of marriage, the fear would be completely justified.

I can easily believe that very few women with real abilities—women who could build a life in any other way—would choose that kind of fate, unless they were swept up by an overwhelming rush of feeling that, for a while, makes everything else disappear. Because if there are other paths open to such a woman—other ways to hold a socially respectable place in the world—why would she voluntarily sign up for a life of legal subordination?

But here’s the ugly logic on the other side: if men insist that marriage law must function as a system of rule and obedience—basically, domestic despotism—then from a cold, strategic standpoint they’re doing the sensible thing by leaving women only Hobson’s choice: take the one option offered, or take nothing at all.

And if that’s truly the model—if marriage is meant to be domination—then we should stop pretending that women’s education was ever a good idea. In that world, everything modern society has done to loosen the mental shackles on women would have been a mistake from the start. Women shouldn’t have been taught literature. They shouldn’t have been encouraged to read—certainly not to write.

Because under the current setup, an educated woman isn’t just inconvenient; she’s a contradiction. She doesn’t fit the role the system assigns her, so she becomes a disruption.

If women are to be raised for this kind of marriage, then, the argument goes, it would have been more “consistent” to train them only in the skills of an odalisque—a decorative captive in a harem—or, at best, a domestic servant.

II

Let’s start where the problem shows its sharpest edge: the legal terms society attaches to marriage—especially for women.

For centuries, marriage has been treated as women’s assigned destination: the future they’re trained for, the goal they’re expected to chase, and, for most, the only “respectable” life plan on offer. You might think that if society insists on steering half the population into one lane, it would at least make that lane as safe and appealing as possible. Instead, society has often gotten what it wanted through coercion rather than persuasion—and in marriage, that coercive streak has lasted longer than almost anywhere else.

Historically, the story is blunt. Women were taken by force or sold, sometimes literally, by fathers arranging marriages as transactions. In Europe, not that long ago, a father could marry off his daughter however he pleased, regardless of what she wanted. The Church did at least demand a spoken “yes” at the ceremony, but in practice that “consent” could be nothing more than ritualized compliance. If a father insisted, a girl had almost no realistic way to refuse—unless she could escape into the protection of religion by choosing convent life.

Once married, the husband’s legal power over his wife was, by today’s standards, horrifying. In the ancient world (before Christianity), a man could claim even the power of life and death over her. For a long time he could cast her off at will, while she had no matching right to leave him. English law embedded this dominance in language and punishment: the husband was “lord” of the wife, and if a wife killed her husband, the law treated it as a kind of treason—“petty treason”—and punished it with exceptional cruelty, even burning.

Because many of these extremes have faded from everyday practice (often without being formally repealed until long after they became rare), people tell themselves the modern marriage contract is now fair—that “civilization” and “Christianity” restored women’s rightful place. But look at the legal structure beneath the sentiment, and a different picture appears: under the law, the wife is bound to the husband as a servant is bound to a master.

At the altar she vows lifelong obedience, and the law holds her to that vow. Moral theologians might argue she isn’t required to help him commit a crime, but the demanded obedience covers essentially everything else. In practice, she is not a fully independent legal actor:

  • She can’t perform legal acts except with his permission, at least implied.
  • She can’t truly own property for herself; anything that becomes hers—inheritance included—becomes his immediately.
  • Her legal identity is folded into his.

In one striking way, the common law put wives in a position worse than some systems put enslaved people. Under Roman law, a slave could have a peculium—a personal fund or property the law recognized to some extent as the slave’s own to use. A wife, by contrast, could have wealth “in her name” and still see it swallowed whole by her husband the moment it arrived.

Wealthy families tried to soften this by writing special marriage settlements—pin-money arrangements and other contracts designed to sidestep the default rules. Often it wasn’t men’s solidarity with other men driving this, but ordinary parental instinct: a father tends to prefer protecting his daughter over enriching a son-in-law who is, to him, a stranger. Through these settlements, the rich could sometimes keep a wife’s inherited property from being completely squandered by her husband.

But even then, the law didn’t reliably place that property under her control. The best these arrangements could usually do was lock the property away from both of them: the husband couldn’t waste it, but the wife—the rightful owner—often couldn’t freely use it either. And even the settlement most favorable to the wife, the one described as “to her separate use,” offered a chilling kind of protection. It prevented the husband from receiving the income directly instead of her; it required the money to pass through her hands first. But if he then took it from her by violence the moment she received it, the law offered no meaningful remedy: he could not be punished for taking it, and he could not be forced to return it. That is the fullest “protection” even a powerful nobleman could realistically secure for his daughter against her own husband.

In the vast majority of marriages, there is no settlement at all. In those cases, the absorption is total: rights, property, and freedom of action all collapse into the husband’s control. The law calls husband and wife “one person,” but the oneness runs only one way. The logic “what is hers is his” is enforced; the equal and opposite logic “what is his is hers” is not. The only time the law applies the unity against the man is to make him liable to outsiders for what she does—much the way a master is held responsible for the actions of his slaves or even his animals.

To be clear: this is about legal power, not the everyday kindness or cruelty of individual husbands. Many wives are not treated with the day-to-day brutality that slavery brings to mind. Still, in one crucial sense the wife’s legal subjection is even more complete. Most enslaved people are not under direct command every minute of the day; they may have set tasks, off-hours, and some personal space—narrow ones, but real boundaries on the master’s power. In Uncle Tom’s Cabin, “Uncle Tom” under his first owner has a cabin life of his own, a private sphere that many working people recognize: you go out, you work, you come home to something that is yours.

A wife, under the legal model, does not get that kind of off-duty boundary. And the most brutal difference concerns her own body. In Christian countries, even a female slave was generally recognized as having a moral right—indeed, a duty—to refuse sexual access to her master. The wife, however, is not granted that refusal. However cruel or degrading her husband may be—however much he despises her, however much he may make it his sport to torment her, however impossible it may be for her not to recoil from him—he can still claim, and the law effectively backs him in claiming, the most humiliating form of domination: forcing her to serve his sexual desires against her will.

Now add children to the picture. What rights does the mother have over the children she bears? Under the law, they are his children. He holds the legal authority over them, not her. She cannot act for them or in relation to them except as his delegate. Even after his death, she is not automatically their legal guardian unless he chooses to make her so in his will. He could even take the children from her and prevent her from seeing them or corresponding with them—until reforms began to limit this power.

And if she tries to escape? Legally, she has no clean exit. If she leaves, she leaves empty-handed: no children, no property, nothing that is “hers” in any meaningful sense. If he wants her back, he can compel her return through the courts, or through physical force, or he can simply seize whatever she manages to earn and whatever her family gives her.

The only shield is a formal legal separation granted by a court. That decree is what gives her the right to live apart without being dragged back into what becomes, in effect, custody under an enraged jailer. It is what lets her keep her own earnings without the fear that a man she hasn’t seen in decades could suddenly reappear, take everything, and leave her with nothing.

But for a long time, legal separation was available only at a cost that effectively reserved it for the upper classes. Even now, it is typically granted only in cases of desertion or extreme cruelty—and yet people complain that the courts grant it too readily. If a woman is denied every independent life path and told she must gamble her entire future on finding a tolerable “master,” it is a cruel additional punishment to tell her she may make that gamble only once.

Logically, if her fate depends on finding a good master, you might even argue she should be allowed to change masters repeatedly until she finds one who treats her decently. That doesn’t mean divorce and remarriage are necessarily the right policy—this passage isn’t trying to settle that question. The point is narrower: if you insist on forcing people into servitude, then at the very least the ability to choose and re-choose the person who holds power over them is the only—though still pitiful—relief you’ve left them. Deny even that, and the resemblance between wife and slave becomes painfully close.

In some slave systems, a person subjected to severe abuse could, under certain conditions, legally force an owner to sell them. In England, no level of mistreatment—unless it is combined with adultery—will free a wife from her tormentor.

None of this needs exaggeration. It is the legal framework, not a claim about every household. In fact, if married life actually matched what the law permits a husband to do, society would be unbearable—a kind of hell on earth. Thankfully, most men are restrained by ordinary human feeling, by self-interest, by conscience, and by affection. The bond many men feel toward their wives is often the strongest check on their worst impulses; and the bond to children usually reinforces that, rather than competing with it.

But defenders of the current institution point to these real-world softening effects and treat them as a justification. They say: look, things aren’t as bad as they could be, so the complaints must be petty—just grumbling about the inevitable “price” of a great social good. That logic is upside down. The fact that people often behave better than their legal power would allow is not an argument for keeping tyranny on the books. It is evidence that human nature can push back against rotten institutions—and that seeds of goodness can survive and spread even in poisoned soil.

And the excuses offered for despotism at home are the same excuses offered for despotism in politics. Not every absolute monarch is a cartoon villain who delights in suffering. Many kings have ruled without constantly torturing their people or stripping them to rags. Different despots vary in cruelty. But “not the worst possible tyrant” is not a defense of tyranny. France did not need a Caligula to justify a revolution; an ordinary absolutism, harmful enough in its structure and effects, was sufficient.

People also point to the intense affection that can exist between husband and wife and say this proves the arrangement is natural and good. But intense devotion can flourish under the worst systems. In ancient Greece and Rome, enslaved people sometimes endured torture and death rather than betray their masters. During the Roman civil wars, observers even remarked on how often wives and slaves stayed loyal while sons turned traitor. None of that prevented Romans from treating slaves with savage cruelty. Deep personal attachment is not proof of justice; it can be a survival response in a world where one person has power to crush another and chooses, sometimes, not to use it.

There’s a bitter irony here: the strongest gratitude human beings can feel often rises toward those who hold absolute power over them and merely refrain from exercising it to the full. You can see the same psychology in religion, when people’s devotion intensifies as they compare their own lives to the miseries suffered by others and interpret their relative safety as mercy.

Whenever someone defends slavery, political absolutism, or the absolutism of the “head of the family,” they ask you to judge the institution by its best examples. They show you the ideal tableau: wise authority exercised lovingly, willing obedience offered gratefully, dependents smiling under benevolent rule. If the argument were simply that good men exist, no one would dispute it. Of course a good person can rule absolutely and still produce affection, happiness, and order in the small world he governs.

But laws aren’t written for saints. Institutions must be designed to protect people from the worst kinds of human beings, not to assume the best. Marriage is not reserved for a carefully screened elite. No man must prove, before taking vows, that he is fit to be trusted with sweeping power over another adult. Some men are deeply shaped by social feeling and affection; others are barely touched by it. There is every grade of moral character in between, down to men who cannot be bound by any tie and whom society can restrain only through the blunt force of criminal penalties.

And every one of those men, across that whole descending scale, can legally become a husband.

The worst criminal can have some miserable woman bound to him—someone he can brutalize in almost any way short of killing her, and, if he is cautious, perhaps even do that with little risk of punishment. Beyond those notorious monsters, there are also the thousands among the poorest classes—men who may not count as criminals in other settings because others can fight back—who routinely beat their wives, the one person close to them who can neither effectively resist nor escape. For such men, the very dependence the wife is forced into does not inspire generosity or restraint. It feeds a mean, savage sense of ownership: the law has handed her over as their thing, to use as they please, and they imagine they owe her less consideration than they owe anyone else.

Only recently has the law made weak attempts to curb these “aggravated assaults.” But it cannot do much while it keeps the victim in the hands of the aggressor. As long as repeated violence does not automatically entitle the woman to divorce or at least to a judicial separation, punishment will keep failing for predictable reasons: there will be no prosecutor, or no witness willing or able to testify, because the woman must go on living under the power of the person she accuses.

Once you notice how many men in any large country are only a step above brute violence—and how marriage law still gives those men access to a victim—the scale of misery created by this single institution becomes hard to fathom. And these are only the worst cases: the bottom of the pit.

Above them lies a grim staircase of lesser miseries. As with political tyranny, the existence of absolute monsters matters because it reveals the full range of horrors the system permits whenever the ruler chooses. Monsters may be rare, as rare as angels or rarer. But the more common figure—the ferocious brute with occasional flashes of decency—is common enough. And between that and a genuinely worthy human being stretches a wide territory packed with smaller forms of selfishness and cruelty: people who wear a polished exterior, sometimes even the appearance of education, who live peaceably within the law and seem respectable to everyone not under their control—yet who still manage to make the daily lives of those dependent on them a constant torment and a heavy burden.

It gets old, honestly, to rehash all the familiar warnings about how unfit most people are for unchecked power. Political writers have been making that point for centuries, and everyone can recite the clichés. The problem is that almost nobody applies those warnings where they fit best: not to power handed to a few exceptional men, but to power casually offered to every adult male—right down to the cruelest, most reckless, and most brutish.

A man’s public reputation tells you surprisingly little about how he behaves when no one can push back.

  • Maybe he hasn’t obviously broken the Ten Commandments.
  • Maybe he’s polite and “respectable” with strangers, coworkers, and anyone who can walk away.
  • Maybe he keeps his temper in check around people who won’t tolerate a scene.

None of that lets you safely predict what he’ll be like at home, where the people around him often can’t leave, can’t refuse, and are expected to endure. Even perfectly ordinary men commonly save their most violent, sullen, and openly selfish traits for the people who have the least power to resist them.

Power over dependents is a training ground for vice. When you see someone who’s harsh with equals, you can almost always trace it back to a life spent among “inferiors”—people he could intimidate, nag, or wear down into submission. And while the family, at its best, really can be a school for sympathy and tenderness, it is at least as often—especially for the person cast as “head of the household”—a school for:

  • willfulness
  • bullying
  • limitless self-indulgence
  • a deep, polished selfishness

That selfishness can even dress itself up as “devotion.” A man may “sacrifice” for his wife and children while treating them as extensions of himself—his property, his interests, his reputation. Under that mindset, their separate happiness is routinely burned up to satisfy his smallest preferences.

Why should we expect anything better from the institution as it currently stands? We already know a basic fact about human nature: bad impulses stay contained mostly because they don’t have room to operate. Give people scope to indulge them, and those impulses grow. And another fact: from sheer habit—often without any conscious plan—people tend to push and take more, little by little, whenever others keep yielding, until the moment comes when the other side finally has to resist.

Now put those tendencies into the situation society creates inside many homes: one person is given near-unlimited power over another human being, someone who lives under the same roof and is always there. That kind of power doesn’t just allow selfishness; it hunts it down. It digs out the faintest sparks of it, fans them into flame, and hands the man a license to indulge traits he’d have to hide everywhere else—traits that, with regular restraint, might have faded into something like self-control.

To be fair, there is another side. A wife who can’t effectively resist can often still retaliate. She can make life miserable. And with that leverage, she can sometimes win disputes—some she should win, and some she shouldn’t.

But this form of self-defense—call it the “scold” power, the shrewish sanction—has a poisonous flaw: it works best against the least tyrannical men and in favor of the least deserving women. It’s easiest for the irritable and self-willed, the sort of person who would misuse power if she had it and who often does misuse this smaller, indirect kind. The gentle and principled can’t bring themselves to wield it; the high-minded refuse to. Meanwhile, the husbands most vulnerable to it are typically the mildest—the ones who won’t, even when provoked, answer unpleasantness with harsh authority. So the “power to be disagreeable” often just creates a counter-tyranny, and its main victims are the very men least inclined to be tyrants.

So what actually softens the corrupting effects of this built-in power—and makes it compatible with the real good we do sometimes see?

Not, in general, “feminine charms.” Those can matter in particular cases, but as a broad social force they’re weak and temporary. They last only while a woman is young and attractive—or even only while her novelty hasn’t worn off—and for many men they hardly matter at all.

The real mitigating forces are more practical and more human:

  • Affection that grows over time, when the man’s nature can feel it and the woman’s character genuinely fits with his.
  • Shared commitment to the children, which ties their interests together in a concrete way.
  • Shared interests beyond the family, though those are often limited.
  • The wife’s daily importance to his comfort, so that valuing her “for his sake” can—if he’s capable of it—become valuing her for her own.
  • The influence of proximity, the quiet power nearly everyone gains over those they live close to: through direct requests and through the subtle contagion of moods and dispositions. Unless some other strong influence counteracts it, this can give the “inferior” partner a surprising, even unreasonable, degree of control over the “superior.”

Through these channels, a wife often ends up exercising a lot of influence—sometimes too much, and not always for the better. She may steer her husband in areas where she isn’t equipped to judge, where her influence is not merely uninformed but morally misguided, and where he’d act better if left to his own sense of right.

And here is the bitter irony: under present arrangements, the men who treat their wives most kindly are often made worse—not better—by their wives’ influence in anything that reaches beyond the home. Why? Because she is trained to believe that public matters aren’t her business. As a result, she rarely forms a serious, conscientious view about them. When she does intervene, it’s seldom for principled reasons and usually for interested ones. She may not know (or care) which political side is right—but she knows exactly which side will bring money, social access, invitations, a title for her husband, a post for her son, or a “good match” for her daughter.

At this point someone will object: “Fine, but a society can’t exist without a government. And a family can’t function without a final authority. When spouses disagree, who decides? They can’t both get their way. A decision has to be made.”

But that claim is wrong. Even in voluntary associations between just two people, it’s not true that one must be an absolute master—let alone that the law must preselect which one it will be.

Look at the obvious comparison: a business partnership. We don’t assume (and we certainly don’t legislate) that in every partnership one partner must have total control and the others must obey. Nobody would agree to be responsible like an owner but treated like a clerk. If the law handled ordinary contracts the way it handles marriage, it would decree that one partner runs the joint business as if it were purely his own; the others would have only delegated powers; and the “boss” would be chosen by some arbitrary legal presumption—say, whoever is oldest. We would laugh at that. The law doesn’t do it. Experience doesn’t require it. Partnerships work with whatever terms the partners write into their agreement.

In fact, you could argue that granting one partner exclusive power is less dangerous in business than in marriage, because a business partner can always withdraw and dissolve the arrangement. A wife typically can’t—and even if she could, it’s usually better for her to try every other remedy before reaching for that last resort.

Now, it’s true that certain day-to-day decisions can’t be postponed, slowly adjusted, or endlessly compromised. Some things need a single executive will in the moment. But it does not follow that the same person must always be that decider.

A more natural arrangement is division of powers:

  • Each partner is fully in charge of executing within their own sphere.
  • Major shifts in system or principle require both to consent.

And crucially, that division can’t—and shouldn’t—be fixed in advance by law, because it depends on the particular capacities and fit of the two people involved. If a couple wanted, they could even set such terms in a marriage contract, the way financial arrangements are sometimes set ahead of time. Most couples wouldn’t struggle to reach mutual consent on practical matters—unless the marriage is already one of those miserable ones where everything becomes fuel for petty conflict. In ordinary cases, the split of rights tends to follow the split of responsibilities and functions, and that split is already shaped by consent (or at least by custom), not by statute—and it can be reshaped by the couple’s own choices.

In real life, too, who actually decides will depend heavily on who is better qualified—just as it does now. In many couples the man’s voice will carry more weight simply because he is often older, at least until both reach an age where a few years’ difference doesn’t matter. And whoever brings the income will usually have added influence as well. But that inequality comes not from marriage law itself; it comes from the broader conditions of society as it is currently organized.

Beyond that, mental superiority—general or in some specific area—and strength of character will inevitably shape outcomes. They already do. And that fact undermines the fear that two life-partners can’t sensibly apportion powers and responsibilities by agreement. They generally do apportion them—except in the cases where the marriage, as a partnership, has simply failed.

It rarely comes down to raw dominance on one side and obedience on the other unless the relationship itself was a mistake—and unless, frankly, it would be better for both people to be freed from it.

Some will insist that what makes amicable compromise possible is the threat of legal force in the background—like people accepting arbitration because they know a court exists to compel compliance. But that analogy only works if the court is impartial. Imagine a “court” that doesn’t weigh evidence but automatically rules for the same party every time—say, always for the defendant. In that world, the plaintiff would rush into arbitration to avoid certain defeat, while the defendant would have no reason to compromise at all.

That’s the marital parallel: the law’s despotic power may pressure a wife to accept any compromise that gives her a share of practical power, but it can’t be what persuades the husband to share. The very fact that decent couples commonly reach workable compromises—even when one party is under no physical or moral necessity to do so—shows that, in most cases, the natural motives that support a jointly acceptable life together usually win out.

What the law certainly does not improve is the situation where we build “free government” on top of a domestic legal foundation of despotism for one and subjection for the other—where every concession the despot makes can be withdrawn at any moment, without warning. Freedom held on that kind of precarious lease isn’t worth much, and the arrangement is unlikely to be fair when the law dumps such an enormous weight onto one side of the scale—declaring one person entitled to everything, the other entitled to nothing except at the first person’s pleasure, and then piling on the strongest moral and religious pressure not to resist even extreme oppression.

A stubborn opponent, backed into a corner, might try this line: “Sure, husbands will be reasonable without being forced, but wives won’t. Give women rights and they’ll recognize no one else’s; they’ll never yield unless the man has the authority to compel them to yield in everything.”

People did say things like that a few generations ago, when mocking women was fashionable and men congratulated themselves on the wit of insulting women for being what men had trained them to be. But today, no one worth answering seriously holds that view. Modern doctrine does not claim that women are less capable than men of good feeling and consideration toward those they’re bound to by the strongest ties. If anything, we constantly hear the opposite: that women are better than men—often from the very people who refuse to treat them as if they were equally good. The compliment becomes a dull form of hypocrisy: a sugary coating on an injury, like the “royal mercy” that, in Gulliver’s Travels, was praised most lavishly just before the king of Lilliput’s cruelest sentences.

If women are “better” than men in anything, it’s often in the willingness to make personal sacrifices for their families. Still, that fact can’t be leaned on too heavily while women are systematically taught that they exist for self-sacrifice. The healthier expectation is this: with equal rights, the exaggerated self-erasure now held up as the feminine ideal would likely ease. A good woman wouldn’t be more self-sacrificing than the best man.

And men? Men would become far more unselfish than they are now, because they would no longer be raised to idolize their own will—taught, implicitly and explicitly, that their preferences are so inherently grand that they amount to law for another rational human being.

This kind of self-worship is ridiculously easy to learn. Every privileged person and every privileged class has shown it. And it grows more intense the further down you go on the social ladder, reaching its peak among those who are not, and never expect to be, above anyone except a powerless wife and children. The honorable exceptions are fewer than for almost any other human failing. Philosophy and religion, instead of restraining it, are too often recruited to defend it. Almost nothing checks it except a lived, practical sense of the equal humanity of all people—a principle Christianity teaches in theory, but that it will never teach in practice so long as it blesses institutions founded on arbitrary preference of one human being over another.

Of course, there are women—just as there are men—whom equal respect will never satisfy: people who can’t tolerate peace unless their own will is the only will that counts. For those cases, divorce law exists for a reason. Such people are fit only to live alone; no one should be forced to tie their life to them.

But here is the twist: legal subordination tends to create more of these characters among women, not fewer. If the man uses his full legal power, the woman is crushed. But if he treats her indulgently and lets her take on influence, there’s no clear rule—no recognized boundary—to stop her encroachments.

The law doesn’t spell out a wife’s rights. In theory, it gives her none. In practice, it tells her this: whatever you can manage to take for yourself is what you “have a right” to.

Why equality in marriage matters beyond marriage

Legal equality between spouses isn’t just one nice option among many. It’s the one arrangement that fits basic justice and makes marriage a source of happiness for both people. More than that, it’s the only arrangement that can turn everyday life into something morally educational in the deepest sense.

Here’s the uncomfortable truth: the best “school” for real moral character is living among equals. You can’t fully learn fairness, self-restraint, and mutual respect in a world built on domination. Even if it takes generations for society to recognize it, genuine moral sentiment grows out of equal relationships, not out of fear or hierarchy.

So far, most moral education has come from what you might call the law of force—the ethics that make sense in relationships created by power. In less developed societies, people barely recognize “equal” relationships at all. An equal is treated like a rival. The whole social structure looks like a ladder: everyone is either above or below the next person, and if you don’t command, you obey. No surprise, then, that traditional morality has mostly been designed for command-and-obedience relationships.

But command and obedience are not the ideal. They’re an unfortunate necessity. Equality is the normal condition of human society. And in modern life—more and more as it improves—relationships of command become the exception, while cooperation among equals becomes the rule.

You can almost trace moral history in stages:

  • Early societies emphasized the duty to submit to power.
  • Later societies emphasized the right of the weak to the protection of the strong (chivalry, generosity, paternalism).
  • What we need now is the morality of justice.

How long should a modern society keep using moral rules designed for a different kind of world? We’ve had the morality of submission, and the morality of gallantry. It’s time for the morality of fairness.

Whenever past societies have even partly approached equality, justice has pushed its way to the center as the foundation of virtue. That happened in the free republics of the ancient world—but with a huge asterisk: “equals” meant only free male citizens. Slaves, women, and residents without political rights still lived under force.

Then Roman civilization and Christianity, taken together, weakened those old lines—at least in theory—by putting the claims of the human being above sex, class, or social rank. Those barriers started to fall. Later, northern conquests built them back up again. Much of modern history is the slow grinding process of those barriers wearing down once more.

We’re moving into a social order where justice will again be the primary virtue—still rooted in equality, but now also rooted in sympathy. Not the old “I must protect myself from my equals” instinct, but a learned, cultivated ability to feel with one another. And this time, no one is supposed to be excluded; the same measure is meant to extend to everyone.

Society changes faster than our feelings

Humans rarely see their own future clearly. Our institutions train us for yesterday long after tomorrow has already started to arrive. To understand where society is going has always been a privilege of a small intellectual minority (or of those who learn from them). To feel it—to have the emotions and moral instincts suited to the coming world—has been rarer still, and often punished.

Schools, books, social expectations, even family life keep shaping people for the old order, not just after the new one has arrived, but even while it’s still on the horizon.

Yet the real virtue human beings need is simple to state, even if it’s hard to practice: the ability to live together as equals. That means:

  • Claiming nothing for yourself that you wouldn’t freely concede to others.
  • Treating command as an exception—sometimes necessary, always temporary.
  • Preferring relationships where leadership can shift back and forth, where leading and following are reciprocal.

And here’s the problem: in the world as it’s currently arranged, we get very little practice at those virtues.

The family as a moral training ground—currently, a bad one

The family, as it’s typically organized, is a training ground for despotism. It cultivates the virtues that help people rule, but it also cultivates the vices that come with ruling.

Citizenship in a free country can teach some habits of equality. But citizenship is a small slice of modern life; it doesn’t reach into daily routines or the private feelings that shape character.

A properly organized family, by contrast, could be the real school for the virtues of freedom. It will always teach other things too: children must learn obedience; parents must sometimes command. That part won’t disappear.

What must change is what happens between the parents. The family needs to become a school of sympathetic equality—living together in love, with neither power on one side nor enforced obedience on the other. If the parents lived as equals, they’d be practicing the same virtues required in every other healthy relationship. And children would see, up close, a living model of the conduct their temporary training in obedience is meant to prepare them for—so that, as they grow, fairness and mutual respect feel natural.

Human moral training will never truly match the kind of society we’re building until the family itself follows the moral rule suited to normal human society: equality.

A man who treats his closest relationships as a realm where he is absolute master cannot have a genuine love of freedom. What he has isn’t Christian liberty or principled devotion to universal freedom. It’s the older kind of “love of freedom” common in antiquity and the Middle Ages: an intense sense of his own dignity and importance. He refuses the yoke for himself—not because he hates domination in principle, but because he hates being dominated. He remains perfectly willing to put the yoke on others when it serves his interest or flatters his pride.

“But my marriage is equal”—and why the law still matters

It’s true—and it’s the strongest reason for hope—that many married couples, even under unjust laws, live in the spirit of equality. In the higher classes of England, probably most do. Laws would never improve if there weren’t plenty of people whose moral instincts are better than the rules on the books.

Those people should support equality in law, because the goal is not to change them. It’s to make other marriages possible on the same humane terms.

But there’s a common mistake: decent people often assume that practices whose harms they haven’t personally felt must not cause harm—especially if those practices are widely accepted. They may even imagine they do good, and that objecting to them is unnecessary or disruptive.

A couple may go months or years without thinking about the legal terms of their marriage. They may feel and live as equals. But it’s a serious error to assume that this is what happens in every household as long as the husband isn’t a famously violent brute. Believing that shows ignorance of both human nature and plain fact.

The rule is grim but predictable: the less a man is fit to hold power, the more he clings to it when the law hands it to him. If he couldn’t command anyone by genuine respect or voluntary cooperation, he’ll often take special pride in the authority he possesses by legal right. He pushes those rights to the furthest edge custom will allow—custom shaped by men like himself—and he enjoys exercising power simply because it feels good to have it.

Worse still, among the roughest and least morally educated portion of the lower classes, the law’s treatment of the wife as a kind of slave—and the sheer physical fact of her being subject to his will as an instrument—can breed a particular kind of contempt. A man may feel a disrespect for his own wife that he doesn’t feel toward other women, or even other people he meets. The legal relationship itself makes her seem to him a fitting target for humiliation.

Anyone with sharp eyes and real access to observe these households can judge whether this is true. If it is, no one should be surprised by the disgust and anger people feel toward institutions that so naturally produce this corruption of mind and character.

The religious defense—and why it doesn’t hold

People will say, as they often do when a practice is too ugly to defend on honest grounds, that religion commands obedience. Yes, church formularies enjoin it. But it’s hard to wring any such command from Christianity itself.

St. Paul wrote, “Wives, obey your husbands.” He also wrote, “Slaves, obey your masters.” That tells you something important: it wasn’t his role—nor would it have helped his central mission of spreading Christianity—to urge people to revolt against existing laws. His acceptance of the social institutions he found around him is no more a condemnation of later efforts to improve them than his statement that “the powers that be are ordained of God” is a blessing on military despotism as the one Christian form of government, or a command that people must submit passively to tyranny.

To claim that Christianity was meant to freeze governments and social structures forever—to protect them against change—is to turn it into the moral equivalent of systems that do aim to lock society in place. The reason Christianity became the religion of the more progressive parts of humanity is precisely that it did not rigidly sanctify every existing arrangement. Other systems became associated with societies that stagnated or declined—not because any society truly stands still, but because some stop advancing and begin slipping backward.

There have always been people who tried to remake Christianity into a religion that forbids improvement: a kind of Christian “Mussulman” ideal, with the Bible treated like a Koran that blocks reform. They have often had immense influence, and many people have died resisting them. But they have been resisted—and that resistance has shaped what we are, and will shape what we still may become.

Property: the simplest case of all

After everything said about forced obedience, it’s almost redundant to argue the narrower point of a woman’s right to her own property. Anyone who needs persuasion that a woman’s inheritance or earnings should remain hers after marriage just as they were before is unlikely to be moved by this essay.

The rule is straightforward:

  • Whatever would belong to the husband or wife if they were unmarried should remain under that person’s exclusive control during marriage.

This doesn’t prevent parents from arranging settlements to preserve property for children.

Some people recoil from “separate” financial interests, claiming it clashes with the romantic idea that two lives fuse into one. But there’s a difference between a true community of goods—flowing naturally from complete unity of feeling—and a community based on a one-sided principle: what’s mine is yours, but what’s yours is not mine. I support shared property when it grows out of real mutuality. I want nothing to do with a partnership built on asymmetry, even if I were the one who benefited from it.

Reform is already happening—and why earnings complicate things

This kind of injustice is also, to most people, the easiest to see. And because it can be fixed without entangling every other problem at once, it will likely be among the first repaired.

In many of the newer American states, and even in several of the older ones, written constitutions have already included provisions that secure women equal rights in property. That materially improves the position of married women who have property: it leaves them at least one source of independence they haven’t been forced to sign away. It also blocks a particularly scandalous abuse of marriage—when a man lures a woman into marrying without any settlement solely to get control of her money.

Now consider the more common case: the family’s support depends not on property but on earnings. In that situation, the usual arrangement—he earns the income; she manages the household spending—often makes sense as a division of labor.

If, on top of pregnancy and childbirth, and on top of the full responsibility for early childcare, the wife also takes on the careful, economical management of the husband’s earnings for the family’s comfort, she is not taking “less” than her share of work. She’s usually taking more—both physically and mentally. And when she adds outside labor on top of that, it rarely frees her from domestic burdens; it typically just means she can’t do them properly.

What she can’t do—because she’s stretched too thin—often doesn’t get done by anyone else. The children who survive grow up however they can. The household management becomes so poor that, even in pure financial terms, it can wipe out much of what her earnings bring in.

So in a genuinely just society, it isn’t generally a desirable custom for the wife to contribute wages to the household income during marriage. In an unjust society, her earning may protect her in one narrow sense: it can make her more “valuable” to the man who is, in law, her master. But it can also enable the worst abuses—he can compel her to work, dump the burden of supporting the family onto her, and spend his own days in idleness and drinking.

Still, the power to earn matters. If a woman has no independent property, the ability to support herself is essential to her dignity.

And here is the larger point: if marriage were an equal contract—without an imposed duty of obedience; if the relationship were no longer enforced when it becomes purely harmful to one party; if a woman who was morally entitled to it could obtain a separation on just terms (not necessarily a divorce); and if honorable work were as open to her as it is to men—then she wouldn’t need, for sheer self-protection, to insist on paid labor during marriage.

In that kind of society, marriage would typically imply a choice of vocation similar to a man choosing a profession. For many women, it would mean choosing household management and childrearing as the primary call on their efforts for as many years as those duties require. That wouldn’t mean giving up every other purpose in life, only giving up pursuits incompatible with that central responsibility.

By this principle, most married women would not regularly take on occupations that must be done outside the home, or cannot be carried on at home. But the rule must not be a cage. There should be wide room to adapt general expectations to individual abilities. If someone has talents exceptionally suited to another calling, marriage shouldn’t silence that vocation—provided that practical arrangements are made so the household and children do not suffer from any unavoidable gap in her ordinary duties.

If public opinion were properly guided, these matters could safely be left to social judgment and personal conscience, without legal interference.

III

On the other side of women’s equal status—their right to enter all the jobs and public roles that men have long treated as their private property—I don’t expect much trouble persuading anyone who’s already accepted the case for equality inside the family.

My strong suspicion is this: many men cling to women’s restrictions outside the home mainly to keep women subordinate inside it. A lot of men still can’t stand the idea of sharing daily life with an equal. If that weren’t the real motive, then in today’s political and economic climate most people would openly admit how indefensible it is to bar half of humanity from most well-paid work and from nearly all positions of public influence.

Think about what the exclusion claims, in practice. It says that women are born either:

  • incapable, in principle, of filling positions that the law allows even the most foolish or corrupt men to hold, or
  • forbidden, even when capable, because those jobs must be “reserved” for men.

That is not a neutral policy; it’s a monopoly.

Interestingly, if you look back a century or two, people didn’t usually try to justify women’s exclusion by saying women were intellectually inferior. In eras when there was at least some genuine testing of individual ability in public life (and not every woman was completely shut out), hardly anyone truly believed women lacked mental capacity. The blunt older argument was different: it was “the good of society”—which usually meant “the convenience of men.” It’s the same logic behind raison d’état: the idea that whatever protects existing power counts as a sufficient excuse, even when it covers serious wrongdoing.

Modern power talks more politely. Today, when authority blocks someone’s path, it almost always claims it’s doing it “for their own good.” So when something is barred to women, people feel the need to say—and prefer to believe—that women can’t do it, and that they’re stepping off the “real” road to happiness when they even try.

But notice what that claim requires. To make it sound believable (not valid, just believable), you can’t stop at the soft version: that women are on average less gifted in certain high-level abilities, or that fewer women than men will reach the very top in some intellectually demanding roles. That won’t justify legal exclusion. To justify closing the door, you’d have to insist on something far stronger:

  • that no woman at all is fit for such work, and
  • that even the most outstanding women are mentally below the most mediocre men who currently hold those positions.

Because if access to these roles is decided by open competition—or any selection method that genuinely protects the public interest—then there’s no reason to fear that important jobs will end up in the hands of women who are worse than the average man, or worse than the average of their male competitors. At most, you’d predict that fewer women than men would end up in those posts. And that would happen anyway, if only because many women will naturally prefer the one vocation where there is no rival competitor: running their own household and raising a family.

Now, even the harshest critic of women rarely dares to deny what experience—ancient and modern—shows plainly: many women have demonstrated that they can do what men do, and do it well. The strongest argument the skeptic can make is narrower: that in many fields women haven’t matched the absolute peak reached by the very greatest men. Fine. But in almost every pursuit that depends mainly on mental ability, women have at least reached the level just below the very top.

Isn’t that enough, and far more than enough, to make exclusion both a tyranny against women and a loss to society? And isn’t it almost a cliché to point out that these functions are often filled by men who are less qualified than many women—men who would lose in any fair competition?

Someone might object: “But maybe there are men elsewhere who would be even better than those women.” Sure. That happens in every competition. The existence of potentially better candidates who happen not to apply doesn’t invalidate the right of qualified people to compete. Besides, do we really have such an excess supply of men fit for high responsibility that we can afford to reject any competent person? Are we so confident that whenever an important post opens, the perfect man will always be waiting, ready-made? If not, then banning half the species is not only unjust—it’s irrational.

And even if society could muddle through without women’s abilities, would that make it just to deny them their share of honor and distinction? Would it be just to deny them the equal moral right every human being has—to choose their occupation (so long as it doesn’t harm others), according to their own preferences, and at their own risk?

This injustice doesn’t only injure women. It injures everyone who might benefit from their work. When law declares that certain people may not be physicians, or lawyers, or members of parliament, it harms not just the excluded group, but:

  • patients who must choose from a smaller pool of doctors,
  • clients who must choose from a smaller pool of advocates,
  • citizens who must elect representatives from a narrower set of candidates,
  • and everyone who loses the sharper effort and higher standards that broader competition tends to produce.

For the rest of my argument, it will probably be enough to focus on public functions. If women have a right to those, it will be hard to deny their right to any other occupation where it matters whether they’re admitted. And I’ll start with one public right that doesn’t depend on any argument about capacity at all: the vote, both in national and local elections.

Voting—the right to help choose who holds public trust—is a completely different thing from competing to hold that trust yourself. If only people who were fit to be candidates were allowed to vote, government would shrink into a tiny oligarchy. Having a voice in choosing who governs you is a basic tool of self-protection. Everyone deserves it, even if they never personally hold office.

In fact, society already assumes women can make a choice of the highest practical importance in their own lives. The law treats a woman as competent to choose the man who, in many legal and social senses, will “govern” her for life. If a woman is presumed capable of that decision, it’s absurd to claim she’s incapable of choosing a member of parliament.

Constitutional law may surround voting with whatever safeguards and limits it thinks necessary. But whatever rules are considered sufficient for men are sufficient for women. Under whatever conditions men are admitted to the suffrage, there is no justification—none—for refusing it to women under the same conditions.

In most social classes, the majority of women probably won’t diverge much in political opinion from the majority of men of the same class—except on questions where women’s interests as women are directly at stake. And when they are at stake, that is exactly when women most need the vote: as their guarantee of fair and equal consideration. This should be obvious even to people who reject everything else I’m arguing. Even if every woman were a wife, and even if every wife “ought” to be a slave, that would only make the case stronger: slaves need legal protection most of all. And we know how much legal protection slaves get when the laws are written by their masters.

Now consider the next question: are women fit not only to vote, but also to hold offices or practice professions that carry real public responsibility? I’ve already pointed out that this isn’t the key practical issue. In an open profession, any woman who succeeds proves by success itself that she was qualified. And for public offices, if a country’s political system filters out unfit men, it will filter out unfit women too. If it doesn’t filter out unfit men, then it can’t claim there’s some special additional danger in admitting unfit women—the harm comes from the broken selection system, not from women.

So as long as it’s conceded that even a few women could be fit for these duties, a blanket legal prohibition can’t be defended by any general theory about “women’s nature.” Still, even if that point isn’t strictly necessary, it matters. A fair look at the evidence strengthens the case against exclusion and adds a hardheaded argument from public usefulness.

So let’s set aside, for the moment, all psychological debate about whether supposed mental differences between men and women are mainly the product of education and circumstance rather than nature. Let’s take women simply as they are known to have been, and judge them by what they have demonstrably done. Whatever women have done, that much is proven possible for them.

And remember: women have been systematically trained away from the occupations men reserve for themselves, not trained toward them. So I’m actually arguing from modest ground when I rest their case only on what they’ve already achieved. In this context, negative evidence—“no woman has done X”—proves very little, because barriers and discouragement can prevent the attempt. But positive evidence—“a woman has done Y”—is decisive.

For example, you can’t confidently conclude that it’s impossible for a woman to be a Homer, an Aristotle, a Michelangelo, or a Beethoven simply because no woman has yet produced works equal to theirs. At most, that absence leaves the question open for deeper discussion. But it is absolutely certain that a woman can be a Queen Elizabeth, a Deborah, or a Joan of Arc, because that isn’t speculation—it’s history.

And here’s the twist: the only things the law reliably keeps women from doing are precisely the things women have already proved they can do. There is no law preventing a woman from writing all of Shakespeare’s plays or composing all of Mozart’s operas. But Queen Elizabeth or Queen Victoria—if they hadn’t inherited a throne—could not legally have been entrusted with even the smallest political duty, even though Elizabeth showed herself capable of the greatest.

If you tried to draw a conclusion from experience alone, without psychology, you might almost be pushed toward an ironic one: women are barred from the very roles they seem especially good at. Their talent for government has forced itself into view through the few openings history has allowed, while in the fields that looked, at least on paper, more “open” to them, women have not stood out nearly so strikingly.

Look at history’s tally: there have been far fewer reigning queens than kings, but among that smaller number, a strikingly large proportion have shown real talent for rule—often while governing in difficult times. What’s more, many of these queens were notable for qualities that cut directly against the stereotype of “feminine” character: they were remembered for firmness and vigor as much as for intelligence. And if you add not only queens and empresses, but also regents and provincial governors, the list becomes long indeed.

This is so obvious that someone once tried to flip the argument into a sneer: “Queens are better than kings, because under kings women govern, but under queens men do.”

It’s a bad joke, but bad jokes still influence people. I’ve heard men repeat that line as if it contained a serious insight, so it’s worth answering.

First: it’s not true that “under kings women govern.” Those cases are exceptional. Weak kings have just as often ruled badly under the influence of male favorites as of female ones. And when a king is led mainly by romantic obsession, you shouldn’t expect good government—though even there, exceptions exist.

French history, for example, includes two kings who deliberately entrusted the direction of affairs for many years to close female relatives: one to his mother, one to his sister. In one case—Charles VIII—he was a boy, and he was following the plan of his father, Louis XI, one of the most capable monarchs of his age. In the other case—Saint Louis—he was among the best and most energetic rulers since Charlemagne. In both cases, these women governed in a way that few male contemporaries could match.

Or take the emperor Charles V: one of the shrewdest rulers of his time, served by an abundance of able men, and not the type to sacrifice his interests to personal whim. He appointed two princesses of his family, one after the other, as Governors of the Netherlands, keeping one or the other in that post throughout his life (and they were later followed by a third). Both governed successfully. One—Margaret of Austria—ranked among the era’s most capable statesmen.

So much for the claim that women only govern by manipulating kings.

Now for the other half: “under queens, men govern.” What is meant by that? Do people mean queens run the state through the men who share their personal pleasures? That’s rare even among rulers as morally casual as Catherine II, and in any case those are not the examples where the supposed “benefit” of male influence shows up.

If the point is instead that, under a queen, the administration tends to be in the hands of better men than under an average king, then the implication is the opposite of the insult: it would mean queens have a superior ability to choose capable ministers. And that would suggest women are not only suited to be sovereigns, but especially suited to be chief ministers—since a prime minister’s central job is not personally doing everything, but finding the best people to run each department.

One advantage commonly credited to women is quicker insight into character. If that is even roughly true, then with anything like equal preparation in other respects, women would be better than men at selecting those “instruments” of government—a task that is nearly the most important duty of anyone responsible for ruling people. Even someone as unprincipled as Catherine de’ Medici could recognize the value of a Chancellor de l’Hôpital.

But the deeper truth is this: most great queens were great largely through their own governing talent, and they were well served precisely because of that. They kept the ultimate direction of affairs in their own hands. And when they listened to wise advisers, that wasn’t proof of weakness—it was evidence of judgment suited to the hardest political questions.

So ask yourself: is it reasonable to believe that someone fit for the highest functions of politics is somehow incapable of learning the lesser ones? Is there any “natural” reason why the wives and sisters of princes—when circumstances call upon them—should repeatedly prove as competent as princes themselves, yet the wives and sisters of statesmen, administrators, business directors, and public managers should be unable to do what their brothers and husbands do every day?

The real reason is simpler than any theory about ability. Princesses sit far above most men by rank, even if society wants to place them below men by sex. They have never been taught that it is “improper” for them to take an interest in politics. Instead, they have been allowed to feel what any educated person naturally feels: curiosity and concern about the major events happening around them—and they have been permitted, sometimes required, to take part in them.

The women who’ve been allowed anything like the same freedom to grow as men are mostly the women born into ruling families. And notice what happens in that one “natural experiment”: you don’t find some mysterious built-in inferiority. In fact, wherever women have actually been tested in governing—and to the extent they’ve been tested—they’ve shown they can do the job.

That lines up with what the world’s still-limited experience seems to suggest about women’s typical strengths as women have been shaped up to now. But that last phrase matters. I’m not claiming these traits are fixed forever. It’s pure guesswork for anyone to declare what women “naturally” are or aren’t, can or can’t be. For centuries, women’s lives have been so artificially constrained that whatever we call “women’s nature” has been bent, disguised, and distorted. If women were allowed to choose their direction as freely as men—and if society tried to shape both sexes only as much as living together requires, and in the same way for both—there might be no meaningful difference at all in the abilities and character that would emerge. I’ll argue shortly that even the differences people fight over least may well come from circumstance rather than biology.

Still, if we look at women as experience has actually presented them, one generalization is truer than most: their talents tend to lean practical. Public history says so, and everyday life says so.

What people call “female intuition” is a good example. “Intuition” doesn’t mean pulling grand principles out of thin air. It means seeing what’s in front of you—quickly and accurately. No one discovers a law of nature by intuition, and no one derives a serious rule of duty or prudence that way. Those come from slow work: collecting experience, comparing cases, testing and revising. And “intuitive” people—men or women—usually aren’t the stars in that department, unless the necessary experience is something they can gather firsthand.

But that same quick, reality-based insight makes intuitive people especially good at extracting general truths from what they themselves observe. So when women happen to have the benefit that men are routinely given—access to other people’s accumulated experience through reading and education—they often have unusually strong equipment for effective action. I say “happen” on purpose: when it comes to the kind of knowledge that prepares someone for the larger business of life, educated women are mostly self-educated.

There’s another contrast worth naming. Men who have been heavily schooled often lose their grip on the present fact. They don’t see what’s actually there; they see what their training taught them to expect. Women with real ability are less prone to that. Their so-called intuition protects them. With equal experience and equal general ability, a woman will usually notice far more of what’s right in front of her.

And that sharp sensitivity to “the present case” is the core of practical talent. Speculative ability discovers broad principles. Practical ability recognizes which particular situations those principles fit—and which ones they don’t—and adjusts accordingly. For that kind of discrimination, women (as they’ve been formed so far) often have a special knack.

Of course, good practice needs principles. And because quick observation is such a strong feature in many women’s minds, it can tempt them into building sweeping conclusions too quickly from their own limited experience. But they’re also quick to correct those conclusions as their experience widens. The real cure is simple: access to the experience of the whole human race—general knowledge—which is exactly what education is best at providing.

That’s why a woman’s typical mistakes look like those of a clever, self-taught man: he often spots what routine-trained people miss, but he also blunders because he doesn’t know things that have been settled for a long time. He’s learned some of that accumulated knowledge—he couldn’t function otherwise—but he’s picked it up in scraps and by chance, the way women usually must.

Now, this pull toward the immediate and the real can cause errors when it becomes exclusive. But it’s also a powerful antidote to the opposite error—the one that speculative minds are especially prone to. The classic failure of pure theorists is precisely a lack of vivid, ever-present awareness of objective fact. Without that, they not only miss the ways the world contradicts their theories; they can lose sight of why speculation exists at all. Their thinking drifts into a realm populated not by real things—not even idealized versions of real things—but by personified shadows: metaphysical ghosts, or verbal tangles that look profound only because words have tricked the mind. And then they treat these shadows as the true subject matter of “the highest” philosophy.

For a man who spends his time not gathering knowledge by observation but organizing it into broad scientific truths and rules of conduct, few things are more valuable than doing that work alongside—and under the critique of—a genuinely superior woman. Nothing does more to keep his thoughts tethered to the real world and the actual facts of nature. A woman is less likely to go chasing abstractions for their own sake.

Two habits help explain why:

  • Many women are used to dealing with things as individuals, not as anonymous categories.
  • They tend to care intensely about how people will actually feel and be affected by anything meant to be put into practice.

Together, those tendencies make it hard to believe in any grand theory that forgets individuals and treats the world as if it exists for some imaginary entity—some mental invention that can’t be cashed out into the experiences of living people. In that way, women’s thinking gives reality to men’s; men’s thinking gives range and breadth to women’s. And when it comes to depth (as distinct from breadth), I strongly doubt that women—even now—are at any disadvantage.

If these qualities are valuable even for correcting speculation, they matter still more when it’s time to turn speculation into action. For the reasons already given, women are less likely to make a common male mistake: clinging stubbornly to a rule in a case whose special features either put it outside the rule’s proper class or demand that the rule be adapted.

Consider another often-admitted strength of clever women: quickness of understanding. That’s almost tailor-made for practical life. In action, everything constantly depends on deciding promptly. In pure theory, almost nothing does. A thinker can wait, gather more evidence, and revise. He isn’t forced to finish his philosophy before the moment passes.

The ability to draw the best available conclusion from incomplete data does have uses in philosophy—forming a provisional hypothesis that fits the known facts can be a necessary step toward deeper inquiry. But it’s more of a helpful tool than the main qualification. And whether for the main work or the supporting work, the philosopher can take as much time as he wants. What he needs most isn’t speed; it’s patience—staying with slow labor until dim hints become clear and a guess ripens into a theorem.

For people whose work deals with the fleeting and perishable—with particular cases, not classes of cases—speed of thought is second only to thinking power itself. If you can’t bring your faculties instantly to bear when circumstances demand it, you might as well not have them. You may be fit to critique, but you’re not fit to act.

And here, women—and the men most like them—are widely acknowledged to excel. The other kind of man, however brilliant, often reaches full command of his powers only slowly. Even in what he knows best, quick judgment and prompt, wise action come late—after hard effort has slowly been trained into habit.

At this point, someone will say: “But women are more nervously sensitive. Doesn’t that make them unfit for serious work outside the home—too changeable, too driven by the moment, unable to persevere, unreliable in the steady use of their abilities?” Those phrases contain most of the standard objections to women’s fitness for the higher kinds of business.

A lot of what people call “nervousness” is simply energy with nowhere to go. Give it a clear purpose and much of it stops spilling over. A lot of it is also learned behavior, encouraged consciously or unconsciously—as you can see from how almost completely “hysterics” and fainting fits vanished once they went out of fashion.

And consider how many upper-class women have historically been raised: like greenhouse plants, protected from ordinary swings of weather and temperature, not trained in the work and physical exercise that strengthen circulation and muscles, while their nervous systems—especially the emotional side—are kept in a state of constant stimulation. Under those conditions, it’s no surprise that many who don’t die of tuberculosis grow up with bodies easily thrown off balance by minor causes and with little stamina for any sustained task, mental or physical.

But women raised to work for their living don’t show these “morbid” traits—unless they’re chained to excessive sedentary labor in cramped, unhealthy rooms. And women who, as girls, share their brothers’ healthy freedom of movement and physical training, and who later get enough fresh air and exercise, rarely have the kind of extreme nervous susceptibility that would disqualify them from active pursuits.

None of this denies that there is a certain proportion of people—of both sexes—whose nervous sensibility is unusually strong by constitution, so much so that it shapes their whole character and bodily life. That kind of temperament is hereditary. Sons inherit it as well as daughters. It may well be that more women than men inherit what people call a “nervous temperament.” Fine—let’s grant it. Then ask the obvious question: are men of nervous temperament considered unfit for the duties and pursuits men typically follow? If not, why should women with the same temperament be deemed unfit?

Like any bodily constitution, this temperament can hinder success in some occupations and help it in others. But when the work suits it—and sometimes even when it doesn’t—men with high nervous sensibility constantly provide the most dazzling examples of success.

What distinguishes them in practical life is this: because they’re capable of a higher pitch of excitement than calmer constitutions, their powers under excitement rise more dramatically above their everyday level. They can, in those moments, do with ease what they can’t do at all at other times. And that elevated state isn’t, except in weak bodies, a mere flash that dies away instantly. The nervous temperament is often capable of sustained excitement—of holding up through long, continuous effort. That’s what people mean by spirit. It’s what lets a high-bred racehorse run at full speed until it collapses. It’s what has enabled many physically delicate women to maintain astonishing constancy—not only in sudden martyrdom, but through long sequences of mental and bodily torment.

People with this temperament are especially suited to what you might call the executive side of human leadership: they make great orators, great preachers, powerful carriers of moral influence. You might think their constitution less favorable for the calm, measured qualities expected of a cabinet statesman or a judge. It would be—if excitability meant they must always be excited. But that’s largely a question of training.

Strong feeling can be the raw material of strong self-control, but only if it’s cultivated in that direction. When it is, it produces not only heroes of impulse, but heroes of self-mastery. History and experience show that the most passionate characters can become the most rigid—almost fanatically rigid—in their sense of duty, when their passion has been disciplined to serve it.

Think of a judge whose feelings strongly pull one way, yet who delivers a just decision the other way. The same intensity of feeling that tempts him also supplies the fierce sense of obligation that enables him to conquer himself.

And the ability to rise into those moments of lofty enthusiasm doesn’t leave daily character untouched. When someone has known what they can aspire to and accomplish in an exceptional state, that standard becomes the measure they apply to their ordinary thoughts and actions. Their everyday purposes begin to take on a shape molded by those higher moments—even though, given the body’s nature, such peaks can only be temporary.

Looking at whole peoples, not just individuals, experience doesn’t show that excitable temperaments are, on average, less fit for either theory or practice than calmer ones. The French and Italians are plainly more nervously excitable than the Germanic peoples, and compared with the English they live with far more daily emotion. But have they been less great in science? Less capable in public administration, law, the judiciary, or war?

The ancient Greeks were famously one of the most excitable peoples—and it would be pointless to ask what human achievement they didn’t excel in. The Romans, as another southern people, likely began with a similar temperament; yet their severe national discipline, like that of Sparta, made them a model of the opposite type—calm, stern, controlled—while the strength of their original feelings showed itself mainly in the intensity they could pour into a disciplined system.

If those examples show what a naturally excitable people can become under strong shaping institutions, the Irish Celts are among the clearest examples of what such a temperament looks like when left more to itself—if any people can be said to be “left to themselves” after centuries of bad government, and the direct training of a Catholic hierarchy alongside a sincere Catholic faith. The Irish case should therefore be treated as an unfavorable one; yet even so, whenever an individual’s circumstances have been at all favorable, what people have shown greater capacity for the widest range of distinct and varied personal excellence?

Like the French compared with the English, the Irish with the Swiss, or Greeks and Italians compared with Germans, women—compared with men—often show the same broad range of abilities, just with a different “style” of excellence on average. And if girls’ and women’s education were designed to correct the weaknesses their temperament can bring, instead of intensifying them, there’s no good reason to doubt they’d do the same work just as well overall.

Focus vs. Flexibility

Let’s grant, for the sake of argument, a common claim: that women’s minds are naturally more mobile than men’s—less inclined to stay locked on a single long effort, better at splitting attention across several things at once than at driving one narrow track to its absolute peak. That might describe women as they’re typically shaped by society today (with plenty of exceptions), and it could help explain why, at the very top end, men have more often dominated fields that seem to reward total absorption in one pursuit.

But even if that difference were real, it would change the type of excellence, not the value of it. And it still doesn’t answer a deeper question: is that “single-subject tunnel vision” actually the healthiest, most normal condition for a human mind—even for purely intellectual work?

I don’t think so. What you gain in specialized development, you often lose in broader capacity for life. Even in abstract thinking, the mind usually gets more done by circling back to a hard problem repeatedly than by grinding at it without a break. And for anything practical—from the highest professions to everyday tasks—the more valuable power is often this:

  • the ability to switch quickly from one topic to another
  • the ability to keep your mental energy from slumping between tasks

That’s exactly the ability critics complain about when they call women “too changeable.” In practice, it’s a strength.

Women may have some of this flexibility by nature, but they certainly acquire it through training—because so much of what women are expected to do is the constant management of small, numerous details. Each detail is brief; the mind can’t camp out on any one of them for long. And when something does require sustained thought, women often have to grab minutes here and there, thinking in the margins of a day that never fully pauses.

People have noticed this for ages: women often manage to think clearly in circumstances where many men would excuse themselves from even trying. And even when the subject matter is “small,” a woman’s mind is rarely allowed to be empty in the way a man’s mind often is when he isn’t engaged in what he considers the central business of his life. A man’s ordinary life may revolve around one “main pursuit.” A woman’s ordinary life is, more often, everything at once—and it keeps moving as steadily as the world itself.

The Brain-Size Argument (And Why It Doesn’t Settle Anything)

Then comes another claim: that anatomy proves men have greater mental capacity because men have larger brains.

First, the fact itself isn’t securely established. The claim usually rests on inferring brain size from body size, and that criterion leads to absurd conclusions: tall, large-framed men would automatically be far smarter than smaller men—and elephants or whales would outclass humans by miles.

Anatomists say that brain size varies far less than body size, or even head size, and you can’t reliably infer one from the other. In fact, it’s certain that some women have brains as large as any man’s. I personally know of a man who had weighed many human brains and reported that the heaviest he’d ever seen—heavier even than Cuvier’s, the heaviest previously recorded—belonged to a woman.

Second, even if average differences in brain size did exist, the relationship between brain and intelligence still isn’t fully understood and remains disputed. We can be confident of a close relationship: the brain is the physical organ of thought and feeling. And it would be strange if size had no relation to power—if increasing the instrument never increased the function. But it would be just as strange if size were the only thing that mattered.

In nature’s most delicate operations—especially in living creatures, and most of all in the nervous system—differences in outcome depend as much on quality as on quantity. If you judge an instrument by the fineness and delicacy of the work it can do, the signs actually point toward a greater average fineness in women’s brains and nervous systems than in men’s.

And even if we set “quality” aside as hard to measure, there’s another factor: an organ’s efficiency depends not only on size but on activity. One rough way to estimate activity is blood circulation—because circulation delivers both stimulus and repair.

So it wouldn’t be surprising—and it would even fit some commonly observed differences—if men, on average, had an advantage in brain size, while women had an advantage in cerebral activity (stronger or more energetic circulation). If that were true, you might expect:

  • Men’s thinking and feeling to start up more slowly—large systems take longer to get fully moving.
  • But once fully engaged, men’s brains might endure longer on the same track, with more persistence and less need to shift.
  • Women’s thinking might be quicker to ignite and quicker to respond, but also more quickly fatigued—while recovering faster after fatigue.

Do men, in fact, most often excel in tasks that demand long, steady hammering at a single thought, while women tend to do best in what must be done rapidly? That does seem like a familiar pattern.

But I have to underline this: all of this is speculation. It’s meant only to suggest a line of inquiry, not to claim settled knowledge. I’ve already rejected the idea that we can presently know there is any natural difference at all in the average strength or direction of men’s and women’s mental powers—much less specify what it is.

And why is this so hard to know? Because the psychological laws by which character is formed have barely been studied with scientific seriousness, and in this particular case they’ve almost never been applied. The most obvious external causes of differences in character—education, habit, social role—are constantly ignored: the observer overlooks them, and the dominant schools of both natural history and mental philosophy tend to treat them with a kind of contempt. Whether they look for the source of human differences in matter or in spirit, they often unite in sneering at those who explain differences through people’s different relationships to society and everyday life.

Why “Women’s Nature” Looks Different Everywhere

The popular “knowledge” about women is often nothing more than shallow generalization: unexamined conclusions drawn from whatever examples happen to be nearby. That’s why different countries carry different stereotypes, depending on what local customs and circumstances have produced in the women who live there.

  • In parts of the East, people claim women are naturally more voluptuous; you can see this in the harsh abuse of women on that basis in some Hindu writings.
  • In England, men commonly claim women are naturally cold.
  • The famous sayings about women’s fickleness are largely French in origin, going back at least to a well-known couplet attributed to Francis I.
  • Yet in England you’ll often hear the opposite claim: that women are more constant than men.

One reason is social pressure. Inconstancy has long been counted more discreditable to women in England than in France, and Englishwomen are also, deep down, more subdued by public opinion.

And here’s an aside that matters: Englishmen are in especially poor circumstances for deciding what is “natural”—not only in women, but in men, or in human beings at all—if they rely mainly on English experience. Nowhere does human nature show less of its original outline. For better and for worse, the English are further from a “state of nature” than any other modern people. They are, more than most, a product of civilization and discipline.

England is the country where social discipline has succeeded not so much in conquering human impulses as in suppressing whatever might conflict with the rules. The English don’t just act according to rule; they often feel according to rule. In many countries, social requirements may dominate, but individual nature is still visible underneath—sometimes resisting. Rule may be stronger than nature, but nature is still there. In England, rule has often substituted itself for nature. Much of life is carried on not by following inclination under the guidance of rules, but by having no inclination except the inclination to follow rules.

That has an admirable side, no doubt—and also a miserable one. But it makes an Englishman unusually unfit to judge original human tendencies from his own experience.

Elsewhere, observers fall into different errors. An Englishman is often ignorant; a Frenchman, prejudiced. The Englishman’s mistakes are usually negative: he assumes something doesn’t exist because he’s never seen it. The Frenchman’s mistakes are often positive: he assumes something must exist everywhere because he’s seen it somewhere. The Englishman doesn’t know nature because he hasn’t had the chance to observe it. The Frenchman often knows a lot about it, but misreads it because he’s mostly seen it distorted—civilization doesn’t just conceal nature; it can also reshape it. Society can disguise natural tendencies in two different ways:

  • by starving them until only a weak residue remains
  • or by transforming them so they grow in directions they wouldn’t have taken on their own

What We Can (And Can’t) Infer

So yes: right now we can’t know how much of the mental difference we think we see between men and women is natural and how much is artificial—or whether there are any natural differences at all—or what would remain if artificial causes were removed.

I’m not going to pretend to do what I’ve just called impossible. But uncertainty doesn’t forbid careful conjecture. When certainty can’t be reached, you can still sometimes move toward probability.

The most approachable question is the first one: where do the differences we currently observe come from? And the only way to approach it is to trace the mental consequences of external influences. We can’t isolate a human being from life and society and run an experiment on “pure nature.” But we can look at what people are, look at the circumstances that shaped them, and ask whether those circumstances could plausibly have produced what we see.

So let’s take the one striking case observers point to when they argue for women’s inferiority to men—setting aside the purely physical difference in average bodily strength.

No first-rank work in philosophy, science, or art—no achievement that clearly sits at the very top of the scale—has been produced by a woman. Must we explain that by saying women are naturally incapable?

Is the Sample Size Even Big Enough?

First, we should ask whether experience gives us enough evidence to draw a legitimate conclusion. It’s been barely three generations since women—apart from rare exceptions—have even begun to test their powers in philosophy, science, or art. Only in the present generation have their efforts become at all numerous, and even now, outside England and France, they remain extremely few.

So ask yourself: purely by the odds, should we expect a mind of the very highest speculative or creative rank to have appeared in that short time among the small number of women whose interests and social position even allowed them to devote themselves to such pursuits?

In all areas where there has actually been time for development—meaning everything except the very topmost heights of excellence—women have done as much, and won as many of the available “prizes,” as you’d expect given the length of time and the number of competitors. And if we look back to the earlier period, when very few women even attempted such work, some of those few succeeded with distinction.

The Greeks counted Sappho among their great poets. And if Myrtis (said to have taught Pindar) and Corinna (who reportedly beat him five times in poetic contests) could even be compared with a name like his, they must have had real merit. Aspasia left no philosophical writings, but it’s an accepted fact that Socrates sought her out for instruction and openly acknowledged that he benefited from it.

Where the Gap Seems to Be

If we compare women’s work with men’s in modern times—whether in literature or the arts—whatever inferiority appears tends to shrink down to one central point. And it’s a serious one:

  • a relative lack of originality

Not a total lack. Any work of real value has its own originality; it’s a product of the mind, not a mere copy. Women’s writings contain plenty of original thoughts in the sense of being unborrowed—arriving from the writer’s own observation and reasoning. But women have not yet produced many of the great, blazing ideas that change the era of thought, nor the fundamentally new artistic conceptions that open a whole new range of effects and found a school.

Most women’s compositions build on the existing stock of ideas, and their creations don’t stray far from established types. That is the main kind of inferiority their works seem to show. In execution—the careful application of thought, the finishing of detail, the perfection of style—there is no inferiority. Some of the best modern novelists, in composition and management of detail, have been women. Few writers in modern literature have given thought a more eloquent vehicle than Madame de Staël. And as a specimen of pure artistic excellence, it’s hard to find prose superior to Madame Sand’s—her style acts on the nervous system the way a Haydn or Mozart symphony does.

So the chief thing missing, again, is high originality of conception. The next step is to ask whether there is any way to account for that deficiency.

A Historical Reason to Expect Fewer “Era-Making” Ideas

Remember this: for a vast stretch of human history—during much of the period when great and fertile new truths could be reached by sheer genius with relatively little accumulated study and formal preparation—women did not participate in speculative thought at all.

From Hypatia’s time to the Reformation, the illustrious Héloïse is almost the only woman for whom such an achievement might have been possible; and we can’t know how much speculative power in her was lost to the world through the misfortunes of her life.

Ever since any meaningful number of women have been encouraged to think seriously, originality has not been something you can stumble into cheaply.

That’s not because women lack talent. It’s because the bar for “original” has moved. Most of the ideas you can reach just by being naturally bright—by sheer native horsepower—have already been found. Today, truly high-level originality almost always comes from people who’ve done the long, unglamorous work: years of training, and a deep familiarity with what earlier thinkers have already built.

One writer (Mr. Maurice, I believe) put it neatly: the most original minds of our era are often the ones who know the history of ideas best. That pattern isn’t going away. Knowledge now stacks like a skyscraper. If you want to add a new floor, you have to climb all the floors below it, and you have to haul your materials up with you.

So the real question is practical: how many women have been allowed to do that climb?

Take mathematics. Perhaps only Mrs. Somerville among women has mastered enough advanced mathematics to be in range of a major discovery. And even then—does it prove “inferiority” that she didn’t happen to be one of the tiny handful of people in her lifetime who attached their name to a dramatic leap forward? In political economy, since it became a science, only two women have known enough to write usefully about it. But look at the flood of men who’ve written on the same subject: how many of them could honestly claim more than “useful,” or even meet that standard?

Or consider fields that demand massive background knowledge:

  • If no woman has yet been a great historian, which woman has had the erudition that great history requires?
  • If no woman is a great philologist, which woman has been able to study Sanskrit and Slavonic, the Gothic of Ulphilas, and the Persian of the Zend Avesta?

Even in everyday, hands-on domains, we all know what “originality” looks like when it comes from an untrained genius. It usually means reinventing, in crude form, something other people already invented—and then refined—over many rounds. You don’t judge someone’s capacity to create at the frontier if you’ve never let them get anywhere near the frontier. When women have had the same preparation that men now need to be outstandingly original, then it will make sense to judge their originality by experience rather than by prejudice.

That said, something else is true. People who haven’t studied widely and carefully can still have real flashes of insight—an elegant intuition they can’t yet prove, an idea they can suggest but can’t properly defend. And sometimes, once developed, that idea becomes a genuine contribution to knowledge.

But notice the catch: until someone with the relevant background picks it up—tests it, shapes it into scientific or practical form, and fits it into the existing body of knowledge—the insight can’t receive full justice.

Do such flashes occur to women? Of course they do. They occur constantly—by the hundreds—to any woman with a real intellect. The tragedy is that they’re often lost, because there’s no husband or friend nearby with the education needed to recognize the idea’s value and carry it into the public arena. And even when the idea does make it out, it frequently shows up under his name, not hers. Who can say how many supposedly “male” original thoughts began as a woman’s suggestion, and became “his” only because he verified them, did the formal work, and published? If I judge by my own experience, the number is very large.

Now shift from philosophy and science to literature in the narrower sense, and to the fine arts. There’s an obvious reason women’s writing has often looked, in its overall shape and instincts, like an imitation of men’s: the men got there first.

Critics love to say Roman literature imitates Greek literature. Why? Not because Romans were incapable of originality, but because Greek literature already existed as a mature tradition. If women had lived in a separate country from men and never read men’s books, they would have developed their own literature. But they didn’t walk onto an empty stage. They arrived to find a sophisticated literature already built, with recognized models and famous masters. Under those conditions, it’s unsurprising that they didn’t create a distinct tradition from scratch.

We can see the same dynamic in architectural and cultural history. If the knowledge of classical antiquity had never been interrupted—or if the Renaissance had arrived before the Gothic cathedrals—those cathedrals likely never would have been built. In France and Italy, the return to ancient models actually halted local originality even after it had started to grow. Strong, prestigious examples can smother emerging styles.

So it goes with authorship. Women who write are, almost inevitably, students of the great male writers. And that’s not a special weakness; it’s how learning works. A painter’s earliest canvases—even if he’s destined to become a Raphael—often look like the work of his teacher. Even Mozart didn’t reveal the full reach of his originality in his first compositions. What a few years are to a single prodigy, generations are to a large group that is only beginning to enter a field.

If women’s literature is going to develop a collective character distinct from men’s—assuming any differences in natural tendencies exist—then more time is required than has yet passed for it to free itself from inherited templates and follow its own impulses.

But I doubt there will turn out to be some single “female” set of natural literary tendencies that separates women’s genius from men’s. What is certainly real is individuality: each writer has her own mind and direction. Right now, though, that individuality is often pressed down by the sheer weight of precedent and imitation. It may take generations before women’s individual voices, in large numbers, become strong enough to resist that pull.

At first glance, the fine arts, in the strict sense, seem like the place where women’s supposed lack of original power looks most obvious. People can say, “Society doesn’t bar women from art—if anything, it encourages them.” In well-off families, a girl’s education is often heavily weighted toward music, drawing, and similar accomplishments. Yet women have still produced fewer figures at the very top of artistic fame than men have.

But that gap has a familiar explanation—one that matters in the arts even more than elsewhere: the enormous advantage of professionals over amateurs.

Educated women are commonly taught some branch of art, but not so they can earn a living from it, or build social standing through it. Women artists, in general, are trained as amateurs. The rare exceptions mostly prove the rule.

Consider music. Women are taught to perform—to execute—more than to compose. And in music, the area where men clearly surpass women is composition. The only branch of the fine arts women have pursued, to a notable extent, as a lifelong profession is acting; and there women are widely admitted to be equal to men, if not better.

A fair comparison would look like this: match women’s artistic output against the output of men who are also not professionals. In musical composition, for instance, women have surely produced work fully as good as anything produced by male amateurs.

Painting provides another clue. A few women now practice painting professionally, and they are already beginning to show as much talent as one could reasonably expect at this stage. And even male painters—whatever Mr. Ruskin may say—haven’t been especially remarkable for centuries; it may be a long time before they are again. The old masters were not “better” because of a mysterious golden essence in their brushwork. They were better because a better class of minds poured into the art.

In fourteenth- and fifteenth-century Italy, painters were among the most accomplished men of their age: broad, encyclopedic intellects, comparable to the great minds of ancient Greece. Art then was one of the grandest forms of human excellence. It brought artists into the company of rulers; it made them social equals of the highest nobility—much as only political or military greatness tends to do now.

Today, people of similar caliber tend to find other arenas that promise greater impact and greater fame, more aligned with the needs of the modern world. Only now and then does someone like a Reynolds or a Turner (and I won’t pretend to rank them among great men generally) devote himself fully to painting.

Music is somewhat different. It seems less dependent on broad general powers, and more dependent on a natural gift. So it may seem surprising that none of the great composers has been a woman. But even a natural gift can’t build cathedrals without scaffolding. To become capable of great creation, musical talent still requires study and a professional, life-shaping devotion.

And look at the countries that have produced the top tier of composers: Germany and Italy. In those countries, women—both in specialized training and in general education—have historically lagged far behind women in France and England. In fact, it’s not an exaggeration to say many women there have been only lightly educated, with little cultivation of the higher intellectual faculties. Meanwhile, among men in those countries, those who know the principles of composition number in the hundreds—probably the thousands—while the women who know them barely reach into the scores.

If you believe in simple averages, the conclusion is straightforward: you wouldn’t expect more than one eminent woman composer for every fifty eminent men. And the last three centuries haven’t produced fifty eminent male composers in Germany and Italy combined.

Beyond all this, there are additional reasons women tend to fall behind men even in pursuits that are, in theory, open to both.

First: very few women have time. That may sound odd, but it’s an undeniable social fact. A woman’s time and attention are pre-claimed by heavy practical demands.

There is, to begin with, the supervision of the household and the family’s spending—work that typically falls on at least one woman in every family, often the mature woman with experience. Only very rich families can outsource this completely, and even then they pay for it in the waste, cheating, and mismanagement that so often come with hired administration.

Even when household management isn’t physically exhausting, it’s mentally relentless. It requires constant watchfulness, attention to every detail, and nonstop problem-solving—anticipated and unanticipated—hour by hour. The person responsible can hardly ever truly drop it.

And if a woman’s rank or wealth lightens those duties, society tends to pile on another: she becomes the manager of the family’s social life. The less she’s demanded at home, the more she’s expected to orchestrate the outer world—dinners, concerts, evening gatherings, visits, correspondence, and all the machinery of “society.”

On top of all that sits a duty that society assigns almost exclusively to women: being charming.

For a clever woman in the upper ranks, cultivating manner and mastering conversation can consume nearly all her abilities. And even if you look only at the most superficial side of the matter, consider the sheer quantity of thought that many women—any who care about dressing well, not extravagantly but with taste and appropriateness—must spend on their clothing, and perhaps on their daughters’ as well. That mental energy alone could produce respectable results in art, science, or literature. Instead, it drains time and attention that might have gone elsewhere.

If all these small practical interests—which are made large for women—still left them with abundant leisure and mental freedom for sustained study or creation, then women would need a far greater natural surplus of active power than most men. But matters don’t even stop there.

Independently of the routine tasks assigned to women, they are expected to keep their time and faculties perpetually available to other people. A man, even without a profession, can usually excuse himself by saying he has a pursuit; no one is offended if he devotes himself to it. “He’s busy” is accepted as a legitimate reason not to meet every casual demand.

But are a woman’s occupations—especially the ones she chooses voluntarily—treated as an excuse from the so-called calls of society? Almost never. Even her most necessary duties barely count as exemptions. It usually takes something extraordinary—illness in the family, some disruption to the normal order—before she’s permitted to put her own work ahead of someone else’s entertainment.

She must live at other people’s convenience: at the beck and call of someone, and usually of everyone. If she has a study, she must steal minutes for it—snatching whatever short interval happens to open. A celebrated woman once remarked, truly, that everything a woman does is done in the margins, in odd scraps of time.

Given that, is it surprising she doesn’t reach the highest peaks in pursuits that demand long, unbroken concentration and make themselves the central passion of life? That is what philosophy requires, and it is especially what art requires—where not only the mind and feeling, but the hand itself must be kept in constant training to reach great skill.

One more point matters here.

In any intellectual or artistic field, there is a level of competence sufficient to earn a living, and there is a higher level from which the works are produced that make a name immortal. For professionals, the first level has obvious motivations: survival, income, status. But the second level—true greatness—is rarely reached without, at least at some period of life, an intense hunger for fame.

Nothing less usually drives people through the long, patient drudgery that even the greatest natural gifts must endure if they are to stand out in fields already crowded with masterpieces.

Now, whether from nature or from circumstances, women rarely show this fierce eagerness for celebrity. Their ambition often stays within narrower boundaries. They seek influence over the people around them. They want to be liked, loved, admired by those they see every day. And for that, a moderate degree of knowledge and accomplishment usually suffices—so it suffices.

But this trait cannot be ignored when judging women as they presently are. And I do not believe it is innate. It is the predictable product of their situation.

In men, the love of fame is encouraged by education and by public opinion. The willingness to “scorn delights and live laborious days” for glory is treated as the mark of noble minds—even when it is wryly called their “last infirmity.” Fame also opens doors to many other ambitions, including, not least, the favor of women. For women, those doors are closed, and the desire for fame itself is labeled bold, improper, unfeminine.

And how could a woman’s interests not become concentrated on the impressions she makes on her immediate circle, when society arranges that her duties revolve around that circle, and contrives that her comforts depend on it? The craving for respect from others is as strong in women as in men. But society has organized things so that, in ordinary life, public respect is usually attainable for a woman only through her husband or male relatives—while private respect is jeopardized if she becomes personally prominent, or appears in any role other than an appendage to men.

Anyone who can honestly estimate what a whole domestic and social position does to the mind—what it means to live under that daily habit—can see in it a complete explanation for almost all the apparent differences between women and men, including those that seem to imply inferiority.

As for moral differences, as distinct from intellectual ones, the conventional story flatters women. Women, we are told, are “better” than men. It’s an empty compliment—one that must make any woman with spirit smile bitterly—because there is no other sphere of life in which it is considered natural and proper for the “better” to obey the “worse.”

If this cliché contains any truth at all, it amounts to an unintended admission by men: power corrupts. That is the only general lesson the claim could prove or illustrate. And there is truth in the broader point. Servitude—unless it brutalizes outright—corrupts both sides, but it corrupts the masters more than the enslaved. It is healthier for the moral nature to be restrained, even by arbitrary power, than to exercise arbitrary power without restraint.

Women are often praised for staying out of trouble with the law: they show up less often in criminal records than men. But that “fact” proves far less than people think. You could say the same thing—just as accurately—about enslaved people. When someone’s life is tightly controlled by another person, they don’t get many chances to commit crimes except when their controller orders it or benefits from it.

That’s why it’s so revealing (and so frustrating) to watch educated society miss what’s right in front of them. People casually explain women’s situation as if it were a natural report card on women’s minds and morals—dismissing women’s intellect with a smirk, then applauding women’s “moral nature” with syrupy compliments—without noticing how power, dependence, and training shape behavior.

“Women are morally better” vs. “women are morally biased”

These two sayings are really a matched pair: one flatters, the other insults, and both dodge the real question.

We’re told women can’t resist personal likes and dislikes—that when serious decisions arise, their judgment gets bent by sympathy or antipathy. Even if we accept that claim for the sake of argument, something still needs proving: are women actually misled by feeling more often than men are misled by self-interest?

If there’s a difference, it looks less like “emotion versus reason” and more like this:

  • Men are tempted off the path of duty by their own interests—money, status, power, comfort.
  • Women, who aren’t permitted to have much in the way of interests of their own, are pushed off course by someone else’s interests—a husband, children, family, or whoever their lives are organized around.

And the way society educates women makes this outcome almost inevitable. From childhood, women are taught—explicitly and implicitly—that their obligations are owed mainly to the small circle attached to them: the people in their household, their relatives, their immediate connections. They’re rarely taught the basic ideas that support an intelligent concern for wider responsibilities: public welfare, civic duty, larger moral aims, or the structures that govern a society. So when critics accuse women of narrowness, the accusation often boils down to a simple irony: women are being blamed for practicing, with impressive fidelity, the only duty they’ve been taught—and nearly the only kind of duty they’ve been allowed to practice.

Why “they don’t complain” doesn’t settle anything

Privileges aren’t usually surrendered because the privileged suddenly grow noble. More often, concessions happen when the disadvantaged gain enough leverage to force them. So as long as men can tell themselves, “Women don’t complain,” arguments against male privilege are likely to be ignored.

But even if that silence helps men keep an unfair advantage a bit longer, it doesn’t make the advantage any less unfair.

The same pattern shows up in other systems of confinement. Think of women in an Eastern harem: many don’t complain that they lack the freedom European women have. In fact, they may view European women as shockingly bold and unfeminine. And why wouldn’t they? People rarely protest the overall structure of their world—especially if they’ve never seen a real alternative.

Women, in the broad sense, often don’t protest “the condition of women” as a political arrangement. Or rather, they do complain—but in a way that’s designed to be safe. Women have long written mournful reflections about their lot, and in earlier times this was even more common precisely because such writing could be treated as harmless: a sad song, not a demand. Those laments often resemble the way men complain about the general disappointments of life. They express suffering without assigning blame, and without insisting that anything should change.

Yet there’s a telling detail: even when women don’t challenge husbands as a class, each woman complains about her husband—or about the husbands around her. That’s exactly how early resistance tends to look in every form of servitude. People begin by condemning abuses, not the system that enables them.

History repeats this rhythm:

  • Serfs initially complained not about lordship itself, but about cruel lords.
  • Commoners first asked for local privileges, then demanded not to be taxed without consent—and only much later would it have occurred to them to claim a share in sovereign power.

So it shouldn’t surprise us that challenging entrenched rules is still treated as outrageous—much the way it once seemed outrageous for a subject to claim the right to resist a king.

For women, the obstacle is sharper still. A woman who joins a public movement her husband opposes effectively volunteers for suffering without even gaining the ability to lead. Under the law, a husband can simply stop her—shut down her public work, interrupt her participation, and end what she’s trying to do.

So it’s unrealistic to expect women, on their own, to carry the full burden of women’s emancipation. They can’t be expected to devote themselves to that cause until a substantial number of men are ready to join them as partners in the fight.


In fact, if you widen the lens beyond Europe and look at parts of Asia as well, there’s evidence that cuts sharply against the idea that women are naturally unfit to govern. In many Indian states, when you find a principality that’s run firmly, carefully, and economically—where order is kept without oppression, agriculture expands, and ordinary people prosper—very often (by one experienced observer’s estimate, in most cases) the effective ruler is a woman.

The reason is legal and practical. In those systems, a woman may not be allowed to reign in her own right, but she can serve as the lawful regent when the heir is still a minor. And minorities are common, because male rulers often die early—frequently from lives drained by inactivity and indulgence.

What makes this especially striking is the degree of isolation imposed on these women. Many of these princesses are never seen in public. They don’t speak with men outside their families except from behind a curtain. Many don’t read; and even if they did, there may be little in their languages that would teach them practical politics. Yet, despite all that—despite being denied the normal channels of education and experience—they often govern with competence. If you want a real test of “natural capacity,” it’s hard to imagine a harsher one, and the results are hard to ignore.

A related point applies on a smaller scale: the kind of mind that can accurately grasp what’s true and what’s right often shows up across very different domains. The same sense that recognizes sound principles in “serious” art can also recognize good taste in something as changeable as clothing. Fashion swings from long to short, large to small, and back again—but it still rests on an underlying form. Someone who consistently invents well or dresses with real taste is probably showing the same practical judgment that, aimed at larger goals, could have produced excellence in more enduring work.

IV

A big question still hangs in the air—one that people who are starting to doubt the old order tend to ask loudest: What’s the payoff? What good will come from changing our customs and institutions? Would the world actually be better if women were free? And if not, why stir people up and risk a social upheaval in the name of some “abstract right”?

On one part of the issue, though, it’s hard to pretend the question is even open: marriage. The misery, moral damage, and sheer everyday harm produced—over and over again—by legally placing one adult human being under another’s control are too obvious, and too brutal, to dismiss. People who aren’t thinking carefully (or aren’t being candid) will point only to the most extreme cases, or the ones that make the news, and claim the problem is “exceptional.” But no one can honestly deny that these abuses exist, or that many of them are severe.

And here’s the key point: you can’t seriously restrain the abuse of power while leaving the power intact. This authority isn’t reserved for decent people. It’s handed—by law and custom—to all men, including the cruelest and the most criminal. The only real “check” is social opinion, and men of that sort usually live inside a moral echo chamber: the only approval they care about is the approval of men like themselves. If the legal power of a husband over a wife didn’t routinely produce tyranny—if the law could safely assume that men would always use domination only for someone else’s benefit—then society would already be a paradise. We wouldn’t need laws to restrain vicious impulses at all. You’d have to imagine not only that justice had returned to earth, but that even the worst man had become its home.

That’s why the legal servitude built into marriage is so jarring: it flatly contradicts the principles the modern world claims to live by, and the hard-earned experience that produced those principles. Now that Black slavery has been abolished, marriage remains the one place where the law still delivers a fully capable adult person into another person’s “tender mercies,” on the naïve hope that the one holding power will use it only for the other’s good. In our legal system, marriage is the only remaining real bondage. There are no legal slaves left—except, in effect, the woman who runs a household.

So it’s not mainly here that people will press the old question of cui bono: who benefits? Someone might argue that changing marriage would bring new problems, or that some costs would outweigh some benefits. But the existence of real benefits—less cruelty, less corruption, less legalized injustice—is not up for serious dispute.

The harder argument, for many people, is the broader one: removing women’s legal and social disabilities; recognizing them as men’s equals in everything that counts as citizenship; opening to them every honorable occupation; and giving them the education and training that make those occupations possible. For a lot of people, it isn’t enough to say, “The inequality has no legitimate justification.” They want a concrete answer: What do we gain by ending it?

First: justice in the most universal human relationship

Start with the simplest, most sweeping advantage: the most widespread and intimate of all human relationships would be governed by justice instead of injustice.

It’s hard to overstate what that would do to human character. In fact, the bare statement ought to be enough for anyone who believes moral words mean anything at all. A huge share of our worst tendencies—selfishness, self-worship, the reflex to value ourselves above others without reason—doesn’t come out of nowhere. It’s fed, day after day, by the current arrangement between men and women.

Think about what we teach a boy when he grows up believing this: without earning it, without effort, without merit—no matter how silly, shallow, ignorant, or dull he may be—he is “by right” superior to every person in an entire half of the human race, simply because he was born male. And that half will almost certainly include people whose real superiority he can feel every day. Even if his life is quietly guided by a woman’s judgment, he’s still trained to assume she can’t truly be his equal.

If he’s foolish, he’ll imagine she must be beneath him. If he isn’t foolish, something worse can happen: he may recognize that she’s better than he is in ability or judgment—and still believe that, despite her superiority, he is entitled to command and she is obliged to obey.

What does that do to a person’s character?

People from cultivated, “well-bred” circles often underestimate how deep this lesson runs, because in polite homes the inequality is carefully covered. Especially around children. Boys are expected to obey their mother as well as their father. They’re not allowed to bully their sisters. They may even see their sisters favored in small ways, with the “compensations” of chivalry made visible while the underlying servitude is tucked out of sight.

As a result, well-raised young men in the upper classes sometimes dodge the worst of this moral poisoning early on—and only collide with it later, when adulthood forces them to confront how things actually work. They don’t realize how early the opposite training begins in other households: how quickly a boy absorbs the idea that he is inherently above a girl; how the belief grows as he grows; how boys pass it to each other like an infection; how early a son can come to see himself as above his mother—owing her maybe patience, but not real respect; and how grand and sultan-like his sense of superiority can become, especially toward the woman he “honors” by allowing her to share his life.

Do we really think that doesn’t warp a man’s whole way of being—both privately and socially?

It’s the same psychology you see in hereditary rule: the king who thinks he is excellent because he was born a king; the noble who thinks he deserves deference because he was born a noble. The husband–wife relationship, as the law has shaped it, resembles the old lord–vassal relationship—except the wife is expected to give even more complete obedience than the vassal ever did. Whatever subordination did to the vassal, who can miss what command did to the lord? Either he was taught that his dependents were truly beneath him, or he came to feel entitled to rule people as good as himself for no reason except birth, because, as Figaro joked, he “took the trouble to be born.”

The self-worship of monarchs and feudal masters has its match in the self-worship encouraged in men. People don’t spend their childhood holding unearned distinctions without learning to preen. A few—rare, and usually the best—respond to unearned privilege with humility, especially when they sense the privilege exceeds their merit. Most respond with pride, and the worst kind: pride in accidents of birth rather than achievements of character.

And the most dangerous mix is this: believing you stand above an entire sex, plus having personal authority over one individual member of that sex. For a conscientious and affectionate man, that situation can become a school for restraint and care. For a different kind of man, it becomes a formal academy for arrogance and domineering behavior. If he has to hold that arrogance in check around other men—his equals—it often bursts out where it meets no resistance: toward anyone forced to tolerate it. And the person most likely to pay the price is the wife, who becomes the target of revenge for the restraint he was required to practice elsewhere.

Why this contradiction poisons society

When you build domestic life on a relationship that violates the first principles of social justice, the example it sets—and the emotional education it gives—can’t help but corrupt. The effect is so large that, from inside our current world, it’s hard to imagine how much better things could become if we removed it.

Education and civilization have been working, slowly, to erase the old “law of force” from human character and replace it with the influence of justice. But as long as this fortress remains standing at the heart of society, the reform stays superficial. We’re trying to drain a swamp while leaving the spring that keeps feeding it.

The moral and political principle of the modern world is supposed to be this: people deserve respect because of what they do, not what they are. Merit—not birth—is the only rightful claim to authority. If no one were allowed lasting, non-temporary authority over another human being by mere status, society wouldn’t spend its energy building up domineering instincts with one hand and restraining them with the other. Children would finally be trained, from the beginning, in a consistent moral direction—and there would be a real chance they might keep to it.

But as long as the “right of the strong” governs the home—right in the center of social life—every attempt to make equal justice rule society’s outward actions will be a struggle uphill. The law of justice (which is also the law Christianity teaches) won’t take hold in people’s deepest feelings. Even when they outwardly submit to justice, their inner sentiments will keep resisting it.

Second: doubling the brainpower available to humanity

There’s another major gain from letting women use their abilities freely—by giving them the freedom to choose their work, and opening to them the same field of occupations, rewards, and encouragements available to men.

It would double the pool of mental power available for humanity’s higher work.

Right now, for every one person capable of greatly benefiting society—through public teaching, administration, or leadership in social and civic life—there could be two. And that matters because true ability is scarce. The world is constantly short of people who can do difficult work excellently. The loss we accept by refusing to use half the talent the species has is enormous.

Yes, some of that ability isn’t completely wasted. Much of it goes into managing homes and the few other roles women are allowed, and some of the rest reaches the world indirectly, through the personal influence women have over individual men. But these benefits are limited and local. Their reach is narrow.

And even if you count those existing contributions as a subtraction from what society would newly gain, you must add something on the other side: the intellectual stimulus men would feel from genuine competition—or, more accurately, from the new necessity of earning precedence instead of assuming it.

How freedom would educate—women and men

This increase in human intellectual power would come partly from a better, more complete education for women—education that would improve pari passu (step for step) with men’s. Women would generally be raised with the same ability to understand business, public affairs, and serious intellectual questions as men of the same social class. And the few—of either sex—who are capable not just of understanding what others do and think, but of doing or thinking something substantial themselves, would have equal opportunities to develop and train those abilities.

In that way, widening women’s sphere would improve society by lifting women’s education to the level of men’s, and letting women share fully in every advance made in men’s education.

But there’s an even deeper educational effect: simply removing the barrier would teach. Breaking the spell of the idea that big subjects—public questions, general interests, the wider world of thought and action—are “men’s business” would itself be a powerful kind of schooling.

Imagine what it would mean for a woman to live with the steady awareness that she is, straightforwardly, a human being like anyone else:

  • entitled to choose her pursuits
  • drawn by the same incentives as anyone else to care about whatever is interesting to human beings
  • entitled to hold and express opinions about public matters, and to exert whatever influence an individual mind naturally has—whether or not she takes an official role in public life

That consciousness alone would expand women’s abilities dramatically, and widen the range of their moral feelings as well.

Women’s influence has always been real—freedom would make it healthier

Even beyond the added talent available for managing human affairs—an area where we certainly can’t afford to throw away half of what nature offers—women’s opinions would likely have a better influence on society’s beliefs and moral sentiments, not necessarily a larger one.

Because women’s influence has always been substantial. Across recorded history, two forces have given women a powerful role in shaping civilization: mothers shaping the early character of sons, and young men striving to win the approval of young women. These have repeatedly guided major shifts in social development. Even in the Homeric world, Hector’s sense of aidōs—his shame or honor-consciousness—toward the Trojan women is recognized as a strong motive in his actions.

Historically, women’s moral influence has tended to work in two main ways.

  1. A softening, restraining influence.
    Those most likely to suffer from violence naturally push—whenever they can—to limit it and reduce its cruelty. Those not trained to fight tend to prefer ways of settling disputes other than fighting. And those who have suffered most from selfish passions often become the strongest supporters of moral rules that can bridle those passions.

    Women, for example, played a powerful role in persuading northern conquerors to adopt Christianity—an ethical system far more favorable to women than what came before it. The conversions of the Anglo-Saxons and the Franks, in particular, can be traced in large part to the influence of the wives of Ethelbert and Clovis.

  2. A stimulus to admired qualities in men.
    Because women were not trained in certain “protector” virtues, it mattered to them that men display those virtues. Courage and military excellence have long been admired by women. But the stimulus reaches beyond courage, because—given women’s position—the surest route to women’s favor has often been to be admired by men. Male reputation has acted as a kind of gatekeeper to female approval.

Out of these two forces grew chivalry—an attempt to combine the highest warlike virtues with a very different set of virtues: gentleness, generosity, and self-sacrifice toward the unarmed and defenseless in general, along with a special posture of submission and worship directed toward women. Women, unlike other defenseless groups, could reward men voluntarily with honor and favor, rather than being subdued by force.

Chivalry, of course, fell painfully short of its own ideal—practice almost always lags behind theory. Still, it stands as one of the most valuable monuments in our moral history: a rare case where a chaotic, violent society tried, in an organized way, to live by a moral ideal far ahead of its actual institutions. It failed in its central aim, and it was never fully effective, but it left a lasting—and mostly valuable—imprint on the feelings and ideas of later ages.

The chivalric ideal represents the high-water mark of women’s moral influence on humanity’s development. If women are to remain subordinate, it’s genuinely sad that the chivalric standard has faded, because it was the only widely available counterweight strong enough to soften the corrupting effects of that subordination. But broader changes in the condition of the human race made it inevitable that a completely different moral ideal would replace the chivalric one.

Chivalry was an early attempt to bolt a moral code onto a world where almost everything—good and bad—turned on individual strength. If one person could outfight or outbully everyone else, that single person’s character mattered enormously. So chivalry tried to “soften” raw power with ideals like delicacy, generosity, and honorable conduct.

Modern society doesn’t work that way. Even in the military, outcomes depend far less on one hero and far more on coordinated groups, logistics, and systems. And outside the military, daily life has shifted even more dramatically—from fighting to commerce, from a warrior culture to an industrial one. This new way of living doesn’t make generosity obsolete, but it changes what morality has to rest on.

In a modern civilization, the moral foundations have to be sturdier than “I hope the strong feel noble today.” They have to be:

  • Justice: each person respecting everyone else’s rights
  • Prudence: each person having the capacity (and expectation) to manage their own life responsibly

Chivalry, by contrast, left most wrongdoing untouched because it came with no legal checks. It didn’t systematically prevent abuse; it merely encouraged a handful of people to choose right over wrong by rewarding certain behaviors with praise and admiration. But if you want morality to reliably hold, you can’t build it mainly on applause. Morality, in practice, depends on penal sanctions—real consequences that deter harm. Honor is a comparatively weak motive for all but a few people, and for many it doesn’t work at all.

That’s why modern society has one huge advantage: it can repress wrongdoing across everyday life by using the collective strength that civilization makes possible—courts, laws, enforcement, institutions. In other words, the weak aren’t left defenseless. They’re protected by law, so their safety doesn’t depend on whether the powerful happen to feel chivalrous. The charm of the chivalrous personality hasn’t changed; what’s changed is that the rights of the vulnerable and the general comfort of life now rest on something steadier and more dependable—except, the author argues, in one major domain: marriage.


Women, public opinion, and what chivalry became

Women still have real moral influence, but it’s less distinct than it once was. Instead of operating as a separate “female” moral force, it has blended more into the general pressure of public opinion.

Even so, women continue to help keep parts of the chivalric ideal alive in two ways:

  • through the “contagion” of sympathy—people catching feelings from one another
  • through men’s desire to be admired by women, which pushes men to display spirit, generosity, and style

On those traits—spirit and generosity—the author says women generally set a higher standard than men do. On justice, he claims their average standard is somewhat lower. In private life, he argues, women’s influence tends to encourage the “softer” virtues (kindness, compassion) and to discourage the “sterner” ones (tough-minded fairness, severity where necessary), though individual character makes many exceptions.

When it comes to one of life’s hardest moral tests—the clash between interest and principle—the author thinks women’s influence is mixed.

  • If the principle at stake is one of the few that a woman’s moral or religious education has strongly impressed on her, she can be a powerful ally of virtue. Husbands and sons, he says, are sometimes pushed by women into real self-denial they wouldn’t have managed alone.
  • But he argues that, given women’s education and social position at the time, the set of principles deeply drilled into them covers only a small part of morality—and those principles are mostly negative: “don’t do this,” “don’t do that,” rather than guidance about the overall direction of one’s aims and actions.

Because of that, he claims women rarely encourage what he calls disinterestedness in the broad sense: devoting time, energy, and resources to goals that bring no private advantage to the family. He doesn’t blame women for this; he says it’s natural to resist causes you haven’t been taught to see as valuable—especially when those causes pull your husband or son away from you and away from the family’s immediate interests. Still, he concludes, this often makes women’s influence unfriendly to public virtue.


Anti-war sentiment, philanthropy, and the dangers of “short-sighted benevolence”

Women, the author adds, now shape public morality more than they used to, because their sphere has widened somewhat and many have taken up work beyond household life. He credits women’s influence with two distinctive features of modern Europe:

  • a strong aversion to war
  • an intense commitment to philanthropy

Both are admirable in the abstract. The problem, he says, is that the direction women often give these feelings in practice can be as harmful as it is helpful.

In philanthropy especially, he argues women tend to concentrate on two main areas:

  • religious proselytism
  • charity

He treats proselytism at home as a recipe for sharpening religious hostility. Abroad, he portrays it as rushing blindly toward a goal without understanding—or caring about—the damage caused by the methods used, damage that can undermine the religious aim itself and other valuable aims as well.

Charity, he says, has a built-in trap: the immediate effect on the person helped can clash with the long-term effect on the general good. And because women’s education has focused more on feeling than on analysis—and because daily life trains them to focus on immediate outcomes for specific people rather than distant outcomes for classes of people—they are, he claims, both less able to see and less willing to accept the long-run harm that can come from forms of help that feel compassionate in the moment.

He targets what he sees as a growing mass of unenlightened benevolence that:

  • takes responsibility for people’s lives out of their own hands
  • shields them from the unpleasant consequences of their choices
  • undermines self-respect, self-help, and self-control—the qualities he treats as essential for both personal prosperity and social virtue

This, he argues, wastes both money and goodwill by doing harm under the banner of kindness—and women, through both their giving and their influence, swell it dramatically.

He adds an important caveat: women are not especially prone to this mistake when they actually manage charitable schemes themselves. In fact, women who administer public charities sometimes see, with striking clarity, how certain kinds of aid can demoralize recipients. Their strength, he says, is their sharp perception of present realities—especially of the minds and feelings of people they deal with directly—and in this they can out-teach many male political economists.

The problem, he suggests, is with donors who give money from a distance. If you never have to face the results of what your giving does, how would you predict those results?

He then connects this to women’s social condition. A woman raised to the typical “lot” of women, and satisfied with it, is not trained to value self-dependence because she isn’t expected to live it. Her life is structured around receiving support, guidance, and “blessings” from those above her. So, he says, it’s easy for her to assume that what feels good for her—being provided for—must be good for the poor as well.

But, he argues, she misses a crucial difference: she is not free, and the poor are. If you give people what they need without requiring them to earn it, you can’t force them to earn it. Not everyone can be carried by everyone else. Society needs motives that lead people to care for themselves, and if someone is physically capable, the only charity that remains charity in the end is help that enables them to help themselves.


Why emancipation would change the moral “signal” women send

All of this, the author says, is why women’s role in shaping public opinion would improve if it were paired with broader education and practical familiarity with the real-world arenas their opinions affect—changes he believes would follow from social and political emancipation. And he thinks the improvement would be even more striking in the most constant arena of women’s influence: the family.


Marriage as a brake on moral ambition

People often say that among classes most exposed to temptation, a wife and children help keep a man honest—partly through the wife’s direct influence, and partly because he worries about their future. The author grants this is often true for men who are more weak than wicked. He also argues this good effect doesn’t depend on women being subordinate. In fact, he thinks it’s reduced by a reality many don’t like to admit: men of an “inferior” type tend, deep down, to feel less respect for anyone subject to their power.

But higher up the social ladder, he says the forces change. A wife’s influence tends to keep a husband from falling below the country’s common standard of approval—but it tends just as strongly to keep him from rising above it. She becomes an agent of mainstream public opinion.

So if a man marries a woman he considers intellectually inferior, he often finds her not merely a weight but a drag on his attempts to become better than public opinion demands. Under those conditions, the author says, “exalted virtue” is hard to reach. If the man holds unpopular truths, or sees truths before society does, or wants to live out moral commitments more seriously than most people do, marriage can become his greatest obstacle—unless he happens to have a wife who is equally above the common level.

One reason is straightforward: acting on higher principle often requires sacrifices—social standing, money, sometimes even livelihood. A man may be willing to face those risks for himself, but he hesitates to impose them on his family. And by “family,” he usually means his wife and daughters. He hopes his sons will share his convictions and accept the sacrifices willingly. But his daughters’ marriage prospects might depend on the family’s standing. And his wife, unable to understand the cause, can’t share the enthusiasm or the inward satisfaction he feels. The very things he is ready to give up may be everything to her. Even the best and most unselfish man, the author suggests, will hesitate a long time before asking her to bear that cost.

Even if what’s at stake isn’t comfort but merely social respect, the burden still bites. “Whoever has a wife and children has given hostages to Mrs. Grundy”—the personification of respectable public opinion. The husband may not care about her approval, or he may find compensation in the respect of like-minded people. But, the author says, he can offer no comparable compensation to the women connected with him.

Women are often mocked for putting so much weight on “what society thinks,” as if it were a childish weakness. The author calls that unfair. Society turns a comfortable-class woman’s life into continuous self-sacrifice, demanding constant restraint of her natural inclinations, and then pays her back largely in one currency: consideration—social respect. That respect is tied to her husband’s standing. After she has paid dearly for it, she’s told she may have to lose it for reasons she cannot feel the force of. She has sacrificed her whole life to secure it, and now her husband won’t even sacrifice what looks to her like a whim—an eccentric cause the world will label foolish, if not worse.

He argues the hardest case falls on a conscientious type of man: someone without the brilliance to shine among fellow believers, but who holds his opinions from conviction and feels bound to serve them openly—giving time, labor, and money to the cause. Worst of all, he says, is when such a man sits at a social rank that neither guarantees entry into “the best society” nor rules it out, and admission depends heavily on personal reputation. In that situation, even impeccable manners won’t save him if he becomes identified with opinions or public conduct that the social leaders dislike; it can mean effective exclusion.

He paints a common domestic story: a woman convinces herself—usually wrongly, he says—that nothing blocks her and her husband from the highest local society, the very circles where their peers mingle freely, except her husband’s unacceptable label: perhaps he’s a religious Dissenter, or he has a reputation for “low” radical politics. That, she believes, is why George can’t get a commission or a post, why Caroline can’t make a good match, why they don’t receive invitations or honors they “deserve.” With that kind of influence present in household after household—sometimes outspoken, sometimes simply felt—he asks, is it surprising that people are held down in a bland mediocrity of respectability, a hallmark of modern life?


The deeper problem: education that makes spouses strangers

There’s another damaging effect, he says, that comes not directly from women’s legal disabilities but from the sharp divide those disabilities create between men’s and women’s education and character. Nothing, he argues, is more hostile to the ideal of marriage as a true union of thought and inclination. Deep intimacy between people who are fundamentally unlike each other is a fantasy. Differences may attract at first, but likeness is what sustains a shared life—and the more alike two people are, the more suited they are to make each other happy.

As long as women are made so unlike men, he says, it’s no mystery that selfish men feel they “need” arbitrary power to head off conflict at the start by deciding every question in their own favor. When two people are extremely unlike, there can be no real identity of interest.

Often, he adds, married couples disagree conscientiously about the highest duties. When that happens, what reality is there in the marriage union? This isn’t rare when the woman has real earnestness of character, and in Catholic countries it is especially common when she is backed by the only other authority she’s been taught to obey: the priest. Protestant and Liberal writers, he notes, often attack priests’ influence over women—not chiefly because it is harmful in itself, but because it competes with the husband’s authority and encourages rebellion against his presumed infallibility.

In England, similar clashes sometimes appear when an Evangelical wife marries a husband of a different religious “type.” But more often, he says, this kind of dissension is avoided in a cruder way: by shrinking women’s minds until they have no opinions except those of “Mrs. Grundy” or those their husbands hand them.

Even without moral or religious disagreements, differences of taste can still drain happiness from married life. And while exaggerated differences between the sexes may excite men’s romantic instincts, the author argues they don’t build lasting marital happiness. Polite, well-bred couples may tolerate each other’s preferences—but is “mutual toleration” really what people dream of when they marry?

Once tastes diverge, wishes will diverge too, unless restrained by affection or duty—especially in the endless domestic questions that arise. Consider, he says, the simplest consequence: how different the social world each spouse will want to visit, or invite into the home.

Each partner naturally wants friends who share their own tastes. So the people one spouse enjoys may bore—or actively irritate—the other. Still, in modern married life they can’t fully keep separate social worlds the way couples once did, back when husbands and wives might practically run different households and maintain different “visiting lists.” Like it or not, they overlap.

And the overlap doesn’t stop with dinner invitations. It reaches straight into the most emotionally loaded project a couple shares: raising children. Each parent wants to see their own values and preferences echoed in the next generation. That sets up an unavoidable tension:

  • Either they compromise, and both feel only half satisfied, or
  • the wife gives way—often painfully—and yet, without even intending to, her quieter influence still pushes against the husband’s plans.

It would be ridiculous to claim that men and women would have identical feelings if they were simply educated the same way. People differ. Always have. But it’s entirely fair to say that the way the sexes are raised doesn’t just reveal those differences—it magnifies them, and makes conflict feel inevitable. As long as women are educated as they typically are, a man and a woman will rarely find real day-to-day agreement about how to live.

So many couples give up on the hope of deep everyday alignment. They stop trying to find in the person closest to them what the old phrase calls idem velle, idem nolle—wanting the same things and refusing the same things—the glue that makes any group a true community. Or, if a man manages to get that kind of unity, he often gets it in the bleakest way possible: by choosing a woman so emptied of opinions and independent desires that she has no real “yes” or “no” of her own—she’ll do whatever someone tells her.

Even that strategy doesn’t reliably work. A lack of sparkle or ambition doesn’t always equal obedience. But suppose it did. Is that really anyone’s picture of a good marriage? In that arrangement, what does the husband gain—besides a superior sort of servant, a nurse, or a mistress?

The more hopeful model is the opposite. When both people are fully formed—not “nothing,” but “something”—and when they care about each other and aren’t wildly mismatched from the start, daily life can do something quietly miraculous. Sharing routines, interests, and worries—plus the soft pressure of affection—often pulls out abilities and curiosities that were dormant. You start caring about what your partner cares about, not by pretending, but because it becomes genuinely interesting. Over time, tastes and characters can gradually grow closer:

  • partly through subtle, almost unnoticed changes in each person, but
  • even more through real enrichment—each gaining the other’s capacities in addition to their own.

We see this all the time in close friendships between people of the same sex who spend their lives together. And it could be common in marriage too—maybe the commonest outcome—if men and women weren’t typically raised so differently that a truly well-matched union becomes almost impossible. Fix that, and even if individual quirks remain, couples would usually share something more important: unity about the big aims of life.

When both partners care about serious, meaningful goals—and they support each other there—differences over smaller tastes don’t dominate. Then you get a foundation for enduring friendship, the kind that tends, over a lifetime, to make giving pleasure feel better than receiving it.

So far, this is the trouble caused by difference. But the problem becomes much worse when the difference is not just variety, but inferiority. A mismatch can actually help a relationship if it’s simply a difference in strengths. When each admires the other’s distinctive qualities and tries to learn them, the result isn’t “separate worlds,” but a deeper shared life. The contrast can create imitation, growth, and a stronger bond.

But when one partner is clearly below the other in intellect and cultivation—and isn’t actively trying, with the other’s help, to rise—the relationship starts to damage the more capable person. Strangely, it can be more damaging in a pretty happy marriage than in a miserable one, because the comfort makes the decline easier to slide into.

There’s a brutal rule here: any close relationship that isn’t improving you is slowly degrading you. The more intimate the relationship, the stronger the effect. Even a genuinely superior person tends to slip when he’s constantly “the king of his company”—the smartest, the most admired, the unquestioned authority. In the home, the husband with a wife he considers inferior becomes exactly that, day after day.

On one side, his self-satisfaction is constantly fed. On the other, he gradually absorbs the habits of mind—the emotional reactions and ways of seeing—that belong to a narrower or more ordinary intelligence. And unlike many harms, this one accelerates over time.

Why? Because men and women now share daily life far more closely than they used to. Men live more domestically. In earlier eras, a man’s pleasures and favorite pursuits were often outside the home, with other men; his wife received only a fragment of his life. But as civilization has shifted—partly because people have turned against rough entertainments and binge-drinking social life, and partly because modern morality has pushed men toward a more reciprocal sense of duty in marriage—many men now look to home for their main social life.

Meanwhile, women’s education has improved just enough to make them somewhat able to join in intellectual companionship—while still, in most cases, leaving them far below their husbands in training and breadth. The husband’s hunger for mental companionship gets “satisfied,” but in a way that teaches him nothing. Instead of being pushed to seek the company of his intellectual equals and fellow strivers, he settles into an easy domestic circle where he’s never challenged.

The result is familiar: young men with the greatest promise often stop improving soon after marriage—and once improvement stops, decline begins. If the wife doesn’t propel the husband forward, she will, without meaning to, hold him back. He stops caring about what she doesn’t care about. He first loses the desire for the kind of society that matches his higher ambitions, then ends up disliking it and avoiding it—partly because it would remind him, and everyone else, how far he’s drifted. His higher abilities—both intellectual and moral—stop being exercised. And as this shift combines with the new, often selfish pressures of family life, a few years later he becomes indistinguishable from people who never wanted anything beyond ordinary vanity and ordinary money.

What marriage can be, in the best case—two cultivated people, aligned in aims, equal in capacity but each in different ways superior, so that both get to admire, learn, lead, and be led—I won’t try to paint in detail. To anyone who has never seen it, it would sound like a romantic fantasy. But I insist, with complete conviction, that this is the only true ideal of marriage. Every opinion, custom, or institution that pulls marriage toward any other model—whatever respectable excuses it uses—is a leftover from primitive barbarism. Moral renewal won’t truly begin until the most basic social relationship is governed by equal justice, and until human beings learn to invest their deepest sympathy in a partner who is an equal in rights and in cultivation.

Up to this point, the gains from ending sex-based disqualification and subjection might look mainly social: more total thinking and acting power in the world, and better terms for how men and women live together. But we would badly understate the case if we stopped there. The most direct benefit is personal: the enormous, almost unspeakable increase in private happiness for the half of humanity that is set free—the difference between living under someone else’s will and living in rational freedom.

After food and clothing, freedom is the strongest human need. In lawless times, people want lawless freedom. As societies mature, people learn to value duty and reason, and they accept restraints that their conscience can endorse. But that doesn’t make them want freedom any less. It doesn’t make them willing to treat another person’s will as the rightful stand-in for their own moral judgement.

In fact, the societies where reason and social duty are strongest are typically the ones that defend personal liberty most fiercely: the right of each individual to steer their own conduct by their own sense of duty, within laws and customs their conscience can honestly accept.

If you want to understand how central independence is to happiness, you don’t need a philosophy textbook. Ask yourself how much you value it in your own life—and then notice how differently you judge it for other people. When someone complains that they aren’t allowed to manage their own affairs, the first impulse is often practical: “What’s the actual harm? What’s being mismanaged?” And if they can’t prove a concrete disaster, you shrug and dismiss it as whining.

But when the situation is yours, you judge differently. Even if a guardian or supervisor manages your interests competently, you still feel wronged. Being shut out of decision-making is itself the injury—so much so that the quality of the management becomes almost secondary.

Nations are the same. What citizen of a free country would accept excellent administration in exchange for surrendering self-government? Even if you believed a country could be well run by someone else’s will, wouldn’t the feeling of shaping your own destiny—under your own moral responsibility—compensate for plenty of roughness and imperfection in the details?

Hold onto that thought: women feel this just as intensely. Everything that writers from ancient historians onward have said about what free government does to the human spirit—the way it strengthens the mind, widens ambition, cultivates public spirit, broadens moral vision, and lifts the individual onto a higher moral and social platform—applies to women no less than to men.

Are those things not part of happiness? Think back to your own first taste of adulthood: the moment you stepped out from constant supervision, even loving supervision, and into responsibility. Didn’t it feel physical, like removing a weight? Like slipping out of constricting bands? Didn’t you feel more alive—more fully yourself? And do you seriously think women feel none of that?

Here’s something revealing: the pleasures and wounds of personal pride—the need to be respected as a person—dominate most men when their own case is at stake. Yet those feelings are granted the least sympathy when they show up in someone else. People treat pride as a flimsy justification for action. Maybe that’s because men often flatter their own pride with nicer names—“self-respect,” “honor,” “independence”—and so they don’t notice how powerfully it rules them.

But it rules women too. Women are trained to suppress this drive in its healthiest direction—self-direction—yet the inner force doesn’t disappear. It just comes out sideways. An energetic mind denied liberty will often reach for power instead. If it can’t govern itself, it tries to assert itself by governing others.

And when you tell human beings that they have no life of their own except what others permit, you create a strong incentive to bend others to their purposes. If liberty is hopeless but influence is possible, influence becomes the main prize. People who can’t manage their own affairs undisturbed will, if they can, compensate by interfering in the affairs of others for their own ends.

This also helps explain things people like to mock in women: an intense focus on beauty, dress, and display, and the social harms that can flow from it—wasteful luxury, unhealthy competition, and moral distortions. When a society blocks the path to freedom, it pushes people toward substitutes that look like status and control.

The love of power and the love of liberty clash in public. But in private psychology, the pattern is clear: where liberty exists, the thirst for power has less room to grow. Where liberty is denied, the craving for power can become fierce and reckless. The desire to dominate others stops being a corrupting force only when each person can live without it—which happens only when respect for personal liberty is a settled social principle.

And the unhappiness caused by being denied freedom doesn’t come only from wounded dignity. There’s another human need that matters just as much: the need to use your abilities. After illness, poverty, and guilt, few things crush the enjoyment of life more than having no worthy outlet for your active powers.

For many women, caring for a family supplies that outlet, at least while those duties last. But what about the rapidly growing number of women who never get the chance to do what they’re told is their “true vocation”? What about women whose children die, or move far away, or grow up, marry, and build households of their own?

We all recognize the case of the businessman who retires with enough money and expects to enjoy rest—only to find that, without new interests to replace the old, inactivity turns into boredom, sadness, and an early death. Yet people rarely notice the parallel tragedy: countless capable women who did what society demanded—raised children well, managed a household for decades—and then find themselves abandoned by the only work they were trained and permitted to do. They still have energy, competence, and a mind that wants to act, but nowhere to put it—unless a daughter or daughter-in-law yields her own household responsibilities so the older woman can take them over.

That is a harsh fate for old age: to have performed what the world calls your sole duty, and then be left with your strength intact and your purpose removed.

For such women—and also for those who were never given the family duty in the first place, many of whom live with the ache of blocked vocations and stunted capacities—the usual “resources” are religion and charity. But even when religion is sincere, it may be largely inward—emotion and ritual—rather than action, unless it expresses itself through charity.

Many women are naturally well suited to charitable work. But to practice charity usefully—or even harmlessly—requires education, preparation, knowledge, and the trained judgement of a skilled administrator. In fact, anyone truly capable of administering charity well would be capable of many functions of government.

And this is the recurring injustice: the few duties women are allowed to perform—above all, raising children—can’t be done at the highest level without training for broader responsibilities that society, to its own great loss, refuses to let women take on.

Let me point out a strange—and very common—move people make when the subject of women’s “disabilities” comes up. Instead of answering the argument, they turn it into a joke. Suggest that women’s practical skills and good judgment could sometimes matter in government, and the self-styled comedians instantly paint a ridiculous scene: teenage girls, or twenty-something newlyweds, plucked straight from the drawing-room and dropped—unchanged—into Parliament or the Cabinet for the world to laugh at.

But that’s not how political responsibility works for men, either. Men aren’t usually chosen at nineteen for a seat in Parliament or for serious executive duties. Basic common sense says the same would be true if women were eligible. If public trusts were opened to women, they would go to women who had prepared for them—people who either had no special calling to married life, or simply chose another use for their abilities (as many women already do, preferring one of the few respectable careers open to them over marriage). Or, even more often, they would go to older women—widows, or wives in their forties and fifties—whose experience of managing households and navigating human affairs, paired with appropriate study, could be applied on a wider stage.

In fact, across Europe, capable men have repeatedly felt—often very sharply—how useful the advice and help of clever, seasoned women can be, both in private projects and in public ones. There are even areas of administration where few men are as strong as such women. One obvious example is the detailed oversight of spending.

Still, that isn’t the main question here. The point isn’t whether society “needs” women’s services in public business. The point is what society does to women when it forbids them to use the practical talents they know they have anywhere beyond a narrow arena—an arena that, for some women, was never open in the first place, and for others stops being open.

If anything is crucial to human happiness, it’s this: people need to take real satisfaction in what they do most days. That condition is only partly met—or completely denied—for huge numbers of people. And because of that, many lives quietly fail even though, from the outside, they seem to have everything they need.

Some of those failures are unavoidable for now. Society isn’t yet smart enough to prevent every mismatch between a person and their work. A parent’s poor judgment, a young person’s inexperience, the lack of opportunities for the kind of work someone is suited to—or the abundance of opportunities for work they’re not—can trap plenty of men in lives where they do one thing grudgingly and badly, when they could have done something else well and gladly.

But for women, this isn’t just bad luck. It’s enforced—by law, and by customs that function like law.

In modern societies, some men are blocked by color, race, religion, or—if their country has been conquered—nationality. For women, sex plays that role across the board: a blunt exclusion from almost all honorable occupations, except:

  • work that others literally can’t do, or
  • work that others refuse because they consider it beneath them.

Pain like this often gets almost no sympathy, so most people don’t even realize how much unhappiness the feeling of a wasted life produces right now. And the problem will only grow. As women become more educated and cultivated, the gap widens between their ideas and abilities and the cramped space society gives them to act.

When you look honestly at the harm done to half the human race by disqualifying them, you see two layers of loss:

  • first, the loss of the most energizing and uplifting kind of personal fulfillment, and
  • second, the boredom, disappointment, and deep dissatisfaction that so often replace it.

And once you see that, one lesson stands out as urgently necessary: in fighting the unavoidable hardships of human life, people must not make those hardships worse by adding jealous and prejudiced restrictions on each other.

The fears behind those restrictions don’t prevent misery; they trade imagined dangers for real ones, often worse. And every time we restrict someone’s freedom of action—except where we are holding them responsible for actual harm they cause—we dry up, to that extent, one of the main springs of human happiness. We make the whole species poorer, in ways we can’t fully measure, in everything that makes life worth living to an individual person.