Featured Articles

Murray on Race Differences in IQ

Charles Murray’s Human Diversity contains little on the genetic basis of racial differences in average intelligence; it is clear Murray doesn’t want to be the subject of another moral panic like the one that greeted The Bell Curve. He merely mentions that it is “tough” to defend the belief that “ethnic differences in IQ are meaningless,” (206) and explains why in a long endnote (416–18). Here he points out that attributing the Black-White gap in America (or other Western countries) to “racism” predicts that Blacks would score higher in all-Black countries; in fact, scores are uniformly lower in Black Africa and Haiti than among American Blacks. If you then appeal to “the legacy of colonial racism” (416), you must explain how colonialism affects IQ. The most plausible suggestion is through parental SES. To test this hypothesis, one must adjust scores for parental SES. Murray notes that “this has been done frequently,” and the literature “consistently shows that doing so diminishes the size of the B/W difference by about a third.” In other words, two-thirds of the gap cannot be accounted for in this way. Moreover, most studies indicate that the B/W difference increases as parental SES rises; in other words, higher parental SES is associated with a rise in Black IQ, but with an even bigger rise in White IQ.

Other explanations offered for the race gap include Blacks’ relative unfamiliarity with standard English, the administering of tests by White rather than by Black teachers, or Blacks’ lack of motivation to work hard on tests which “clearly reflect White values.” In response, Murray cites the consensus statement “Intelligence: Knowns and Unknowns” published in the February 1996 issue of the APA flagship journal American Psychologist in response to the public controversy surrounding The Bell Curve. The eleven experts reported that controlled experiments have revealed no substantial contribution to the racial gap from any of these causes (although they may play a role in particular cases). The statement also notes the high predictive value of tests for academic performance.

Since that statement was issued, experimental evidence has been produced indicating that the racial gap is “effectively eliminated” when Black and White students are “tested on the basis of newly learned information.” The difficulty here is that such tests inherently measure short-term memory as much as, or more than, IQ; the gap disappears because the test is no longer so g-loaded.

Murray acknowledges the possibility of arguing that the role of bias is too broad to be captured by any assessment of language and predictive ability. He calls this “the ‘background radiation’ theory of racism’s effect on IQ.” (418) This perspective would make “racism” an occult but omnipresent reality not unlike what Africans call “bad juju.” As Murray says, such a perspective cannot be refuted with data, since it conceives all data as vitiated a priori by racism; in other words, it is an unfalsifiable metaphysical commitment.

Recent Advances in the Study of Human Differences, Part 3 of 3: Behavior Genetics and Social Class

More is known about the influence of genes on social class than about race or sex; indeed, Murray writes that “the basics have been known for decades.” (209) The technical literature treats socioeconomic status as the sum of heredity (genetic influence) and environmental influence. The latter component can be further divided into shared and nonshared environment. The nonshared environment includes things not shared by people in the same family, like birth order, differential parental treatment, extrafamilial networks, accidents and illnesses. Studying twins, especially monozygotic twins, is a useful technique for reducing the effects of the nonshared environment; usually such twins attend the same schools and have similar social circles. In practice, measurement error must be allowed for as well: e.g., the number of books in a child’s home is one factor sometimes counted as part of its environment, but it is clearly a very imperfect proxy for how much intellectual stimulation the child actually receives in the home.

Heritability, as it is understood in the technical literature, is “a ratio calculated as the variance attributable to genes divided by the total variance in phenotype.” (210) It is a property of human groups, not individuals:

Suppose that genes explain 70 percent of a population’s variance in height. You can use this information to conclude that “genes probably have a lot to do with how tall Joe is,” but it does not mean that “genes explain 70 percent of how tall Joe is.” (211)

Heritability is not a fixed number for any particular trait. For example, the heritability of IQ rises with age, a result many find counterintuitive. A child’s IQ may be temporarily boosted, e.g., by a preschool educational program, but over time the effect fades and the full effect of genes becomes increasingly apparent.

Heritability also varies by population. For example, the heritability of SAT scores at an elite high school, where the students were brought up in similarly excellent environments—educated, involved parents and plenty of money—will be higher than at an ordinary school. Such a sample contains less variation due to the environment, so the remaining variation is more likely due to variation in genes.

Heritability is a ratio—the ratio of the variation due to variation in genes to the total variation in the variable being measured. A basic formula is

H = Vg / (Vg + Ve)

where H is heritability, Vg is variation due to people having different genes, and Ve is variation due to people experiencing different environments (shared and nonshared). The narrower range of environments experienced by students at the elite school (i.e., a smaller Ve) means a smaller denominator in that ratio; since Vg remains the same in both numerator and denominator, the ratio as a whole is greater. At the extreme, if there were no variation due to the environment (i.e., Ve = 0, as would be the case if all the sample subjects were reared in exactly the same environment), the ratio would equal 1: all of the variation would be due to people having different genes, and H = 1.
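The arithmetic can be made concrete with a brief sketch. The variance figures below are invented purely for illustration (they are not from Murray’s text); the point is that shrinking environmental variance raises H even though genetic variance stays fixed.

```python
# Illustrative sketch: heritability as H = Vg / (Vg + Ve).
# The variance numbers are hypothetical, chosen only to show
# how a narrower range of environments (smaller Ve) raises H.

def heritability(vg: float, ve: float) -> float:
    """Return H = Vg / (Vg + Ve). If Ve == 0, all variation is genetic (H = 1)."""
    return vg / (vg + ve)

vg = 60.0            # genetic variance, the same in both samples

ve_ordinary = 40.0   # ordinary school: wide range of home environments
ve_elite = 10.0      # elite school: similarly favorable environments

h_ordinary = heritability(vg, ve_ordinary)  # 60 / 100 = 0.60
h_elite = heritability(vg, ve_elite)        # 60 / 70  ≈ 0.86
h_uniform = heritability(vg, 0.0)           # Ve = 0, so H = 1.0

print(f"Ordinary school:         H = {h_ordinary:.2f}")
print(f"Elite school:            H = {h_elite:.2f}")
print(f"Identical environments:  H = {h_uniform:.2f}")
```

Note that nothing about the students’ genes changes between the two samples; only the denominator of the ratio does.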

As a result, the more environmental influences are equalized, the higher heritability becomes. As Murray explains:

It is a statistical necessity: the phenotype is the result of genes and environment. In a perfect world where everyone had completely full opportunity to realize their talents, heritability of those talents would converge on 100 percent because the environment relevant to those talents would no longer vary. (212)

As early as 1976, two researchers noted that

a consistent—though perplexing—pattern is emerging from the data. Environment carries substantial weight in determining personality—it appears to account for at least half the variance—but that environment is one for which twin pairs are correlated close to zero. We seem to see environmental effects that operate almost randomly [resulting in nonshared environmental influences]. (219)

As another researcher put it, “theories of socialization had assumed that children’s environments are doled out on a family-by-family basis. In contrast, the point of nonshared environments is that environments are doled out on a child-by-child basis.” (227) For example, within a family, two siblings may have different experiences at school, or they may seek out or elicit different environments (say, music vs. sports) because of genetic differences. Accordingly, Murray’s eighth proposition states: “The shared environment usually plays a minor role in explaining personality, abilities and social behavior.”

In 2015, a group of seven scholars published a meta-analysis of nearly every twin study carried out between 1958 and 2012; it involved 2748 publications, 14,558,903 twin pairs, and explored 17,804 traits. From this enormous assemblage of data, Murray extracted the evidence on thirty traits relevant to personality, abilities and social behavior. Only for two of them was the contribution of the shared environment greater than one third: 36 percent for ‘basic personal interactions’ and 34 percent for ‘problems related to upbringing.’ “Yes, these data seem to say, you can have some effect on your kids’ manners and you can also cause problems.” (223) For all 28 other traits, shared environment accounts for no more than 26 percent of variance. For such important traits as temperament and personality functions, work and employment, intimate relationships, and family relationships, shared environment contributes no more than 6 percent.

One must add the caveat that an extremely bad home environment can make a significant difference: i.e., truly awful parenting which involves severe deprivation and abuse can damage children permanently.

Wealthy parents can give their children a high standard of living. Often, they can get them out of youthful scrapes or into desirable first jobs. But, says Murray, “it’s not so easy for parental influence to get the child promoted. The more competitive the industry and the more cognitively demanding the job, the less influence family wealth has.” (221) And, of course, wealthy and high-status parents pass on to their children the genetic factors which partly explain their own wealth and success. What they cannot do is use their wealth or status to make their “children more than trivially ‘better’ than they would otherwise have been where ‘better’ is defined in terms of personality, abilities, or social behavior.” (221)

Murray’s ninth proposition states that “class structure is importantly based on differences in abilities that have a substantial genetic component.” The basic reasoning behind this was set forth by psychologist Richard Herrnstein in 1973: 1) if differences in mental abilities are inherited, and 2) if success requires those abilities, and 3) if earnings and prestige depend upon success, then 4) social standing (which reflects earnings and prestige) will be based to some extent on inherited differences among people. This argument was set forth with 800 pages of detailed empirical support in The Bell Curve, a book cowritten by Herrnstein and Murray in 1994 and subtitled ‘Intelligence and Class Structure in American Life.’

In the present work, Murray immediately follows up his ninth proposition with the following caveat:

The bulk of the variance in success in life is unexplained by either nature or nurture. Researchers are lucky if they explain half of the variance in educational attainment with measures of abilities and socioeconomic background. They’re lucky if they can explain even a quarter of the variance in earned income with such measures. The takeaway for thinking about our futures as individuals is that we do not live in a deterministic world ruled by either genes or social background, let alone by race or gender. But Proposition #9 is about social classes, not individuals. (228–9)

Time and chance happen to us all, but they do not push us all in the same direction; spread over an entire society, the effect of genes will inevitably tell.

The general factor of intelligence, known as g and measured by IQ tests, is not only the most important heritable trait contributing to success, but far more important than any other individual trait. Recent confirmation comes from a study of 6653 UK twins which correlated scores on the British school-leaving exam known as the GCSE with nine heritable traits. IQ alone statistically explained 34 percent of variation, while the other eight combined explained just 28 percent.

In the US, criticism of testing has focused on the high correlation between socio-economic status (SES) and performance on college admissions tests such as the SAT. The big question concerns the direction of causation: is SES causing high scores, or is inherited intelligence what put these families in the high-SES category?

An exhaustive analysis of this question, along with a comprehensive review of previous studies, was published in 2009 by a team of psychologists at the University of Minnesota. They found that controlling for admission test scores reduced the correlation between parental SES and college grades from +.22 to –.01. On the other hand, controlling for measures of parental SES reduced the correlation between admission test scores and grades only from +.53 to +.50. This would seem to leave little room for argument.
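The “controlling for” computation here is a partial correlation, and the standard formula makes it easy to see why the asymmetry matters. In the sketch below, the +.22 and +.53 zero-order correlations come from the study discussed above; the SES–test correlation of .42 is my own assumed value, chosen only to illustrate the formula, not a figure reported by the Minnesota team.

```python
# Sketch of a partial correlation: the correlation of x and y
# with a third variable z held constant.
from math import sqrt

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """Correlation of x and y after partialling out z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_ses_grades = 0.22    # parental SES vs. college grades (zero-order)
r_test_grades = 0.53   # admission test score vs. grades (zero-order)
r_ses_test = 0.42      # ASSUMED SES vs. test score correlation (illustrative)

# SES-grades correlation after controlling for test scores:
controlled = partial_corr(r_ses_grades, r_ses_test, r_test_grades)
print(f"SES-grades, controlling for test scores: {controlled:+.2f}")
```

With a plausible SES–test correlation plugged in, the SES–grades association collapses to roughly zero, matching the pattern the reviewers report.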

Evidence for the influence of IQ and parental SES on success in later life is less clear, but Murray cites thirteen measures based on six databases and in only two cases is the correlation coefficient for SES higher than that for IQ.

The final proposition states that “outside interventions are inherently constrained in the effects they can have on personality, abilities, and social behavior.” In practice, ‘outside interventions’ usually refer to such practices as counseling, tutoring, mentoring, after-school activities and job training. The reasoning behind the proposition is simple: 1) if the shared environment explains little of the variance in cognitive repertoires (as stated in proposition eight); and 2) if the only environmental factors that can be affected by outside interventions are part of the shared environment; then 3) outside interventions are inherently constrained in the effects they can have on cognitive repertoires. In other words, “it is not within our power to do much to change personalities or abilities or social behaviors by design on a large scale.”

The truth of this final proposition mostly follows from what has gone before, so rather than adducing evidence directly in its favor, Murray devotes his discussion to showing why five major objections fail. The first three dispute the first premise above, asserting that it is 1) wrong for some important outcomes, 2) wrong for the early stages of life, or 3) wrong when it comes to changing self-concept.

We saw that shared environment accounted for more than one third of variation for only two out of thirty traits related to personality, abilities and social behavior discussed in a thorough 2015 meta-analysis of twin studies. But there is no a priori cutoff for how much variation a factor must explain to be considered substantial. For six other traits, the shared environment accounted for over 20 percent of variation, including such important items as educational attainment (25 percent) and disorders due to multiple drug use (26 percent). If outside interventions could have an effect on the shared environment factor contributing to these traits, might they not be worth the effort?

The best-case scenario for improving shared environment is adoption:

In effect, adoption at birth to competent parents gives us a glimpse of what would happen if an outside intervention could magically be successful at changing a wide variety of parenting behaviors from bad to good … . But adoption is as good as it gets. (246)

If the shared environment explains just 26 percent of the variance, the outside intervention has to be big—boarding school, for example, or moving that family out of the neighborhood, or adoption into a new family. (242)

Short of establishing a police state bent on socializing all children in the same manner (as occurred in ancient Sparta and was aimed at in the USSR), the sorts of outside interventions that can be applied to large numbers of people later in life generally amount to no more than a few hours a week, and must compete against all sorts of other past and present influences. Social agencies simply do not have the means to apply more radical remedies on a large scale. And of course, even such drastic equalization would merely reduce Ve in our ratio, so that a greater percentage of the remaining variation would be due to genetic variation. Getting rid of variation is a difficult task indeed.

It is sometimes suggested that outside interventions can work in the early stages of life before habits have set and the child is more malleable. Murray acknowledges that

if interventions are ever going to work, they’re going to work in infancy and early childhood. But it’s one thing to believe that; it’s another to confront the empirical findings about the difficulties and constraints that have attended a half century of attempts to intervene early in life. (246)

When pre-school programs for disadvantaged youth were instituted in the 1960s, they produced a large effect: 35 percent of a standard deviation, nearly equivalent to half the Black-White kindergarten achievement gap. But subsequent experience showed this effect faded at a rate of 3 percent of a standard deviation per year. After 1980, even the initial effects of such interventions had shrunk to 16 percent, a finding which probably reflected improved conditions for children in the control groups.

In 1998, Congress mandated a large and rigorously designed evaluation of Head Start; the report was published in 2010.

After one academic year in the program, effect sizes in six language and literacy areas ranged from .09 to .31 [i.e., 9–31 percent of a standard deviation], but there was negligible impact on math skills or on children’s attention, antisocial [behavior] or mental health problems. The limited effects at exit disappeared within two years. “By the end of the first grade, both achievement and behavioral ratings of treatment group children were essentially similar to control-group children.” (251)

It is sometimes asserted that outside interventions can have a positive effect on a student’s “self-concept.” The original version of this theory led to the self-esteem movement, comprehensively debunked in the early 2000s. More recently, a somewhat more plausible variant has been put forward.

Researchers administered a Standard Progressive Matrices test to fifth graders. One group of children was praised for their intelligence, while another was praised for the effort they put into the test.

Children praised for being intelligent subsequently displayed less task persistence and less task enjoyment. They became more concerned about getting a good score than about learning new things. They became protective of their image as “smart” and reluctant to jeopardize it. (256)

As Murray wryly notes, this finding “was especially jarring for a society in which many upper-middle-class parents incessantly tell their children how smart they are.” (256)

These findings have spawned the “growth mindset movement.” Advocates believe an emphasis on intelligence is harmful because it teaches children that their results follow from a fixed trait. They strive to convey to students the efficacy of effort, teaching them to interpret failure as a mere stepping stone to later success.

Common sense suggests such an approach could be beneficial for at least some students, but empirical assessments of growth mindset interventions have yet to reveal large effect sizes. It is also difficult to disentangle the effects of the interventions from pre-existing personality characteristics such as openness and conscientiousness, as well as from cognitive ability.

A fourth objection to the constraints on outside interventions questions whether nonshared environment really cannot be affected by outside interventions.

The best way to study the nonshared environment is by looking at monozygotic twins reared together: they have the same genes and the same home environment, so differences must be due to the nonshared environment.

But it has been found that those differences are not stable over time. Cognitive differences last no more than a few years and personality differences change even more quickly. No identical twin differences are stable over many years. The necessary implication: the nonshared environmental factors are not stable, but more like random noise. (259–260)

Effective interventions, however, could only be based on stable patterns.

The fifth objection is a recently fashionable appeal to “epigenetics.” This is a relatively new field of study dealing with auxiliary mechanisms which document that environmental events can switch off the expression of some genes (by making them less accessible to transcription machinery) or in some cases switch them on or modulate the intensity of their expression. This is something which goes on in the human body all the time, but there does not seem to be any evidence that we will soon be able to control the process for our own ends. Murray warns that

with rare exceptions, the mainstream media’s reporting on the science behind epigenetics bears little resemblance to what’s actually been discovered. (261) … As far as I can tell, no serious epigeneticist is prepared to defend the notion that we are on the verge of learning how to turn genes on and off and thereby alter behavioral traits in disadvantaged children (or anyone else). (268)

Murray points out that his final proposition, viz., that “outside interventions are inherently constrained in the effects they can have on personality, abilities, and social behavior,” may not remain true forever.

Who knows what role future drugs might play in enhancing learning and positively affecting personality traits and social behavior. At some point, the promise of … genetic editing will be realized, and all bets about the ability to change people by design in substantial numbers will be outdated. (269–270)

But we are definitely not there yet.

Recent Advances in the Study of Human Differences, Part 2 of 3: The Biological Basis of Race

Go to Part 1, on gender differences.

Social constructivist orthodoxy has been more successful at shaping popular perceptions of race than of sex. As Murray notes, the idea that “gender is merely a social construct” is widely perceived as too extreme, but many of our contemporaries labor under the mistaken impression that “significant racial differences in cognitive repertoires are known to be scientifically impossible.” (133)

Nevertheless, the author’s discussion of race generously begins with a section entitled “What the Orthodoxy Gets Right.” Richard Lewontin was correct that there is more variation within the major races than between them. Stephen Jay Gould was correct to criticize the theory of polygenesis, viz., that humans evolved independently in Europe, Asia and Africa for hundreds of thousands of years. No one today claims that races can be arranged in an unequivocal hierarchy from “best” to “worst.” But all this merely means that “we have before us an exercise in modifying our understanding of race, not resurrecting nineteenth-century conceptions.” (135)

In 2005, newly acquired data from the sequenced human genome confirmed that Homo sapiens originated in Africa; researchers found that “no origin outside Africa had the explanatory power of an origin anywhere within Africa.” The circumstances of human dispersal from Africa are less clear. Some Homo sapiens seem to have been in the Mideast by about 200,000 years ago. It was long assumed there must have been many distinct migrations out of Africa, but evidence has recently emerged that today’s non-Africans are mostly descended from a single small group:

In 2016, a new whole-genome study based on 300 genomes from 142 diverse populations provided evidence for a one-wave scenario, indicating that just one band of anatomically modern emigrants from Africa has descendants among today’s humans. But, as usual, there were complications. The genomes of Papuans gave signs that about 2 percent of their genomes might have come from an earlier population. That’s not much, but it suggests something more complicated than a single band of emigrants. (147)

Some of the Homo sapiens of Eurasia mated with Neanderthals and Denisovans already there, acquiring useful genes in the process. As the population grew, bands would occasionally split off into new territory, resulting in stepwise increases in genetic drift and decreased genetic diversity; this is called “serial founder effect.” The end result was a species in which differences in gene frequencies increased with distance and were subject to new evolutionary pressures across a wide variety of local environments.

As we know, there is no unequivocal number of human races. In such situations, it is helpful to run a statistical cluster analysis, in which a software program divides members of a sample first into two clusters, then three, increasing the number for “as long as the clusters produced continue to be informative.” (149)
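The procedure can be sketched in miniature. The toy example below is my own illustration, not code from the studies Murray discusses (those used far larger datasets and dedicated software such as the STRUCTURE program): a bare-bones k-means loop clusters synthetic one-dimensional “variant frequency” values drawn from two made-up populations, alternately assigning each individual to the nearest cluster center and recomputing the centers.

```python
# Toy k-means sketch with invented data, to show the kind of
# assign-then-update loop a cluster analysis performs.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins the nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two synthetic "populations" with different mean variant frequencies.
rng = random.Random(42)
pop_a = [rng.gauss(0.2, 0.03) for _ in range(100)]
pop_b = [rng.gauss(0.8, 0.03) for _ in range(100)]

centers = kmeans(pop_a + pop_b, k=2)
print(centers)  # two centers, one near each population's mean
```

Raising K in such an analysis simply allows finer subdivisions of the sample; the analyst then judges, as the passage says, whether the additional clusters remain informative.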

We now turn to the evidence for Murray’s proposition five, viz., “Human populations are genetically distinctive in ways that correspond to self-identified race and ethnicity.”

In 2002, a team of scholars associated with the Human Genome Diversity Project ran a cluster analysis on a sample of 1056 persons from 52 distinct populations, using 377 genetic variants. They found the cleanest set of clusters was produced when K—the number of clusters—was set to five, and that these clusters corresponded to the major continents: Africa, Europe, East Asia, the Americas and Oceania (the Pleistocene continent which included Australia, New Guinea and Tasmania). In general, genetic discontinuities are clearest where they correspond to geographic barriers.

Six years later, following the complete sequencing of the human genome, another group of scholars ran a cluster analysis on the same sample, but taking account of 642,690 variants rather than 377. Murray summarizes the results:

At K = 2, two sets of the 51 populations had virtually no overlap: populations in sub-Saharan Africa versus populations in East Asia plus a few in the Americas. All the other populations were mixtures of the two clusters.

At K=3, the people who showed virtually no admixture across clusters consisted of individuals from sub-Saharan Africa, today’s Europe and Middle East, and the East Asian-Americas group.

At K=4 the Amerindians split off to form a separate cluster.

At K=5, the Oceania populations [=Australian Aborigines, New Guineans, Melanesians, and Micronesians] split off.

At K=6, the Central and South Asians split off.

At K=7, the configuration that the authors assessed as the most informative, those in the Mideast split off from the Europeans. (151)

Nota Bene: At no point does any increase in K fundamentally change the pattern of clusters; each increase splits one of the clusters already obtained, merely adding detail.

Murray draws our attention especially to the last two steps:

As in the 2002 study, the first five clusters corresponded to the five continental ancestral populations… [including] a cluster that corresponded to the classic definition of Caucasian—an odd agglomeration of peoples from Europe, North Africa, the Mideast, South Asia and parts of Central Asia. There had been a reason why physical anthropologists had once combined these disparate populations—all of them have morphological features in common—but it never made much sense to people who weren’t physical anthropologists. With K = 7, one of the new clusters split off the peoples of the Mideast and North Africa and the other split off the people of Central and South Asia—precisely the groups that had always been visibly distinctive from Europeans and from each other in the Caucasian agglomeration. (152)

I am reminded of Steve Sailer’s observation that Luigi Cavalli-Sforza’s laboriously produced map of the world’s “genetic population groups” resembles what Strom Thurmond might have sketched out with a box of crayons. In short, the common man’s perceptions of the races of mankind turn out to be fairly well supported by cutting-edge genetic research.

Recent studies have focused on more fine-grained distinctions. For example, a 2016 review article showed

what happens when several European subpopulations are plotted with different numbers of sites. When only 100 or even 1000 sites are used the subpopulations are indistinguishable. At 10,000 sites, some separation is visible. At 100,000 sites, Italians, Spanish, Germans, and Romanians are all reasonably distinct, with the British, Dutch, Swedish, and Irish fuzzily distinct. (154)

The upshot of these advances is that a geneticist can now say, in effect:

Give me a large random sample of [genetic variants] in the human genome, and I will use a computer algorithm, blind to any other information about the subjects, that matches those subjects closely not just to their continental ancestral populations, but, if the sample is large enough, to subpopulations within continents that correspond to ethnicities.

As the author notes, “if race and ethnicity were nothing but social constructs, that would be impossible.” (156)

Murray’s proposition six states that “Evolutionary selection pressure since humans left Africa has been extensive and mostly local.”

Indeed, the exodus from Africa accelerated evolution: not because mutations became any more common, but because new pressures were applied to pre-existing gene frequencies. For instance, the switch to a cooler Eurasian climate may have made certain previously neutral gene variants valuable, so that they spread within that local population (without necessarily reaching fixity). The transition to agriculture some ten thousand years ago led to a further series of drastic changes in human environments, generating intense new selective pressures and further accelerating evolution.

Before the sequencing of the human genome, researchers had to make educated guesses about where to look for genes recently subject to selection. Today, they can search the entire genome systematically in what are called “genome-wide association studies.” Even tiny changes in frequency at individual loci can have large cumulative effects over the whole genome. Geneticists have also developed methods for dating adaptations less than about 30,000 years old.

A 2016 review article summarized the results of 73 studies which have revealed recent adaptations affecting cell function, connective tissue development, the brain and central nervous system, vision, hearing, olfactory receptors, skin pigmentation, immunity and metabolism.

Such adaptations tend to be local, meaning (at a minimum) confined to particular continental races. A 2009 German study found that

68 percent of the regions [of the human genome] under selection were under selection for a single [continental] population. Another 20 percent were under selection in just two of the six. Only 1 percent were under selection in all six populations. (179)

This research is still at an early stage, and we will not know for a long time just how much recent evolution there has been in various geographical regions. But Stephen Jay Gould’s claim that evolution since humans left Africa cannot have been extensive, while perhaps defensible when he made it in the 1980s, is now known to be mistaken.

Murray’s seventh proposition states that “Continental population differences in variants associated with personality, abilities, and social behavior are common.” Here is the evidence.

Research into continental differences in gene variants of all types is becoming more extensive in response to a growing awareness that diseases can affect different racial groups differently. This means that existing genomic data, collected mainly in Europe and the United States, may not be appropriate for use in medical research concerned with other parts of the world. Articles filled with fustian about “institutional racism” in medical science have even begun appearing in the popular press (if geneticists weren’t “racist,” they would presumably have been just as quick to collect samples from the deserts of Central Asia and the African jungle as in their own backyards).

Of course, this kerfuffle is having the positive result that genomic databases drawn from a wider array of racial groups are now being compiled. The goal is improved medical care around the world. But a side effect will be massive amounts of new data about racial differences in gene frequencies, many of which will correlate statistically with differences in personality, abilities and social behavior. Human Diversity provides information for 22 such traits for which evidence is already available (192-193). We cannot assume that all these differences have been produced by natural selection: founder effects and genetic drift probably made a larger contribution to some of them. But they are real racial differences in gene frequencies, whatever their cause.

As yet, the task of assembling the genetic story for specific phenotypic traits has barely begun, but Murray assures us that “progress is accelerating nonlinearly” (201). By 2030, geneticists will be able to predict personality characteristics, abilities, and social behavior on the basis of genetic information alone, amounting to “an ironclad, you-can’t-get-around-this-one refutation” (300) of social constructivism. The orthodox may still resist the evidence, but they will eventually succumb to the ridicule.

Recent Advances in the Study of Human Differences, Part 1 of 3: Gender Differences

Human Diversity:
The Biology of Gender, Race, and Class
by Charles Murray
New York: Grand Central Publishing, 2020

“We are in the midst of a uniquely exciting period of discoveries in genetics and neuroscience,” (6) notes Charles Murray near the beginning of his latest book, Human Diversity, yet it remains something of a secret. Knowledgeable specialists avoid publicizing the discoveries, frequently claiming to be afraid the information will be misinterpreted and misused (i.e., by “white supremacists” and such). What they are really afraid of is retaliation by an aggressive minority of their colleagues who enforce a scientifically unsupported orthodoxy that Murray sums up in three assertions:

1) Gender is a social construct. Physiological sex differences associated with childbearing have been used to create artificial gender roles that are unjustified by inborn characteristics of personality, abilities, or social behavior.

2) Race is a social construct. The concept of race has arisen from cosmetic differences in appearance that are not accompanied by inborn differences in personality, abilities, or social behavior.

3) Class is a function of privilege. People have historically been sorted into classes by political, economic, and cultural institutions that privilege heterosexual white males and oppress everyone else, with genes and human nature playing a trivial role if any. People can be resorted in a socially just way by changing these institutions. (3)

This orthodoxy has been on the defensive for many years now, and Murray is optimistic it will collapse within the coming decade. Plenty of individual believers will remain, but collectively they will lose their ability to enforce their beliefs through intimidation.

Human Diversity is a report on the revolution in our understanding of race, sex and class differences over the last thirty years. The author draws on genetic advances made possible by the sequencing of the human genome and also on neuroscience, but avoids extensive appeals to evolutionary psychology: “I decided that incorporating its insights would make it too easy for critics to attack the explanation and ignore the empirical reality.” (7)

This is part of a strategy “to stick to the low-hanging fruit” of findings “that have broad acceptance within their disciplines,” even if it leaves expert readers “yawning with boredom.” (6) Though soft-spoken by nature, Murray clearly hopes to strike an unanswerable blow against the Lysenkoist mafia whose power he has experienced personally. He conveniently summarizes his basic message in ten propositions for which “the clamor of genuine scientific dispute has abated,” (7) and there is little room left for empirically informed dispute. The first four propositions deal with sex differences, the next three with race, and the last three with class. Each proposition is given a chapter of its own.

The first proposition states that sex differences in personality are consistent worldwide and tend to widen in more gender-egalitarian cultures. Few will be surprised to find the latest studies confirming that women tend toward the warm, sympathetic, accommodating, altruistic and sociable end of the personality scale, with men more inclined to be reserved, utilitarian, unsentimental, dispassionate and solitary. Such differences emerge early in life and are found around the world in radically different cultural environments.

A more counterintuitive finding is that such sex differences in personality widen rather than diminish in more egalitarian countries: this was the consistent result of five extensive international studies published between 2001 and 2018. As Murray notes, social constructivists are not the only ones surprised by this: “I know of no ideological perspective that would have predicted greater sex differences in personality in Scandinavia than in Africa or Asia.” He offers the conjecture that stronger enforcement of social norms in more traditional societies may suppress the expression of inborn personality traits, while in the modern West the sexes are “freer to do what comes naturally.” (43)

The second proposition states that “on average, females worldwide have advantages in verbal ability and social cognition while males have advantages in visuospatial abilities and the extremes of mathematical ability.” Social cognition refers to the ability to infer mental states from external cues and predict other people’s intentions and reactions. Women’s superior verbal skills are a consistent finding of international student assessment tests. Women also have better sensory perception and fine motor skills, and are better than men at remembering the minutiae (peripheral detail) of events.

Men are better at remembering the gist, and have markedly superior visuospatial skills. There has also long existed a widespread perception that men are better at math than women. Recent evidence makes some qualification necessary. Within the normal ability range, the male advantage is not statistically significant. It is clearer at the high end, but even here diminished greatly during the 1980s. Among the top one percent of one percent of human mathematical ability, there were 13 boys for every girl in the 1970s; by the early 1990s, the ratio had sunk to 3 to 1, where it has remained stable ever since.
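The arithmetic behind such tail ratios is worth making explicit: even when group differences in the middle of a distribution are statistically negligible, small shifts in mean or variance produce large disparities far out in the tails. The sketch below assumes two normal distributions; the 0.1 SD mean shift and 10 percent variance difference are illustrative parameters of my own, not figures from Murray.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF, computed from the stdlib error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def tail_ratio(threshold: float, mean_shift: float, sd_ratio: float) -> float:
    """Ratio of tail mass above `threshold` for group A, distributed
    N(mean_shift, sd_ratio**2), relative to group B, a standard normal."""
    tail_a = 1.0 - normal_cdf((threshold - mean_shift) / sd_ratio)
    tail_b = 1.0 - normal_cdf(threshold)
    return tail_a / tail_b

# z = 3.72 is roughly the 1-in-10,000 level. A modest 0.1 SD mean shift
# combined with 10 percent greater spread yields about a 5-to-1 tail ratio.
print(tail_ratio(3.72, 0.1, 1.1))
```

The point is qualitative: a 3:1 or even 13:1 ratio at the extreme tail is mathematically compatible with near-identical distributions within the normal ability range.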

Even where men and women solve problems equally well, they may do so in different ways. For example, women tend to navigate by identifying and remembering landmarks, while men are more likely to construct mental maps. Women more often use verbal forms of logic to solve math problems, whereas men tend to use symbolic or spatial reasoning.

Among the most cherished of feminist beliefs is that female under-representation in STEM fields (science, technology, engineering and math) reflects differences in socialization—differences that would disappear in a gender-neutral society. To test this hypothesis, Murray examines the preferences and choices of a cohort chosen for a Johns Hopkins Study of Mathematically Precocious Youth (SMPY). Focusing on such a sample allows him to ignore sex differences in abilities: all these people were qualified to pursue any undergraduate major they liked.

In the upper-middle-class schools and neighborhoods where most of the SMPY girls grew up, courses were filled with inspirational stories about women scientists, political leaders, artists, and authors. High schools were putting boys and girls in the same gym classes, and high school counselors were urging female students to go into male-dominated careers. When they reached college age in 1982–5, they all knew that the most famous universities in the nation were eager to add them to their student bodies and even more eager for them to populate their majors in science, technology, engineering, and math. On campuses, young women were hearing faculty and their fellow students urging them to forgo marriage and childbearing in favor of a career. (72)

It would not be easy to find a hypothesis which has been given a fairer or larger-scale trial than the explanation of female underrepresentation in STEM fields by sex-specific socialization.

The SMPY women were, indeed, about twice as likely as women in the general population to major in STEM subjects—but so were the men compared to men in the general population, so that the sex ratio was about the same. Twice as many of the women got degrees in the social sciences, business, and the humanities as did the men. Those women who did major in STEM subjects inclined more to the life sciences than to math or the physical sciences.

An important reason for the persistent underrepresentation of women in STEM fields even among the mathematically gifted elite may be that many of these women also enjoyed their sex’s natural advantage in verbal intelligence, giving them “an attractive array of alternatives to STEM” (78)—whereas the men’s intelligence was more likely to skew heavily toward mathematics.

In 2012–13 a team of Vanderbilt psychologists interviewed these SMPY men and women, by then in their late forties, about their work preferences. The women indicated a much greater willingness to consider part-time careers and a greater unwillingness to work more than forty hours a week. They sought flexibility in their work schedule and placed a high value on such things as “having strong friendships.” Murray notes that since these women were in their late forties, their preference for shorter hours and a flexible schedule was not likely to be due to the presence of small children at home.

The men expressed a strong preference for a full-time career with a high salary, and agreed with such statements as “The prospect of receiving criticism from others does not inhibit me from expressing my thoughts” and “I believe society should invest in my ideas because they are more important than those of other people in my discipline.” They viewed “being able to take risks on my job” as a positive good, and reported that they enjoyed working with computers, tools and machines.

In short, the stated preferences of these highly talented men and women who had come of age at the height of the feminist educational and career revolution were utterly sex-typical. Yet their widely differing preferences “were not accompanied by corresponding sex differences in how they viewed their career accomplishments and close relationships, or in their positive outlook on life,” according to the Vanderbilt researchers. (76) Forcing statistically equal life outcomes on the women in this sample might have been possible under totalitarian conditions, but would almost certainly have left them less happy.

The patterns observed in this cohort of unusually talented men and women hold for people in general. Consider, for example, the RIASEC psychological assessment battery widely used for career guidance: of the six dimensions of preferences and abilities it measures, two reveal large and consistent sex differences. Those who score highest on the trait labeled “Realistic” enjoy working with tools, instruments, and mechanical or electrical equipment, as well as activities such as building, repairing machinery, and raising crops or animals. Men are higher on this measure by 84 percent of a standard deviation, indicating a robust average difference. Those who score highest on the trait labeled “Social” enjoy helping, enlightening or serving others through activities such as teaching, counseling, working in service-oriented organizations and engaging in social and political studies. Women are higher on this measure by 68 percent of a standard deviation, also quite robust. And these differences do not just show up in career assessment tests, but are closely mirrored by the actual jobs men and women go on to hold.
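To give these standard-deviation figures some intuition: under the common (and here merely assumed) model of two equal-variance normal distributions whose means differ by d standard deviations, the probability that a randomly chosen member of the higher-scoring group outscores a randomly chosen member of the other is Φ(d/√2). A minimal sketch:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the stdlib error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def prob_superiority(d: float) -> float:
    """Common-language effect size: P(a random draw from the higher-scoring
    group exceeds a random draw from the other), for two equal-SD normal
    distributions whose means differ by d standard deviations."""
    return normal_cdf(d / math.sqrt(2))

# d = 0.84 ("Realistic", men higher); d = 0.68 ("Social", women higher)
for d in (0.84, 0.68):
    print(f"d = {d}: {prob_superiority(d):.2f}")
```

For d = 0.84 this probability comes to roughly 0.72, and for d = 0.68 roughly 0.68: large, consistent average differences that nonetheless leave substantial overlap between the sexes.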

All of this evidence goes to confirm an observation made in 1911 by Edward Thorndike, a founder of the discipline of educational psychology, that the greatest cognitive difference between men and women lies “in the relative strength of their interest in things and their mechanisms (stronger in men) and the interest in persons and their feelings (stronger in women).” (19–20) This provided the inspiration for Murray’s third proposition: “on average, women worldwide are more attracted to vocations centered on people and men to vocations centered on things.” He notes that in the late 1980s, observers could have been forgiven for predicting that the career preferences of men and women

would converge within a few decades. From 1970 through the mid-1980s, the percentage of women in Things jobs had risen and the male-female ratio had plunged. If those [trends] had been sustained, the percentages of men and women in Things jobs would have intersected around 2001. But convergence was already slowing by the late 1980s and had effectively stalled by 1990. (87)

For example, between 1971 and 1986, the percentage of women’s bachelor of science degrees in the most things-oriented STEM fields—physics, chemistry, earth sciences, computer sciences, mathematics and engineering—more than doubled, but from a base of only 4 percent to a high of 10 percent. By 1992 it had declined again to 6 percent, where it has remained ever since, give or take a percentage point. The author concludes:

It looks as if women were indeed artificially constrained from moving into a variety of Things occupations as of 1970, that those constraints were largely removed, and that equilibrium was reached around 30 years ago. (88)

The fourth proposition states that “many sex differences in the brain are coordinate with sex differences in personality, abilities and behavior.” As neurobiological researcher Larry Cahill wrote in 2017: “The past 15 to 20 years witnessed an explosion of research documenting sex influences at all levels of brain function. So overpowering is the wave of research that the standard ways of dismissing sex influences have all been swept away.” Some of the most obvious sex differences in temperament are due to sex hormones, of which testosterone and estrogen are the best known (there are many others). Both men and women produce both of these hormones, but men produce much more testosterone and women much more estrogen.

Studies have demonstrated that a single dose of testosterone administered to women

significantly altered connectivity of the network in the brain that underlies the integration and selection of sensory information during empathic behavior. This finding suggests a neural mechanism by which testosterone can impair the recognition of emotions. (100)

The administration of testosterone has also been found to diminish women’s accuracy in inferring mental states, and women with higher natural levels of testosterone have been observed to be less risk-averse than other women. Supplemental testosterone administered to men diminishes their performance on the Cognitive Reflection Test, which measures capacity to override intuitive judgments with deliberated answers. Estrogen administered to men increases their emotional response to watching a distressed person.

Among the most important but less widely known functions of testosterone is to masculinize the fetal brain. Testosterone surges in human males occur twice before birth, during weeks 12–18 and again during weeks 34–41; a third occurs in the first three months after birth. In the absence of these testosterone surges, the brain develops according to the female pattern, which is thus in some sense the “default” type of human brain.

Since this discovery was made in 1959, many experiments have been conducted on nonhuman mammals in which hormones are manipulated during critical periods of prenatal and neonatal development. It has been established that certain regions of the brain have receptors which accept chemical signals from hormones. These signals affect a cell’s anatomical connectivity and neurochemicals, and even whether it survives or not.

Complete androgen insensitivity syndrome (CAIS) is a rare but instructive disorder that affects genetically male humans, i.e., persons with a Y chromosome. Such persons produce normal amounts of testosterone at the proper time—but to no effect, because their androgen receptors do not work. Persons with CAIS are born with externally normal female genitalia, are reared as girls, and are in most respects indistinguishable from girls behaviorally.

A 2017 Swedish study identified many specific ways in which fetuses with a Y chromosome but affected by CAIS develop brains that are a mix of characteristically “male” and “female” patterns. To give just one example: they are characteristically female with regard to hippocampus volume and male with regard to caudate volume. The study concluded that similarities in brain structure between the CAIS women and female controls are due to the CAIS condition, while similarities with male controls were due to the effects of their Y chromosome.

A few defenders of feminist orthodoxy have written books critical of hormone research, and been rewarded with “uniformly and sometimes gushingly enthusiastic… reviews in the mainstream press,” according to Murray. (106) But the best that can be said for their critiques is that they have succeeded in pointing out how some research has fallen short of perfection due to “small samples, inconsistent results, and scarce replications.” (105) But neuroscientists have not put much effort into refuting these books, apart from a few of them “having had scathing things to say in blogs.” One researcher told Murray that “one reason you don’t find many critiques… is that people in the field really don’t care. It’s so evidently nonsense.” (106) In short, empirically oriented scientists live in a largely separate mental world from the armchair theorists of social constructivism.

Among the best attested neurological sex differences is the greater “laterality” of the male brain, meaning that it is “structurally optimized for communicating within hemispheres” (112) as a result of fetal masculinization. The female brain is optimized for communication across hemispheres: the corpus callosum, which connects the two hemispheres, is thicker in women, even after controlling for brain size and age. Men primarily use their right hemisphere for spatial tasks and their left hemisphere for verbal tasks, while women use both hemispheres for both.

When women suffer brain damage to the left hemisphere, they are less likely than men to develop language difficulties. Women’s language test scores after brain damage suffer the same effect whether the damage occurred to the left or right hemisphere, whereas men are more affected by damage to the left hemisphere. (110)

A recent study found that males have greater connectivity between the motor and sensory systems, and in systems associated with complex reasoning and control. Females have higher connectivity with the subcortical regions associated with emotion processing. Researchers say these results “suggest a better perception-action coordination in males, and better anticipation and subsequent processing of socially and emotionally relevant cues in females.” (115)

Go to Part 2.

Corona Conspiracy

You’d be forgiven for thinking that the Corona virus was the deadliest thing to hit the world since Spanish Flu. It was announced on February 12th that the Mobile World Congress in Barcelona had been cancelled due to the supposed epidemic, from which nobody of European ancestry has died [MWC 2020 Canceled Due To Corona Virus Concerns, RTT News, February 12, 2020]. The World Health Organization has gone so far as to brand Corona “Public Enemy Number One” [WHO brands corona virus ‘public enemy number one’, Voice of the People, February 12, 2020]. This being so, you’d think that the Chinese doctors who discovered and drew attention to this apparently deadly disease would be in line for the Chinese equivalent of the Congressional Medal.

But they’re not. In fact, when Wuhan medic Dr Li Wenliang tried to raise the alarm about the virus he was arrested and severely reprimanded by the police. Tragically, Dr Li has since died of Corona himself. Revelations of what happened have unleashed public fury even in the surveillance dictatorship that is modern China. Beijing has sent officials to Wuhan to find scapegoats in order to assuage public anger. But even prominent Chinese academics are publicly criticizing government censorship over the Corona scandal. The China Human Rights Lawyers Group have demanded, in an open letter, a lifting of censorship and that the day of Dr Li’s death — February 6th — be declared “The People’s Day of Truth.” Prominent Chinese academics have signed this letter [The fallout from the death of a Chinese doctor is turning into a major challenge for Xi Jinping, By James Griffiths, CNN, February 13, 2020].

One might assume that China is a paranoid dictatorship obsessed with control and frightened that any lack of control could result in a challenge to the regime. Dr Li’s behavior highlighted government incompetence — a serious disease outbreak of which they were not aware — so he had to be dealt with. But, of course, there’s another possibility, increasingly discussed online, that would more neatly explain the regime’s overreaction to this conscientious doctor: that the regime manufactured Corona in the first place.

There has been all kinds of speculation over this possibility, with even a Wikipedia page having been created about “Misinformation” regarding Corona, and how each conspiracy theory has been successfully refuted. The problem is that the conspiracy theorists implicitly mentioned by Wikipedia speculate wildly, without providing hard evidence. But it appears that there is evidence for one of the more outlandish theories: that Corona was created as a bioweapon.

It was announced on January 28th that two Chinese nationals, both of them scientists researching HIV, had been charged with serious offences, one of them for stealing 21 vials of biomaterial from Harvard’s Chemistry and Chemical Biology Department and trying to smuggle it out of the US, into China, in a sock. Both were charged with acting as agents of a foreign government — the People’s Republic of China — and with aiding that foreign government. One of this duo, the sock smuggler Zaosong Zheng, had been arrested in December, but the other, Yanqing Ye, had already managed to flee back to China. Ye, who was allowed to return to China after being interviewed at Boston Airport in December, falsely told officials that she was not working for China’s National University of Defense Technology. She would, obviously, never have been allowed access to such sensitive chemical research if she had told the truth; the truth being that she does research for the Chinese military.

Further charged was Dr Charles Lieber, the head of the Chemistry and Chemical Biology Department. It is alleged that Dr Lieber, despite working on a grant which specifically rules out having foreign interests, had, since 2011, been working as a “Strategic Scientist” at Wuhan University of Technology (WUT), in the city where the Corona virus began. According to our government, Lieber was running a “talent program” that “seek(s) to lure Chinese overseas talent and foreign experts to bring their knowledge and experience to China and reward individuals for stealing proprietary information.” It is alleged that, in 2018, Lieber lied to investigators about his involvement in this program [Harvard University Professor and Two Chinese Nationals Charged in Three Separate China Related Cases, Department of Justice, January 28, 2020].

So, we have the head of the department working clandestinely for a university in Wuhan — one only a few miles from the epicentre of the outbreak, by the way — as part of a program which aims to steal for the Chinese government. And we have two Chinese students, under this head’s ultimate supervision, stealing biochemical material and/or secretly working for the Chinese military. But the plot thickens further. A group of Indian scientists have sequenced the virus (in its animal form) and come to the conclusion that it was deliberately engineered [Uncanny similarity of unique inserts in the 2019-nCoV spike protein to HIV-1 gp120 and Gag, By Prashan Pradhan et al., bioRxiv, 2020].

Now, let’s be a little cautious. This is a pre-publication study and it has not, therefore, been subject to review by other scientists. But assuming it is a competent piece of research, then, taken with the other evidence, it does not look good for the Chinese authorities.

For the conspiratorially-minded, which is a reasonable default state of mind when dealing with a secretive and highly organized dictatorship, it might seem like something is awry. It might seem like the Chinese government has a program of biological warfare and, somehow, one of the weapons it has been working on has escaped, from Wuhan University of Technology, leading to the Corona virus that is causing so much panic. Or from the Wuhan Center for Disease Control (located 300 yards from the fish market where the infections supposedly began). Or from the Wuhan National Biosafety Laboratory.

It might seem like the Chinese authorities realised what had happened and wanted to suppress it, leading them to arrest the dutiful doctor who raised concerns about the outbreak.

It might even seem like the paranoid, tyrannical Chinese government have manufactured a virus for specific use against their own people, as a means of quelling any future rebellion against the regime’s authority.

It might even seem like the fact that the Chinese government are possibly capable of this might foment distrust, beyond China, in the governing elites more generally. This might explain why the British tabloid The Express has reported the “conspiracy theory” in its special “Weird Section,” as if to make light of it, and why the leftists of Wikipedia, whose aim is to prop up the current anti-White political dispensation, felt the need to create a page on “Misinformation” surrounding Corona.

The Express article quotes various Twitter “Conspiracy Theorists” but does not discuss any of the hard evidence for the supposedly wacky theory. And it’s all okay because the article tells us:

Dr David Jacobs, the co-founder of the Coalition of Ontario Doctors in Canada, tweeted: ‘Please stop with the conspiracy theories!’ Furthermore, ‘We’re also battling the trolls and conspiracy theories’ tweeted Dr Tedros Adhanom Ghebreyesus, WHO Director General. ‘The #coronavirus was NOT made in a lab. It is NOT a biological weapon’ [Coronavirus theories: Is coronavirus an experiment gone wrong? Is it a Chinese bioweapon? By Sebastian Kettley, Express, February 12, 2020].

So, it’s all okay then. It’s not as if, as has been noted by communication analyst Francisco Yus, the use of capitalization is associated with expressing strong emotion [Cyberpragmatics: Internet-mediated communication in context, By Francisco Yus, 2011, p.234]. And it’s not as if we would expect people to not feel strong emotions in response to a theory that was ludicrous and which they knew must be without a shred of truth. It’s not as if conflicted feelings over what you want to be true and what you know might be true create “cognitive dissonance” and, thus, an emotional reaction.

The bioweapon theory, at the very least, deserves to be taken seriously, until it is successfully refuted.

Organizing in the Face of Adversity: Lessons from History, Part 3: Southern Resistance to Racial Integration as Case Study

Go to Part 1.
Go to Part 2.

Now that we have examined our present terrain, we may look to a notorious past campaign of White resistance. Though still afflicted with egalitarian dogma, the most balanced examination of Southern “massive resistance” to the Civil Rights movement that I have thus far encountered is George Lewis’s Massive Resistance.[i] Because history is written by the victors, books about White resistance almost exclusively focus on Black actors and merely caricature, demonize, and marginalize the White advocates involved. Lewis takes them seriously. I recommend it to all White advocates, as we may learn much from the resisters’ successes and ultimate failures.

In 1954, the Supreme Court fundamentally transformed the United States with Brown v. Board of Education. By targeting our children and their education, much of the groundwork for the current regime was laid. The truth of how the ruling was preordained and engineered has been recounted by Jared Taylor. Because Plessy v. Ferguson and the segregation that it underpinned were entirely and indisputably constitutional, Thurgood Marshall of the NAACP (and later, our first of several affirmative action Justices) employed the fraudulent sociology of Kenneth Clark’s laughable, poorly-designed doll studies (and suppressed contradictory evidence) to argue the theory that Blacks had poor self-esteem because of White racism. Meanwhile, Leftist Justice Felix Frankfurter was involved in backroom collusion with his former clerk, and then-current staffer in the Solicitor General’s office (handling the government’s argument), Philip Elman. This, coupled with the death of Chief Justice Vinson (who was replaced with the robed activist Earl Warren), the heart attack of Justice Jackson (who agreed to the ruling in his weakened health), and the peer pressure applied to Justice Reed, ensured the unanimous ruling. Though ostensibly based on the Fourteenth Amendment[ii] (itself never actually legitimately ratified, and written only to apply to newly emancipated slaves), the decision was reached absent any coherent legal argument.

White Resistance to Civil Rights and Racial Integration. In February 1956, Virginian Senator Harry F. Byrd, Sr. proclaimed that “if we can organize the Southern States for massive resistance to this order, I think that in time the rest of the country will realize that racial integration is not going to be accepted in the South.” He, along with fellow stalwarts Strom Thurmond of South Carolina and Richard Russell of Georgia, drafted the Southern Manifesto, signed by one hundred and one Congressmen, announcing the unified resistance of a Solid South. Prominent political operatives who had been deeply involved in the 1948 Dixiecrat revolt (in which States’ Rights Democrat candidate Strom Thurmond carried four states) helped to organize grassroots supporters into the Citizens’ Councils and other organizations; they quickly developed an extremely sophisticated propaganda dissemination campaign. Literature, radio ads, and television spots were churned out at breakneck pace; we must emulate this propaganda campaign, but it can only be effected if we can raise the requisite funds.

Eight states created special legislative bodies to counteract Brown; the former states of the Confederacy passed 136 different legislative items in under three years. Mississippi Senator James Eastland, Chairman of the Senate Judiciary Committee, announced that “you are not required to obey any court which passes out such a ruling. In fact, you are obligated to defy it.” North Carolina devolved integration to local option assignment plans to ensure that local school boards would have to be individually challenged, while Virginia proposed state-funded tuition grants for Whites to attend private schools and empowered the legislature to close schools under direct desegregation orders. State legislatures targeted Jewish-funded and -staffed leftist front groups such as the NAACP, exposing their membership rolls to the public until the Supreme Court forbade it in 1958. Arkansas Governor Faubus and South Carolina Governor Hollings demanded that the Kennedy Administration investigate the sources of the protesters. Agitators were blacklisted by local businesses and utilities providers. Leftist sit-in protestors were arrested for criminal trespass and disorderly conduct until 1963, when the Supreme Court inexplicably ruled that sit-in protesters were not trespassers if their target was segregated by ordinance. District Court Judge J. Skelly Wright made legal history by enjoining the entire Louisiana state legislature with a restraining order to prevent them from further pushing back against Brown. By the start of the 1964–5 school year, five states had succeeded in their minimum compliance strategy, with less than one percent of Black students “integrated” into White schools. Successful combinations of grassroots activists and sympathetic local and state officials halted desegregation at several schools. Where desegregation was achieved, such as at the University of Alabama, the victory was often pyrrhic.

Ultimately, though, the enemy won. In 1957, President Eisenhower, in a move uncomfortably redolent of the worst excesses of Reconstruction, federalized the National Guard and ordered a thousand soldiers from the 101st Airborne, along with a phalanx of FBI agents, into Little Rock, Arkansas, to integrate Little Rock Central High School at gunpoint. As Florida Senator George Smathers remarked, “Surely all of us should have learned by this time that neither courts, nor troops, nor decisions, nor force, can make one group of our citizens wish to associate with another.” In the words of LSU professor Peter Carmichael, “Ignoring the primordial fact of White repugnance [at forced integration] and resorting to coercion is the opposite of a solution- it is the generation of more and harder problems.”

In 1961, another showdown occurred at Ole Miss in Oxford, Mississippi; Senator James Eastland noted that the outcome would “determine whether a judicial tyranny as Black and hideous as any in history exists in the US.” Amidst the waving of Confederate flags and the singing of Dixie, Governor Ross Barnett led the Ole Miss-Kentucky halftime crowd in chanting, “I love Mississippi! I love her people! I love her customs!” Unbeknownst to the crowd, Barnett was in backchannel negotiations with Attorney General Robert Kennedy. When Kennedy threatened to reveal their conversations, Barnett backed down after firing this parting shot: “Gentlemen, you are tramping on the sovereignty of this great State and depriving it of every vestige of honor and respect as a member of the union of states. You are destroying the Constitution of this great Nation. May God have mercy on your souls.” The resultant mass violence in Oxford had to be quelled by forces from Memphis.

Even Alabama Governor George C. Wallace (who had declared in his 1963 inaugural address that “in the name of the greatest people that have ever trod the earth, I draw the line in the dust and toss the gauntlet before the feet of tyranny, and I say: Segregation now, segregation tomorrow, segregation forever”) stood down from the schoolhouse door that very June.

The indispensably prescient Wallace merits further examination as a clear link between the Old Right and the Dissident Right. A natural populist, Wallace courted the White working-class, North and South, standing for traditionalism, communitarianism, family values, anticommunism, cultural nostalgia, and state sovereignty against the inexorable growth of the federal Leviathan. His national conservative populism, over four presidential campaigns and four terms as Governor, clearly presaged our current movement. Running in the Democrat primary of 1964, he said, “I know I can’t win … [but this will] give me a chance to let the people know … the dangers they face from the encroachments of their own government.” In 1966, he famously and aptly remarked that “there’s not a dime’s worth of difference between the Democrat and Republican parties.” Running under the aegis of the American Independent Party in 1968, his slogan was the beautiful “Stand up for America.” His 1972 Democrat primary campaign was cut short by the bullet of a would-be assassin, paralyzing Wallace from the waist down.

The rhetorical strategy of massive resistance was two-pronged. One prong focused on the White working-class, while the other focused on a more highbrow “respectable” approach. Upton Blevins explained the expected consequences of integration; it would induce our children “at a tender and impressionable age, through close contact, heavy pressure and continuous propaganda to lose all racial pride and restraint, to ignore their parents and to sink in delinquency, degradation, filth, total sin and the early destruction of the White race in the South through the pollution of their blood streams and total mongrelization.” Thomas Waring of the Charleston News and Courier expressed commonly held anxieties over venereal disease, illegitimacy, promiscuity, crime, and the effect of the Black intellectual gap on White students: “Which would you really put first: your theory of racial justice, or justice to your own child?”

Fears of miscegenation and the mainstreaming of Black culture have been utterly validated by our present state of affairs. Many of our public schools have been destroyed. White children now form the minority nationally; those schools which have fallen are often beset with regular violence, ubiquitous truancy, rampant substance abuse and promiscuity, with test scores several years behind grade level. The violence seems likely to accelerate as Whites dwindle. Starting this year, the state of California will no longer suspend disruptive students—overwhelmingly non-White, citing the “school-to-prison pipeline.” Our public-school system routinely graduates students who are functionally illiterate. Children lose their virginity at an earlier age with each passing year. As we slouch toward Idiocracy, up to one-third of the US population cannot name a single branch of the government; only about one-quarter can name all three. The music our children grind to is Black, a debased glorification of dissipation, murder, rape, and deviant sexuality. Many young White women live in degeneracy, dressing like prostitutes, becoming ever-more sexually adventurous, and adopting the Black lexicon of “booties,” “hoes,” and “bitches.” Many young White men embrace Black fashion and drug culture.

The Intellectual Basis of Resistance to Integration. The intellectual approach to forced integration focused on the aforementioned constitutional doctrine of interposition according to which the federal government, particularly the judiciary, was deemed to have encroached upon the sovereignty of the states. This doctrine argued that it was the duty of state to interpose themselves between federal tyranny and their citizens. Seven states drafted resolutions explicitly committing themselves to nullification. Alabama, for example, passed a resolution declaring Brown “null, void, and of no effect. … This State is not bound to abide by them.” The Virginia-based Commission on Constitutional Government flourished, attracting Pennsylvania Republicans W. Stuart Helm and Albert W. Johnson with its constitutional principles. In 1963, the CCG launched a campaign to pass ‘the silent amendments’ via the National Legislative Conference. These amendments would have allowed state legislatures to propose amendments by two-thirds of the states, remove reapportionment from federal jurisdiction, and establish a Super Court” made up of the fifty state Chief Justices, with power to overrule the Supreme Court on constitutional issues. They were supported by fourteen states.

These more highbrow proponents of resistance demonstrated the vitally important task of appealing to the Founders; it is imperative that we continue to expose and talk ceaselessly about the explicit White racial consciousness inseparable from the American Founding. In a recent piece, I argued that “Americans do not have a White racial consciousness because we never needed one.” By this, I did not mean that we have literally never had White racial consciousness, but rather that the ever-present White identity was implicit and taken for granted. It was simply understood that this nation was created of, by, and for Whites. The more that Americans understand this, the more they will become receptive to White advocacy. These appeals to history and the Constitution are crucial; we must be careful not to appear too heavy-handed, while at the same time staying aggressive, assertive, and true to our position. We must refrain from vulgarity and retain the moral high ground, but we must still be wary of abstraction, for men do not love esoteric legal theories. We have to try to strike the proper balance between abstraction and concrete proposals; and we have to adopt a dignified, confident demeanor. Blacks won the high ground of “respectability” by persistently dressing their activists in their Sunday best and affecting a “quiet dignity.”

Whites, who like order and abhor chaos, quickly grew uncomfortable with segregationist violence. When, however, this violence occurred, it was often engineered by the opposition. Birmingham Public Safety Commissioner Bull Connor, Montgomery Police Commissioner L.B. Sullivan, and Dallas County Sheriff Jim Clark were goaded into violence by the leaders of the NAACP, the Congress of Racial Equality, and the Southern Christian Leadership Conference (SCLC), and other activist organizations. The Freedom Rides were planned for the sole purpose of generating violence. After Clark roughly arrested a woman in Selma, the Civil Rights coordinators “went back to the church that night and voted him an honorary member…and from then on they played him just like an expert playing a violin.” As Hosea Williams of SCLC said, “We must pray that we are attacked, for if the sheriff does nothing to stop us, …then we have lost. … We must pray, in God’s name, for the White man to commit violence, and we must not fight back.” Police Chief Laurie Pritchett of Albany, Georgia, studied Gandhian tactics in order to formulate the best strategy for countering ostensibly nonviolent protesters. Understanding that police violence was the key to the success of the enemy, he planned extensively to avoid it; for example, he imprisoned protesters in diffused concentric circles to deny activists and journalists a central focal point.

As Senator Byrd announced, “I deplore the violence which has occurred in [Alabama], but it must be realized that it was deliberately provoked by a mixed group of outsiders who went to Alabama to influence the people for propaganda benefits.” Sensationalized coverage of Southern violence (for a more recent example, see the film Mississippi Burning) was instrumental in the passage of the 1964 Civil Rights Act and 1965 Voting Rights Act. The violence also solidified criticism of resistance from the business community; as one member of the Birmingham Chamber of Commerce remarked, “If we’re going to have good business in Birmingham, we better change our way of living.” One infamous photo from Birmingham, showing a police K9 lunging at a Black protestor, had particular impact; little did the public know, the Black man was carrying a concealed knife. The 1963 Sixteenth Street Baptist Church bombing certainly did not help matters. The 1955 murder of Emmett Till provided an ever-present martyr to worship; as with all martyred Leftists, there is more to the story of the circumstances surrounding Till’s death. The enemy had won the propaganda battle, “casting themselves as non-violent Christians wanting no more than the rights that had been guaranteed to them by the Founding Fathers.” By coopting Christianity and engineering staged set-pieces for national and international media, Leftist operatives won the narrative war.

The Jewish Representative Emanuel Celler (author of the 1965 Hart-Celler Act that completed the Jewish coup and what may be soon regarded as the death warrant of the United States of America) introduced the Civil Rights Act. Virginian Howard Smith bottled up the bill in the House Rules Committee, and Senator Byrd studied arcane rules of procedure to do the same in the Senate. President Johnson gave Byrd “the treatment,” and he relented. Georgian Richard Russell attacked the Communists blatantly involved in the Civil Rights campaign and enlisted the other senior Southern Senators (including South Carolinian Thurmond, who had in 1957 set the still-reigning record for the longest filibuster, speaking for twenty-four hours and eighteen minutes in opposition to a civil rights bill) for a collective filibuster from March 9 to June 10, 1963. North Carolina Senator Sam Ervin proposed a double jeopardy amendment that would have ruled out a second federal trial for those cleared by a state court for the same offense, thus defanging the Act by robbing the federal judiciary of the opportunity to go after segregationists who had been acquitted at home. Ervin’s amendment actually won by a single vote, but the vote was overturned on a technicality.

There was no massive resistance in response to the Civil Rights Act (1964) and the Voting Rights Act (1965). Since, unlike Brown, these were pieces of legislation that had gone through the legislative process, many of the most vociferous segregationists hung up their hats. Many political leaders perceived that resistance was no longer expedient for their careers. No minds or hearts were changed, but the will to overtly resist as they had was finally worn down; resistance was forced to turn from being proactive to reactive, from vanguard to rearguard. Many Southern states enacted complicated vote dilution schemes to blunt Black votes, including the gerrymandering and annexation of electoral wards, along with full-slate and ‘at large’ elections. Explicit calls for enforced segregation morphed into arguments for freedom of association, from communitarian racial ideology to individual liberty. Gradually, racially grounded arguments disappeared from the American political scene. Even in response to the forced busing ushered in by Swann v Charlotte-Mecklenburg County, nothing like massive resistance reared its head. Whites typically reacted quietly and individually, rather than collectively, though parents’ anger was belied by statements such as these: “I served in Korea. I served in Vietnam. I’ll serve in Charlotte if I need to.” There was no response to the 1981 division of the Fifth Circuit Court of Appeals into the Eleventh Circuit which further weakened the South,

Though it was united insofar as it was arrayed against a common enemy (akin to the few decades of American “conservative” unity against Communism), massive resistance suffered from disunity and tonal uncertainty. The South was caught in a vise, contending with internal pressure from the business progressives of the Sunbelt of the New South, whose only allegiance was to profit, and overwhelming external pressure from the federal government; resisters could not decide on a unified methodology for achieving their common goals. The Citizens’ Councils and other affiliates were coordinated on the local and state level, but the response was rarely synchronized across state lines. Organizations were only loosely connected under the auspices of a phantom umbrella; this often resulted in something like a boat with each rower paddling in a different direction. Though massive resistance benefitted greatly from the involvement of powerful politicians, those same politicians’ personal ambitions and calculations certainly contributed to the relatively sudden death of resistance. Crucially, unlike during the secession in 1860, the churches afforded no organizational direction to their congregations; the Southern Baptist Convention almost immediately endorsed Brown. The parallel cancers of Zionist and Leftist theology had poisoned nearly every branch of the Christian tree; White advocates should not take this to mean that Christianity is irredeemable. The root is still pure.

My reading of Massive Resistance left me with many questions worth pondering: Was desegregation inevitable? Did our more explicit resistance play into the enemy’s hands, or did we not put up enough of a fight? Did the movement make a mistake by gradually turning aside from racial argumentation? Did the resisters underestimate the power arrayed against them? How do we strike the right balance between telling the unvarnished, brutal truth without alienating potential supporters? In other words, how far will subtlety actually carry us? If a potential supporter would be alienated by what we have to say, do we even want their support? Aren’t they just deadweight? Are we only capable of success at the local and state level, or is a national campaign possible? Why did massive resistance lose its steam? How do we reach wider audiences so that we are not only preaching to the choir? Is money the only solution? How far are we willing to go to fight for our civilization and our people? How much are we willing to sacrifice? How can we overcome insufficient commitment? How do we come to terms with the creeping probability that we may never get a majority of Whites to join us?

There is, of course, another possibility. This is the gnawing fear that perhaps we will not fight, that perhaps there already are not enough of us, that we may in fact lay down our arms as so many of our European brethren have. But what people even deserve to survive who refuse to protect their own children? Maybe there will be no Last Stand. No Thermopylae. No Alamo. No Pickett’s Charge. No Rorke’s Drift. But even in the depths of this despair, we must still gird our loins to fight on. Sisyphus kept pushing, rather than let the boulder crush him. We have rebounded before, though our current crisis may be the worst that the West has ever faced. After the fall of Rome, the West did not die, but rather massively contracted; the Dark Ages were not so dark. As long as we monastically keep the flame and preserve the texts, our civilization will persist, a foundation from which, as Ferdinand Bardamu recently wrote, we “may be able to give future Whites the opportunity to rebuild a society of their very own upon the ashes of post-Western degeneracy.” Where even a few of us remain, so too does the West. But is that the best that we now can hope for?

5 Lewis, George. Massive Resistance (London: Bloomsbury Academic, 2006).

6 Berger, Raoul. Government by Judiciary (Indianapolis: Liberty Fund, 1997).


Organizing in the Face of Adversity: Lessons from History—Part 2: The Contemporary Environment

Go to Part 1.

We must examine our present state of affairs before we can contemplate organized resistance. The reality is that we no longer live in a free country. We are only free in the sense that we may buy whichever variety of condom that we would like. The truth is available, but only for those who have the time and the motivation to seek it out from underneath the mountain of lies that it is hidden under; as a consequence, and in the absence of funding, essentially only the already converted are exposed to the truth. The general population is bombarded every day with racial propaganda, while nauseatingly horrific anti-White crime goes unreported. The drug-feud murder of Matthew Shepard is turned into a homosexual “hate crime.” George Zimmerman was airbrushed White by CNN, which also took the liberty of doctoring away the head wounds he sustained in Trayvon Martin’s brutal assault. The Martin family’s race hustler legal team recruited an imposter to pose as Trayvon’s girlfriend and fabricate testimony. Prosecutors eagerly went along with the hoaxed evidence so that they could arrest Zimmerman. Black Lives Matter exists mainly to continue producing race hoaxes and demonizing law enforcement; their rallying cry, “hands up, don’t shoot,” is itself a fraud. We have entered a brave new world of media manipulation; the rise of deep fake technology presents us with both a terrifying challenge to overcome and a thrilling opportunity to further discredit Big Journalism.

We have only nominal freedom of speech, as heretics are ostracized as social pariahs, cast into outer darkness. Our enemies at the Anti-Defamation League and the Southern Poverty Law Center, allied with legions of ‘constitutional’ law professors and legal scholars, are campaigning to eliminate even our nominal freedom of speech by creating out of whole cloth the concept of “hate speech.” Many have lost their jobs. Big Tech monopolies like Google and PayPal close accounts without notice. The ruling class is rapidly building a panoptic social credit regime modeled on the Chinese system, with dissidents functionally deprived of all of their rights as public citizens. Growing calls from the largest banks for a cashless society should frighten us all. Goldman Sachs will no longer underwrite Initial Public Offerings if a business is too White.

Even the slightest hint of White identity is hunted down and extinguished before it actualizes into a threat against the egalitarian regime. The US Army and Navy rapidly investigated when cadets were filmed making the ‘OK’ sign. This White Scare, to borrow Jared Taylor’s parlance, is intensifying into a very real danger for all White advocates; indeed, for all Whites. Taylor was banned from traveling to Europe and Greg Johnson was arrested and deported from Norway, solely for their heterodox views. White advocates are hounded from and prevented from participating in public life. The city of Madison, Wisconsin, declared that racism is a “public health crisis,” blaming White racism for the poor health and proclivity for violence of Blacks. That case study in mediocrity, David French, has announced in National Review that “it’s time to declare war on White-nationalist terrorism.”

Broad bipartisan support is growing for “red flag” laws to disarm Whites, who may be reported anonymously for any reason and then stripped of their constitutional rights without recourse. New York’s “criminal justice reform” includes the elimination of bail (replete with paying criminals to return for their court dates), the examination of crime scenes by criminal defendants, and the outing of witnesses to the criminals they are to testify against. Illegal alien criminals routinely get away with murder. The House of Representatives has introduced what might be the most radical piece of legislation in American history, the “New Way Forward Act”, to accelerate and codify this anarcho-tyranny.

Democrat presidential candidates have gone whole hog on all of this. Senators Elizabeth Warren and Bernie Sanders advocate government regulation of all media platforms “to stop the spread of hate,” with Warren going so far as to propose criminalizing the spread of “disinformation.” Former Vice President Joe Biden launched his campaign with a video focused on the Charlottesville hoax and recently announced plans to make Big Tech censor even more, finding the current tyranny woefully inadequate. Senator Cory Booker proposed a separate law enforcement agency for “hate crimes and White supremacist violence.” Mayor (and Episcopal Saint) Pete Buttigieg has said that he supports applying the same strategies used to combat “foreign radicalism,” meaning Islamic jihad, to suppress White nationalism. One must wonder if this includes the Obama Doctrine of murdering American citizens. In a speech last year at the Brookings Institution, Kevin McAleenan of the Department of Homeland Security announced that the agency is prioritizing “racially based violent extremism, particularly White supremacist terrorism.”

An ominous sign of the acceleration of our suppression is the growing push to medicalize “racism,” ably recounted by the eminent Jared Taylor. Much like the Soviet Union institutionalized dissidents by diagnosing them with “sluggish schizophrenia,” those of us who are even accused of straying from the regnant dogma are seen as mentally diseased. There simply must be something wrong with the heterodox; after all, everyone knows that diversity is our greatest strength. Psychiatrists study “intolerant personality disorder” and “pathological bias.” Oxford researchers claim that the blood pressure drug propranolol, which blocks neurons that have to do with threat perception, may pharmacologically “treat” racism and “cure” our intolerance. Neuroscientists have also experimented with magnetically disabling parts of the brain such as the pMFC, which helps to respond to threats, in order to change attitudes regarding faith and immigration.

The first episode of HBO’s Watchmen may provide a look ahead into what lies in store as the White Scare is increasingly militarized. Set in the present day in an alternate America (one whose flag has almost twice as many stars as ours), the Leftist regime of President Robert Redford has instituted sweeping racial “reparations,” authorizing a vast transfer of wealth from White to Black. The events of the episode take place in Tulsa, Oklahoma; the signs of Black power are manifest, from an all-Black production of the musical Oklahoma to a plethora of Black monuments. All of the Blacks we see are upper or middle-class figures of authority. The majority-Black police force (though of course led by enlightened Whites) is forced to wear masks to protect their identities from the violent White resistance organization known as the Seventh Kavalry, or 7K. The group is named for the Seventh Cavalry Regiment slaughtered at Custer’s Last Stand, now memorialized at Little Bighorn Battlefield National Monument, renamed in honor of our soldiers’ butchers by President George H.W. Bush in 1991.

The episode begins with a highly fictionalized account of the 1921 Tulsa race riot, glorying in its depiction of sneering Klansmen as they indiscriminately massacre good, hardworking Blacks. This ham-handed scene is followed up by the present-day murder of a Black police officer by a member of 7K. Upon learning of the killing, “Sister Night,” the empowered Black hero (and our police officer protagonist) speeds down to “Nixonville,” a sad trailer park community that HBO’s writers clearly reveled in portraying. These poor Whites, relegated to heavily-surveilled slums, are caricatured trailer trash. Sister Night gets out of her car, calmly catwalks up to one of the trailers, kicks in the door, and brutally beats the occupant, tossing him in the trunk of her car. At police headquarters, the man is “interrogated.” In order to divine his racist thoughts, the police place the man in a device called “the pod.” In this “pod,” a circular metal room walled with a continuous video screen, a White officer called “Looking Glass” announces that he will be asking the “suspect” some questions.

The man asks for a lawyer, and Looking Glass responds, “Yeah, we don’t really have to do that with terrorists.” He then clicks a remote and the video screens turn on. While Looking Glass asks the man questions (such as, “If I defecated on the American flag, how would that make you feel?,” “Should all Americans pay taxes?,” and whether he believes in false flag conspiracy theories), harsh music blares and images of Americana are shown one after another in rapid succession: cornfields, barns, classic advertisements featuring Whites, the moon landing, cowboys, a merged Gadsden and Confederate flag, Mount Rushmore, American Gothic, George Custer, Rosie the Riveter, a woman with an apple pie, the World Trade Center, White children, and the Statue of Liberty. Interspersed in the Americana are images of a self-immolating monk, Harriet Tubman, a Klan rally, the Negro Baseball League, Blacks graduating from college, Black soldiers, the atomic bomb, and finally the flag of the Third Reich followed by the now-old American flag. Looking Glass was monitoring the man’s eye dilations in response to his “bias questions.” Sister Night then viciously tortures the “terrorist,” his blood and urine commingling in a heavy stream from underneath the door. We need not continue in our examination. The message is clear; White identity is an evil that must be eradicated, and there are no moral compunctions in how that is accomplished. In so doing, however, Watchmen proves our point: American history, along the Americana that materially represents our culture, is White. American identity is necessarily White identity.

Randy Weaver and the Siege at Ruby Ridge: The Past as Future?

The Siege at Ruby Ridge[i] is an instructive example of what we face, all the more so for the fact that it occurred almost thirty years before the fervid witch-trials of the present regime began in earnest. Randy Weaver and his family were Christian fundamentalist White separatists, targeted for destruction because of their beliefs. Randy and his wife, Vicki, moved from their native Iowa to remote Ruby Ridge, Idaho, to live independently off of the land. They simply sought escape from our fallen and degraded society. Randy dabbled in local politics, unsuccessfully running for sheriff in 1988. He was put on the federal radar in 1985 after a vindictive former neighbor wrote letters to multiple federal agencies spuriously alleging that Weaver had made threats against government officials, including President Reagan. The Weavers were interviewed and determined not to pose any threat; no charges were filed. At the 1986 World Congress of Aryan Nations, Weaver was approached by Kenneth Fadeley, a paid-per-conviction ATF informant; in 1989, Fadeley persuaded Weaver to sell him two shotguns. In a brazen case of entrapment, the ATF informant asked Weaver to saw them off and showed him where to shorten the guns. After Fadeley’s cover was blown in 1990, the ATF threatened to charge Weaver for selling the illegal sawed-off shotguns (even though the guns were shortened after the fact at the behest of an informant). Government agents told Weaver to act as their informant in order to make the charges disappear; principled, he refused.

He was loosely acquainted with members of the Aryan Nations, which was the target of prosecutorial obsession (akin to the current White Scare) due to the involvement of a handful of fringe members with The Order, a small, violent White supremacist group.  The leader of The Order, Robert Mathews, was burned alive in his own home by the FBI’s Orwellian Hostage Rescue Team in 1984. Federal agents mistakenly believed that Weaver was much more deeply involved than he actually was, which partially explains their zeal. Ironically, however, the Aryan Nations was already almost entirely compromised, with federal operatives from several different agencies (none of whom communicated with each other) occupying many of the key positions within the organization. To further coerce Weaver into acting as an informant, agents posed as a couple with a broken-down car; when he and Vicki stopped to help them, as kind rural White Americans do, ATF agents swarmed from the car, guns drawn, and violently threw Randy to the ground to place him under arrest. Vicki was pushed face first into the snow. Nearly destitute, the Weavers posted their beloved cabin as bond. Weaver still refused to be an informant.

In 1991, Weaver was officially charged. His court date was set for February 20, but he was notified, not coincidentally, that it was set for March 20. After he failed to appear in court on February 20, the US Attorney’s Office, despite knowing of the erroneously communicated date, sought a grand jury indictment and obtained an arrest warrant. For the next eighteen months, the US Marshals Service spent hundreds of thousands of dollars on a vast surveillance campaign that included psychological profiles, military aerial reconnaissance, a network of spy cameras placed throughout the surrounding woods, and mail interception. The agents even kept a record of the menstrual cycle of Sara Weaver, Randy and Vicki’s sixteen-year-old daughter. A voluntary surrender was negotiated and then rejected by the US Attorney.

On August 21, 1992, the federal hammer fell. Six Marshals, equipped with military camouflage, night-vision goggles, and M16 automatic rifles approached the cabin. The family dog, Striker, was alerted to their presence and started barking. The Weavers’ fourteen-year-old son, Sammy, followed the dog, along with Kevin Harris, a family friend. The firing was initiated when a Marshal killed the Striker to silence him, shooting him in the back with a machinegun. Sammy fired in the direction the shot had come from, and Randy called for Sammy to run back to the house. Sammy yelled, “I’m coming, Dad,” and was shot several times in the back as he ran away into the safety of his father’s arms. Sammy was killed instantly, his back torn to shreds by an agent’s machinegun. A Marshal was killed during the frenzy, and agents blamed Harris. Though Harris had fired back in self-defense, the federal agent was most likely not killed by him, but rather by friendly fire. The FBI’s Hostage Rescue Team was called in.

The cabin, presented in the media as an “armed compound,” was surrounded by hundreds of government agents. Rules of engagement were quickly drafted to authorize deadly force against the Weavers; agents present characterized their orders as “if you see ‘em, shoot ‘em.” On August 22, FBI sniper Lon Horiuchi shot Randy from behind, attempting to sever his spinal cord as he walked to retrieve his son’s dead body; as Randy staggered back into the cabin, Horiuchi shot Vicki Weaver in the head as she held their ten-month-old baby in her arms. Just like her young son Sammy the day before, she was killed instantly, the side of her face blown away. Randy pried their infant, crying “Mama,” out of her mother’s dead arms, and he and his daughter Sara dragged the bloody body of the mother-of-three through the kitchen. Kevin had been hit in the shoulder by the bullet that felled Vicki.

For the following eight days of the siege, as Sara prayed for deliverance and tended to her baby sister, her wounded father, their dying friend, and her ten-year-old sister Rachel, covered in her mother’s blood, the FBI used loudspeakers to taunt the family, saying inhuman things like, “Good morning, Mrs. Weaver! We had pancakes for breakfast. What did you have?,” “Did you sleep well last night, Vicki?,” and, “Behind every strong man there is a good woman. … Can we get some milk for [the baby]?” They named their base “Camp Vicki.” At one point during the siege, a local news crew observed gasoline being loaded onto an FBI helicopter which was then seen circling the Weavers’ cabin; after noticing that it was being videotaped, the helicopter left the area.

Our government engaged in further psychological warfare. The telephone was kept constantly ringing, the loudspeakers always in use, the drone of tanks and helicopters in harsh cacophony. Brilliant spotlights were kept trained on the windows to make night indistinguishable from day. Similar tactics were used during the Siege at Waco,4 where agents destroyed the water supply, defiled a grave, and employed loudspeakers to blare the sounds of rabbits being slaughtered along with Tibetan chanting and roaring jet planes.

In the aftermath of the siege, Randy was convicted only of failing to appear in court. Kevin Harris was acquitted of all charges. Weaver received a three-million-dollar settlement, Harris a four-hundred-thousand-dollar settlement. The federal government attempted to destroy all copies of the FBI internal report. Evidence was “misplaced,” withheld, and fabricated. Horiuchi claimed that he could not see through the cabin door, but a sketch he made the day after the murder clearly shows Vicki in the window. This evidence was suppressed by the FBI, purposely mailed to prosecutors two weeks late and fourth class to boot. He pled the Fifth and had his manslaughter charge dismissed due in large part to behind-the-scenes lobbying for blanket immunity by none other than our current Attorney General, William Barr. Horiuchi received no punishment whatsoever and went on to slaughter more innocent civilians at Waco the very next year. The Ruby Ridge Task Force released a heavily redacted report. The six Marshals who initiated the siege, murdered the Weavers’ dog, and murdered Sammy Weaver received the highest commendations. The Department of Justice declined to prosecute the FBI officials who covered up the tragedy, but one agent was sentenced to eighteen months in prison and a handful of others suspended for a few days. Deputy Director Larry Potts was censured (the same punishment given for misplacing FBI property) and eventually demoted following the massacre at Waco. In 1997, the cabin that the Weavers had poured their hearts and souls into collapsed under the weight of winter snow.

A more recent example of this violent suppression is the 2016 murder of the rancher Robert LaVoy Finicum. Finicum was a spokesman for the Citizens for Constitutional Freedom militia, which occupied the Malheur National Wildlife Refuge in Oregon for forty-one days at the beginning of 2016. The context for the standoff is labyrinthine; the core issue is the decades-long struggle over Western lands between the federal government and environmentalists on one side, and local governments and cattle ranchers on the other. The Bureau of Land Management controls over one-eighth of the landmass of the United States, the overwhelming majority of it in the Western states.

The events that led to Finicum’s death trace their origin to two prior confrontations. The first was the 2014 Bundy standoff in Nevada, which began when Cliven Bundy’s cattle were determined to threaten the habitat of the desert tortoise. Bundy was then ordered to reduce his cattle population and the extent of their grazing; when he refused and discontinued payment of BLM grazing fees, he quickly fell over one million dollars in arrears. Federal agents began removing hundreds of Bundy’s cattle, killing several. Armed militiamen came to Bundy’s defense, culminating in a tense standoff.

The second event, which most directly precipitated the events of January 2016, was the federal harassment of father and son Dwight and Steven Hammond. The Fish and Wildlife Service had attempted to buy out the Hammonds for years, to no avail, to add their land to the Malheur National Wildlife Refuge. In conjunction with the BLM, the ranchers adjacent to the Hammonds had all been forced out. Grazing permits were revoked, and fees significantly hiked for those remaining. The irrigation system that the ranchers had so painstakingly built was intentionally diverted, destroying once-thriving ranch land. Finally, ostensibly at the behest of environmentalists (despite the fact that the privately managed land was in fact more biodiverse than the federally managed land), the Hammonds also had their grazing permits revoked. Their land was fenced, and the process of removing their cattle began. When they set controlled burns, as they always did, they were arrested on charges of arson on federal land. After one served one year in prison and the other three months, capricious federal prosecutors appealed their sentences to the Leftist Ninth Circuit Court of Appeals. Unsurprisingly, the harsher sentences sought were granted: five years for father and son. Two years later, President Trump pardoned the pair.

Ammon Bundy, son of Cliven, rallied hundreds of patriots (federal informants peppered among them) to the Hammonds’ defense, and they arrived in droves to occupy Malheur. LaVoy Finicum was among them; in his words, “It’s about freedom for all of us, and so I crossed the Rubicon and I came here.” On January 26, Finicum and several militia leaders left the refuge to attend a town hall meeting led by the sympathetic sheriff of Grant County. Traveling through Harney County on a remote stretch of Route 395, selected by federal agents for its unreliable cell phone service, Finicum’s convoy was ensnared in a trap set by the FBI. In unlit and unmarked vehicles, government operatives followed Finicum and suddenly turned on their lights, ordering his truck to pull over. He refused, stating his permission to visit the sheriff, and sped away to try to escape. Unbeknownst to him, however, the FBI had set another roadblock ahead, known as a “deadman” block because it presented no opportunity for escape.

Finicum attempted to go around the block, but was forced to a stop by heavy snowdrifts. A member of the by now all too familiar FBI Hostage Rescue Team fired shots before Finicum had even stopped his truck; one shot pierced the ceiling of the truck, while the trajectory of the other is disputed. Reports vary on whether Finicum was still inside the truck when the second shot was fired: agents claim that the shot went wild while he had not yet exited the vehicle, while some witnesses report that it hit Finicum in his side just after he had stepped out. Finicum undisputedly exited the truck with his hands in the air and challenged the agents, “You back down or you kill me now. Go ahead. Put the bullet through me. I don’t care. I’m going to go meet the sheriff. You do as you damned well please.” Agents reportedly saw Finicum reach for his pocket, and he was shot three times in the back. One day short of his fifty-fifth birthday, the father of eleven was killed.

Agents did not attempt to provide Finicum with medical assistance for ten to fifteen minutes. A loaded 9mm handgun was found in his pocket. Those who report that Finicum was hit in the side by the aforementioned second shot claim that this is what made him twitch to the side, supplying the pretext for his killing; other witnesses report that Finicum was struggling through the snow, and that this is why he appeared to reach for his side. His own statements in the weeks leading up to his killing have led some to postulate that his death was a suicide-by-cop, including, “I have no intention of spending any of my days in a concrete box” and, “I’m not going to end up in prison. I would rather die than be caged. And I’ve lived a good life.”

What is not disputed is that Finicum was a sitting duck, surrounded on all sides by federal agents and Oregon State Troopers. The illustrious FBI Hostage Rescue Team failed to disclose the first two shots fired, saying that they had not fired at all, and agents collected their casings before they could be logged in as evidence. Before the commencement of the operation, the FBI ordered the Oregon State Police to remove their body cameras. Apparently, dashboard cameras were also disabled. After the killing, the FBI refused to submit to a recorded interview, and demanded that the off-the-record interview be conducted with the entire team, rather than one-on-one. Consequently, there is a lack of audio and video evidence; the government did release a heavily edited aerial surveillance video, but it only provoked more questions. The FBI agent alleged to have fired those first two shots, W. Joseph Astarita, was indicted for obstruction of justice and making false statements, but was acquitted. Astarita and the four other agents who attempted to conceal evidence were not placed on leave during the investigation into their actions. Some reports indicate that authorization for the operation came from executive “national command authority.”

The Finicum affair provides us with yet another example of the disparity between the responses of “our” government to those on the Right and the Left. Leftist violence from Occupy Wall Street to Black Lives Matter and Antifa is met with permissive silence, while patriots merely exercising their constitutional rights are deemed “White supremacists” and “domestic terrorists.” In December 2019, the House of Representatives of the Washington State Legislature labeled State Representative Matthew Shea a terrorist for his support of militiamen like Finicum. Leftist Weather Underground terrorists now rest on their laurels in academia. In fact, the son of two of these cop-killers, Chesa Boudin, part of a long line of Leftist legal elite, is now the District Attorney of San Francisco. He is a graduate of Yale Law School; the next time someone refers to that institution’s prestige, just remember the gap-toothed face of Georgia Governor-in-exile Stacey Abrams. Ted Kaczynski’s manifesto, collected in Technological Slavery, is readily available and critically acclaimed, while Anders Breivik’s is available only on dark corners of the Internet. Those who even viewed Brenton Tarrant’s video of the Christchurch shooting have been criminally prosecuted and imprisoned.

Go to Part 3.

3 Jess Walter, Ruby Ridge (New York: Harper Perennial, 2002).

4 Dick J. Reavis, The Ashes of Waco (Syracuse: Syracuse University Press, 1998).