Monday, March 30, 2020

Recent Advances in the Study of Human Differences: Implications of the Genomic Revolution

Editor’s note: This is the final installment of Devlin’s review of Murray’s Human Diversity.

 

Human Diversity concludes with a consideration of the genomic revolution currently unfolding.

 

Older Americans learned about genetics in Mendelian terms, in which each gene coded for some trait that was normally either dominant or recessive. The genome as a whole was thought of as analogous to a large jigsaw puzzle: once the entire genome was mapped, we could figure out which trait was encoded by which gene, and the result would be a full understanding of inheritance.

 

Even well before the completion of the Human Genome Project in 2003, researchers began to suspect that things would be rather more complicated than this, both because some traits are under the control of more than one gene (polygenicity) and because some genes are associated with more than one trait (pleiotropy).

 

As recently as 1999, one of the pioneers of genome-wide analysis made news by suggesting that autism might be under the control of fifteen or more genetic loci. That was thought an exceptionally high number at the time; today it is considered “quaintly low.” (275) Since genome-wide analysis became possible, it has been found that, for example, human height is influenced by the combined effects of around 100,000 different loci. Indeed, statistical correlations with height can be measured for around 62 percent of all genetic loci, although most of these probably have no causal effect. The word omnigenic has begun to appear in the literature. In short, Gregor Mendel got lucky with those pea plants of his back in the nineteenth century: he stumbled upon monogenic traits, which greatly simplified his interpretive task.

 

As for pleiotropy, a 2018 study looked at correlations between genetic loci affecting general cognitive function and 52 health-related traits. Statistically significant correlations were found for no fewer than 36 of these, many of which have no obvious relation to cognitive functioning. Such results could soon become typical.

 

The notion of a straightforward correspondence between traits and the genetic loci that “encode” them has accordingly been displaced by that of polygenic scores. To compute a person’s polygenic score for a given trait, one must first know which single nucleotide polymorphisms (SNPs) are statistically associated with that trait; these associations are identified by running a genome-wide association study (GWAS) on a large sample. One then genotypes the individual. At each SNP, every human being carries two alleles, one inherited from each parent, so the subject has 0, 1, or 2 copies of the trait-associated allele there. These counts are summed across all statistically significant SNPs, each weighted by its estimated effect size. The result is an estimate of how likely the individual is to exhibit the trait.
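
A minimal sketch in Python may make the arithmetic concrete. The SNP identifiers, weights, and genotypes below are invented for illustration; real scores sum over thousands to millions of SNPs, with weights estimated in a large GWAS:

    # Hypothetical GWAS effect weights: units added per copy of the effect allele.
    gwas_weights = {
        "rs0001": 0.12,
        "rs0002": -0.05,
        "rs0003": 0.08,
    }

    # One person's genotype: count of effect alleles (0, 1, or 2) at each SNP.
    genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

    def polygenic_score(weights, genotype):
        """Weighted sum of effect-allele counts across the scored SNPs."""
        return sum(weights[snp] * genotype[snp] for snp in weights)

    print(round(polygenic_score(gwas_weights, genotype), 2))  # 0.12*2 - 0.05*0 + 0.08*1 = 0.32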

 

Polygenic scores are useful to researchers because the causality runs in only one direction: personality, abilities, and social behavior cannot cause polygenic scores. Furthermore, they can predict from birth, even for late-onset phenotypes, and they have 100 percent test-retest reliability. They can also predict differences between family members, which twin studies cannot do. Polygenic scores are normally distributed, meaning it will eventually be possible to measure means and standard deviations for them as we do for IQ and other normally distributed traits.
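
The bell curve is no accident: a score that sums thousands of small, roughly independent allele effects is driven toward normality by the central limit theorem. A quick Python simulation (allele frequencies and effect sizes invented for illustration) shows the shape emerging:

    import random
    import statistics

    random.seed(0)
    n_snps = 2000
    freqs   = [random.uniform(0.05, 0.95) for _ in range(n_snps)]  # allele frequencies
    weights = [random.gauss(0, 0.01) for _ in range(n_snps)]       # small per-SNP effects

    def simulated_score():
        # Genotype at each SNP: effect alleles present in two draws (0, 1, or 2).
        return sum(w * ((random.random() < f) + (random.random() < f))
                   for f, w in zip(freqs, weights))

    scores = [simulated_score() for _ in range(1000)]
    mu, sd = statistics.mean(scores), statistics.stdev(scores)

    # Crude text histogram by standard-deviation band: the bell shape appears.
    for lo in range(-3, 3):
        n = sum(1 for s in scores if lo <= (s - mu) / sd < lo + 1)
        print(f"{lo:+d} SD: {'#' * (n // 10)}")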

 

Medical research is the first domain in which polygenic scores have begun proving useful:

In 2010, two technical articles in the US National Library of Medicine contained the phrase “polygenic score” or “polygenic risk score” in the title or abstract. By 2015, that number was up to 47. In 2018, it was 171. (293)

But the effects of the new technique are unlikely to be limited to medicine. For example, behavioral geneticist Robert Plomin expects polygenic scores to revolutionize psychology by allowing professionals to estimate patients’ genetic risk for disorders before those disorders develop, to create more precise treatments, and to shift the focus from treatment to prevention.

 

Not everyone is equally impressed by the advance represented by polygenic scores. The chief objection is that statistical correlation is not causation: a trait can be heritable in the statistical sense without any specific genetic mechanism underlying it. For example:

Marital status is highly heritable—72 percent in one large-sample twin study. The heritability of divorce specifically has been estimated at around 50 percent. Because divorce is heritable, we can be sure that a GWAS will identify a large number of SNPs that are significantly associated with divorce. Suppose, for example, that some of the SNPs are related to the personality trait “irritability.” Isn’t that a plausible causal link to divorce? But we can’t be sure even of that. Pervasive pleiotropy means the SNPs related to irritability are also related to a number of other traits that are just as plausibly a cause of divorce—or, conversely, might be related to resistance to divorce. Omnigenetics and pleiotropy work to create a causal map so sprawling and indeterminate that it is reasonable to conclude GWAS has taught us nothing new about the causes of divorce and finding more SNPs won’t teach us anything important. (284)

In other words, as one critic of Plomin’s claims has written: “Marriage and divorce are heritable, but they do not have a specific genetic etiology.” (284)

More generally, the critics note that

all complex human traits result from a combination of causes. If these causes interact, it is impossible to assign quantitative values to the fraction of a trait due to each, just as we cannot say how much of the area of a rectangle is due, separately, to each of its two dimensions. (285)

These critics are not merely expressing caution

about how many complications remain unresolved. They aren’t just saying that it’s early days yet and that we shouldn’t get ahead of the data. They are saying that when it comes to complex traits, the GWA [genome-wide association] enterprise is futile. (285)

Moreover, “complex traits” like divorce could be influenced by completely different genetically influenced traits in different people. Irritability presumably makes one more likely to get divorced, but so does a propensity for philandering, and genes influencing one such trait may not be linked to the other. This reality has led some evolutionary psychologists to emphasize that the traits most worth studying are those for which there is evidence of direct natural selection: traits like intelligence and the various personality systems (here, p. 264).
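
A toy simulation in Python (all effect sizes invented) makes the difficulty vivid: a single SNP that raises both irritability and a second, divorce-protective trait yields a net SNP-divorce association that identifies neither causal pathway:

    import random
    random.seed(1)

    def snp_divorce_correlation(n=20000):
        data = []
        for _ in range(n):
            g = (random.random() < 0.5) + (random.random() < 0.5)  # genotype: 0, 1, or 2
            irritability = 0.4 * g + random.gauss(0, 1)  # pathway 1: promotes divorce
            steadiness   = 0.3 * g + random.gauss(0, 1)  # pathway 2: protects against it
            divorce = 0.5 * irritability - 0.5 * steadiness + random.gauss(0, 1)
            data.append((g, divorce))
        mg = sum(g for g, _ in data) / n
        md = sum(d for _, d in data) / n
        cov = sum((g - mg) * (d - md) for g, d in data) / n
        vg = sum((g - mg) ** 2 for g, _ in data) / n
        vd = sum((d - md) ** 2 for _, d in data) / n
        return cov / (vg * vd) ** 0.5

    # Two substantial causal pathways, yet the observable correlation is tiny (about 0.03).
    print(round(snp_divorce_correlation(), 3))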

 

Insofar as science is about establishing causality, such skepticism about complex traits may well be correct. But, as Murray points out, a ‘soft’ science such as sociology “has never been about causal pathways and perhaps never will be. It’s about explaining enough variance to make useful probabilistic statements.” (286) For that purpose polygenic scores are going to be useful, and thus, Murray predicts, they will inevitably be used:

By the end of the 2020s, it will be widely accepted that quantitative studies of social behavior that don’t use polygenic scores usually aren’t worth reading. (286)

When large databases with genomic information are easily available, it will be akin to professional malpractice to conduct an analysis of social behavior that does not include genomic information. Few quantitative social scientists are going to write such analyses because they won’t get past peer review. The question “Why didn’t you take genetics into account?” will be universal and will have no good answer. (287)
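
In practice, “taking genetics into account” can be as simple as entering a polygenic score as one more covariate in an ordinary regression. A minimal sketch with simulated data (variable names and effect sizes are invented) shows the extra variance explained:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    pgs = rng.normal(size=n)          # polygenic score (standardized)
    environment = rng.normal(size=n)  # a measured environmental variable
    outcome = 0.4 * pgs + 0.4 * environment + rng.normal(size=n)

    def r_squared(X, y):
        """R-squared of an ordinary least-squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    print(r_squared(environment[:, None], outcome))                 # ~0.12: environment alone
    print(r_squared(np.column_stack([environment, pgs]), outcome))  # ~0.24: environment + PGS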

Genome-wide complex trait analysis, or GCTA, is another new technique with uses and limitations similar to those of polygenic scoring. These techniques are the principal reason for Murray’s confidence that the days of enforced Lysenkoist orthodoxy are numbered. He expects genomic analysis to revolutionize physical anthropology, economics, political science, and social policy, as well as psychology and sociology, in part by permitting far more rigorous studies of environmental effects.

However, such optimism may be misplaced given the pronounced leftist proclivities of much social science, particularly in highly politicized areas like sociology. Imagine the difficulty of publishing a study in a mainstream academic journal in which race is a variable and polygenic scores are used for variables like criminality or intelligence.

*   *   *

In his final chapter, Murray reflects on the reasons behind the ferocity of our intellectual elite’s devotion to social constructivist dogma. 

This is not a matter which can be decided by means of data and controlled experiments, of course; and by the same token, the empirical arguments of the previous chapters hold good regardless of what one thinks of Murray’s remarks on this subject.

 

The premise concealed behind all the furious insistence on egalitarian dogma is a “conflation of intellectual ability and the professions it enables with human worth.” The elites are smart, and smart people are strongly attached to their own intelligence and the things it enables them to do. Many of them imagine, therefore, that telling another group of people that nature gave them a lower average IQ is tantamount to a counsel of despair, as though this would make their lives less worth living. But this is not the way ordinary working people see matters.

 

As Murray notes, these natural differences were formerly discussed within the moral vocabulary of Christianity. God calls different men to different stations for reasons of his own—reasons that are inscrutable to human understanding; and it is rebellion against the Divine Will not to accept such providential arrangements. But our human value and eternal destiny are something else entirely: the king has no advantage over the peasant on the Day of Judgment. It behooves even the king, therefore, to retain a sense of humility and dependence on God’s unearned grace.

Today’s elite, having lost its Christian moorings, has lost any way of dealing with natural inequality. Its members seem to believe that high-IQ professionals are really “better” than working people in some fundamental sense, rather than simply more advantageously placed. In order for this situation not to outrage their moral sense, they must think of high status as something equally available to all at birth. Such a conception implies that they owe their exalted abilities and status to personal effort, while their attitude “toward ordinary Americans is too often covertly condescending if they are people of color and openly disparaging if they are white.” (316)

Under their leadership, what Murray considers the four chief wellsprings of human flourishing—family, community, vocation, and faith—have largely dried up for the rest of society.

 

But the evidence presented in Human Diversity indicates that our cognitive and social elites are merely the winners of a genetic lottery. They stand in far greater need of humility concerning their own accomplishments than “disadvantaged minorities” do of social programs. As Murray notes, within living memory “it was considered un-American to be a snob, to look down on other Americans, and to think you were better than anyone else.” Perhaps the most important consequence of the impending collapse of social constructivism will be the removal of an essential prop from the unbearable self-conceit of the Western elite. Then we can turn our attention to repairing some of the damage done on their watch to family, community, vocation and faith.
