Friday, September 16, 2005

MISUNDERESTIMATED:

1993, Summer - Mocked for trying to unseat Governor Ann Richards.

1994, Nov. – Defeats Governor Ann Richards to become Governor of Texas.

1998, Nov. – Becomes first Texas Governor to win re-election.

1999, Fall – Mocked for announcing a run for the Presidency.

2000, Winter – The media nominates John McCain for the Republican Party.

2000, Oct. – Mocked by media before three debates with Al Gore.

2000, Oct. – Wins all three debates against Al Gore.

2000, Nov. – Dirty trick unleashed by Gore Campaign and media.

2000, Nov. – Dan Rather calls Florida for Gore one hour before polls close.

2000, Nov. – Bush wins the election when Katherine Harris certifies Florida’s results.

2000, Nov. – Democrats try to steal election through courts.

2000, Dec. – Supreme Court stops Democrats from stealing the election.

2001, Spring – Bush gets his first round of tax cuts passed.

2001, Summer – Jim Jeffords hands Senate control to Tom Daschle and the Democrats.

2001, Aug. – Bush Job Approval hits all-time low according to lib media polls.

2001, Sep. – 9/11/01 terrorist attacks destroy WTC and define Bush Presidency.

2001, Sep. – Bush has bullhorn moment at the WTC.

2001, Sep. – Bush galvanizes the nation in his speech before a joint session of Congress.

2001, Oct. – Democrats say Bush is dragging his feet on responding to the attacks.

2001, Oct. – The U.S. Military begins destroying the Taliban the next day.

2001, Oct. – Democrats say the War in Afghanistan is a quagmire in week one.

2002, Spring – Media and Democrats say Bush knew about 9-11 before it happened.

2002, Summer – Bush begins debate on removing Saddam Hussein.

2002, Sep. – Democrats say Bush is dragging his feet on dealing with Iraq.

2002, Sep. - Democrats demand Homeland Security Department.

2002, Nov. – Bush bets his popularity and the GOP wins back Senate, gains in House.

2002, Nov. – Homeland Security Act of 2002 passes to create new department.

2002, Dec. – Congress passes the Iraq War Resolution. Most Democrats support it.

2003, Mar. – War in Iraq begins.

2003, Apr. – Democrats call Iraq a quagmire one week after war starts.

2003, Apr. – Baghdad falls.

2003, Apr. – Media focuses on looting of Museum. Turns out most artifacts are fine.

2003, May – Bush gets 2nd round of tax cuts passed with the GOP Senate he helped elect.

2003, July – ‘Bad tan’ Joe Wilson becomes media darling when he lies about Niger.

2003, July – Uday and Qusay take permanent dirt nap.

2003, Aug. – Foreign terrorists begin car bombing. Media calls the foreigners ‘insurgents.’

2003, Sep. – Bush Job Approval hits all-time low, lower than the previous low in Aug. 2001.

2003, Dec. – Saddam Hussein is captured near Tikrit. Democrats cry.

2004, Jan. – Dean implodes. Kerry becomes ‘electable’ savior.

2004, Jan. – David Kay becomes media darling with his WMD testimony.

2004, Jan. – Paul ‘mumbles’ O’Neill gets 60 Minutes red carpet.

2004, Jan. – Bush Job Approval hits new all-time low according to lib media polls.

2004, Feb. – Richard Clarke gets 60 Minutes red carpet. The horror. The horror.

2004, Mar. – Bush Job Approval hits new all-time low according to lib media polls.

2004, May – Abu Ghraib photos are paraded on 60 Minutes Wednesday.

2004, May – Bush Job Approval hits new all-time low according to lib media polls.

2004, June – 9-11 Commission becomes platform for the Jersey girls to bash Bush.

2004, Summer – Fahrenheit 9/11 anti-American propaganda film becomes media hit.

2004, Aug. – Anti-Bush liberals led by the very fat Michael Moore march in NYC.

2004, Sep. – Bush ends convention with a speech that crushes Democrats’ hopes.

2004, Sep. – 60 Minutes Wednesday gives America Memo-gate with a story on Bush.

2004, Sep. – 1,000th soldier dies in Iraq War. Democrats and media celebrate.

2004, Oct. – Bush mocked for poor performances against John Kerry in debates.

2004, Oct. – Afghanistan holds a successful First Presidential Election.

2004, Oct. - NY Times puts out fake story on missing ammo in Iraq.

2004, Oct. – Osama Bin Laden endorses John Kerry.

2004, Nov. – Fake Exit Polls produced by the AP on Election Day to discourage GOP.

2004, Nov. – Bush Wins Re-election 51 – 48 with over 62 Million votes.

2004, Nov. – GOP makes huge gains in both the Senate and the House.

2004, Nov. – Insane liberals claim the election in Ohio was stolen.

2005, Jan. – Sen. Barbara Boxer embarrasses herself by protesting the Election Results.

2005, Jan. – Democrats say the election in Iraq will be a blood bath.

2005, Jan. – 8 Million Iraqis vote. Their turnout nearly matches ours.

2005, Apr. – Lebanon defies Syria. Moves toward kicking them out.

2005, Spring – Terrorists begin large car bombing campaign in Iraq. Democrats celebrate.

2005, July – Bush Job Approval hits new all-time low according to lib media polls.

2005, July – Media puts Karl Rove in jail. Media resurrects ‘Bad tan’ Joe Wilson.

2005, Aug. – Media gives bullhorn to Cindy Sheehan.

2005, Aug. – Iraqis create their First Constitution.

2005, Aug. – Hurricane Katrina blows Cindy Sheehan off the map.

2005, Sep. – Media and Democrats blame Bush for the delay in the response.

2005, Sep. – Bush Job Approval hits new all-time low according to lib media polls.

2005, Sep. – Bush sends General Honore to take control. The military succeeds.

2005, Sep. – Bush delivers speech that uplifts America and demoralizes Democrats.

2005, Sep. - On the verge of having John Roberts confirmed. The Teflon Bork.

IV

Elites throughout the West are living a lie, basing the futures of their societies on the assumption that all groups of people are equal in all respects. Lie is a strong word, but justified. It is a lie because so many elite politicians who profess to believe it in public do not believe it in private. It is a lie because so many elite scholars choose to ignore what is already known and choose not to inquire into what they suspect. We enable ourselves to continue to live the lie by establishing a taboo against discussion of group differences.

The taboo is not perfect—otherwise, I would not have been able to document this essay—but it is powerful. Witness how few of Harvard’s faculty who understood the state of knowledge about sex differences were willing to speak out during the Summers affair. In the public-policy debate, witness the contorted ways in which even the opponents of policies like affirmative action frame their arguments so that no one can accuse them of saying that women are different from men or blacks from whites. Witness the unwillingness of the mainstream media to discuss group differences without assuring readers that the differences will disappear when the world becomes a better place.

The taboo arises from an admirable idealism about human equality. If it did no harm, or if the harm it did were minor, there would be no need to write about it. But taboos have consequences.

The nature of many of the consequences must be a matter of conjecture because people are so fearful of exploring them.76 Consider an observation furtively voiced by many who interact with civil servants: that government is riddled with people who have been promoted to their level of incompetence because of pressure to have a staff with the correct sex and ethnicity in the correct proportions and positions. Are these just anecdotes? Or should we be worrying about the effects of affirmative action on the quality of government services?77 It would be helpful to know the answers, but we will not so long as the taboo against talking about group differences prevails.

How much damage has the taboo done to the education of children? Christina Hoff Sommers has argued that willed blindness to the different developmental patterns of boys and girls has led many educators to see boys as aberrational and girls as the norm, with pervasive damage to the way our elementary and secondary schools are run.78 Is she right? Few have been willing to pursue the issue lest they be required to talk about innate group differences. Similar questions can be asked about the damage done to medical care, whose practitioners have only recently begun to acknowledge the ways in which ethnic groups respond differently to certain drugs.79

How much damage has the taboo done to our understanding of America’s social problems? The part played by sexism in creating the ratio of males to females on mathematics faculties is not the ratio we observe but what remains after adjustment for male-female differences in high-end mathematical ability. The part played by racism in creating different outcomes in black and white poverty, crime, and illegitimacy is not the raw disparity we observe but what remains after controlling for group characteristics. For some outcomes, sex or race differences nearly disappear after a proper analysis is done. For others, a large residual difference remains.80 In either case, open discussion of group differences would give us a better grasp on where to look for causes and solutions.



What good can come of raising this divisive topic? The honest answer is that no one knows for sure. What we do know is that the taboo has crippled our ability to explore almost any topic that involves the different ways in which groups of people respond to the world around them—which means almost every political, social, or economic topic of any complexity.

Thus my modest recommendation, requiring no change in laws or regulations, just a little more gumption. Let us start talking about group differences openly—all sorts of group differences, from the visuospatial skills of men and women to the vivaciousness of Italians and Scots. Let us talk about the nature of the manly versus the womanly virtues. About differences between Russians and Chinese that might affect their adoption of capitalism. About differences between Arabs and Europeans that might affect the assimilation of Arab immigrants into European democracies. About differences between the poor and non-poor that could inform policy for reducing poverty.

Even to begin listing the topics that could be enriched by an inquiry into the nature of group differences is to reveal how stifled today’s conversation is. Besides liberating that conversation, an open and undefensive discussion would puncture the irrational fear of the male-female and black-white differences I have surveyed here. We would be free to talk about other sexual and racial differences as well, many of which favor women and blacks, and none of which is large enough to frighten anyone who looks at them dispassionately.

Talking about group differences does not require any of us to change our politics. For every implication that the Right might seize upon (affirmative-action quotas are ill-conceived), another gives fodder to the Left (innate group differences help rationalize compensatory redistribution by the state).81 But if we do not need to change our politics, talking about group differences obligates all of us to renew our commitment to the ideal of equality that Thomas Jefferson had in mind when he wrote as a self-evident truth that all men are created equal. Steven Pinker put that ideal in today’s language in The Blank Slate, writing that “Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group.”82

Nothing in this essay implies that this moral principle has already been realized or that we are powerless to make progress. In elementary and secondary education, many outcomes are tractable even if group differences in ability remain unchanged. Dropout rates, literacy, and numeracy are all tractable. School discipline, teacher performance, and the quality of the curriculum are tractable. Academic performance within a given IQ range is tractable. The existence of group differences need not and should not discourage attempts to improve schooling for millions of American children who are now getting bad educations.

In university education and in the world of work, overall openness of opportunity has been transformed for the better over the last half-century. But the policies we now have in place are impeding, not facilitating, further progress. Creating double standards for physically demanding jobs so that women can qualify ensures that men in those jobs will never see women as their equals. In universities, affirmative action ensures that the black-white difference in IQ in the population at large is brought onto the campus and made visible to every student. The intentions of their designers notwithstanding, today’s policies are perfectly fashioned to create separation, condescension, and resentment—and so they have done.

The world need not be that way. Any university or employer that genuinely applied a single set of standards for hiring, firing, admitting, and promoting would find that performance across different groups really is distributed indistinguishably. But getting to that point nationwide will require us to jettison an apparatus of laws, regulations, and bureaucracies that has been 40 years in the making. That will not happen until the conversation has opened up. So let us take one step at a time. Let us stop being afraid of data that tell us a story we do not want to hear, stop the name-calling, stop the denial, and start facing reality.

CHARLES MURRAY is the W.H. Brady Scholar in Freedom and Culture at the American Enterprise Institute. His previous contributions to COMMENTARY, available online, include “The Bell Curve and Its Critics” (May 1995, with a subsequent exchange in the August 1995 issue).


III

Turning to race, we must begin with the fraught question of whether it even exists, or whether it is instead a social construct. The Harvard geneticist Richard Lewontin originated the idea of race as a social construct in 1972, arguing that the genetic differences across races were so trivial that no scientist working exclusively with genetic data would sort people into blacks, whites, or Asians. In his words, “racial classification is now seen to be of virtually no genetic or taxonomic significance.”25

Lewontin’s position, which quickly became a tenet of political correctness, carried with it a potential means of being falsified. If he was correct, then a statistical analysis of genetic markers would not produce clusters corresponding to common racial labels.

In the last few years, that test has become feasible, and now we know that Lewontin was wrong.26 Several analyses have confirmed the genetic reality of group identities going under the label of race or ethnicity.27 In the most recent, published this year, all but five of the 3,636 subjects fell into the cluster of genetic markers corresponding to their self-identified ethnic group.28 When a statistical procedure, blind to physical characteristics and working exclusively with genetic information, classifies 99.9 percent of the individuals in a large sample in the same way they classify themselves, it is hard to argue that race is imaginary.
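To see what such a procedure involves, consider a minimal sketch on simulated data. It is only an illustration of the general design, not the cited study: the marker frequencies and sample sizes are invented, and k-means stands in for the model-based clustering methods such studies actually use.

```python
# Minimal sketch on simulated data; the cited studies used real genotypes and
# model-based clustering, so treat this only as an illustration of the design.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_per_group, n_markers = 500, 200

# Two hypothetical populations whose allele frequencies differ modestly at many loci
freq_a = rng.uniform(0.2, 0.8, n_markers)
freq_b = np.clip(freq_a + rng.normal(0, 0.1, n_markers), 0.05, 0.95)

# Genotypes coded as 0/1/2 copies of an allele, drawn from each population's frequencies
genotypes = np.vstack([
    rng.binomial(2, freq_a, size=(n_per_group, n_markers)),
    rng.binomial(2, freq_b, size=(n_per_group, n_markers)),
])
self_id = np.array([0] * n_per_group + [1] * n_per_group)

# Cluster using genetic information only, then compare with self-identified labels
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(genotypes)
agreement = max((clusters == self_id).mean(), (clusters != self_id).mean())
print(f"cluster/self-identification agreement: {agreement:.3f}")
```

The point of the exercise is only that an algorithm given nothing but marker data can recover group labels; the real studies do this with thousands of markers and far more sophisticated models.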

Homo sapiens actually falls into many more interesting groups than the bulky ones known as “races.”29 As new findings appear almost weekly, it seems increasingly likely that we are just at the beginning of a process that will identify all sorts of genetic differences among groups, whether the groups being compared are Nigerian blacks and Kenyan blacks, lawyers and engineers, or Episcopalians and Baptists. At the moment, the differences that are obviously genetic involve diseases (Ashkenazi Jews and Tay-Sachs disease, black Africans and sickle-cell anemia, Swedes and hemochromatosis). As time goes on, we may yet come to understand better why, say, Italians are more vivacious than Scots.

Out of all the interesting and intractable differences that may eventually be identified, one in particular remains a hot button like no other: the IQ difference between blacks and whites. What is the present state of our knowledge about it?

There is no technical dispute on some of the core issues. In the aftermath of The Bell Curve, the American Psychological Association established a task force on intelligence whose report was published in early 1996.30 The task force reached the same conclusions as The Bell Curve on the size and meaningfulness of the black-white difference. Historically, it has been about one standard deviation31 in magnitude among subjects who have reached adolescence;32 cultural bias in IQ tests does not explain the difference; and the tests are about equally predictive of educational, social, and economic outcomes for blacks and whites. However controversial such assertions may still be in the eyes of the mainstream media, they are not controversial within the scientific community.

The most important change in the state of knowledge since the mid-1990’s lies in our increased understanding of what has happened to the size of the black-white difference over time. Both the task force and The Bell Curve concluded that some narrowing had occurred since the early 1970’s. With the advantage of an additional decade of data, we are now able to be more precise: (1) The black-white difference in scores on educational achievement tests has narrowed significantly. (2) The black-white convergence in scores on the most highly “g-loaded” tests—the tests that are the best measures of cognitive ability—has been smaller, and the gap may be unchanged since the first tests were administered 90 years ago.



With regard to the difference in educational achievement, the narrowing of scores on major tests occurred in the 1970’s and 80’s. In the case of the SAT, the gaps in the verbal and math tests as of 1972 were 1.24 and 1.26 standard deviations respectively.33 By 1991, when the gaps were smallest (they have risen slightly since then), those numbers had dropped by .37 and .35 standard deviations.
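A gap expressed “in standard deviations” is simply an effect size: the difference between two group means divided by the standard deviation of the test. A minimal sketch with invented numbers (these are not the actual SAT means or standard deviations) shows the arithmetic:

```python
# Minimal sketch of what a gap "in standard deviations" means; the means and SD
# below are invented for illustration, not the actual 1972 or 1991 SAT figures.
def standardized_gap(mean_a, mean_b, pooled_sd):
    """Difference between group means expressed in pooled-standard-deviation units."""
    return (mean_a - mean_b) / pooled_sd

gap_then = standardized_gap(530, 406, 100)   # hypothetical means -> gap of 1.24 SD
gap_later = standardized_gap(515, 428, 100)  # hypothetical later means -> 0.87 SD
print(f"{gap_then:.2f} -> {gap_later:.2f}, a narrowing of {gap_then - gap_later:.2f} SD")
```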

The National Assessment of Educational Progress (NAEP), which is not limited to college-bound students, is preferable to the SAT for estimating nationally representative trends, but the story it tells is similar.34 Among students ages nine, thirteen, and seventeen, the black-white differences in math as of the first NAEP test in 1973 were 1.03, 1.29, and 1.24 standard deviations respectively. For nine-year-olds, the difference hit its all-time low of .73 standard deviations in 2004, a drop of .30 standard deviations. But almost all of that convergence had been reached by 1986, when the gap was .78 standard deviations. For thirteen-year-olds, the gap dropped by .45 standard deviations, reaching its low in 1986. For seventeen-year-olds, the gap dropped by .52 standard deviations, reaching its low in 1990.

In the reading test, the comparable gaps for ages nine, thirteen, and seventeen as of the first NAEP test in 1971 were 1.12, 1.17, and 1.25 standard deviations. Those gaps had shrunk by .38, .62, and .68 standard deviations respectively at their lowest points in 1988.35 They have since remained effectively unchanged.

An analysis by Larry Hedges and Amy Nowell uses a third set of data, examining the trends for high-school seniors by comparing six large databases spanning 1965 to 1992. The black-white difference on a combined measure of math, vocabulary, and reading fell from 1.18 to .82 standard deviations in that time, a reduction of .36 standard deviations.36

So black and white academic achievement converged significantly in the 1970’s and 1980’s, typically by more than a third of a standard deviation, and since then has stayed about the same.37 What about convergence in tests explicitly designed to measure IQ rather than academic achievement?38 The ambiguities in the data leave two defensible positions. The first is that the IQ difference is about one standard deviation, effectively unchanged since the first black-white comparisons 90 years ago. The second is that harbingers of a narrowing difference are starting to emerge. I cannot settle the argument here, but I can convey some sense of the uncertainty.



The case for an unchanged black-white IQ difference is straightforward. If you take all the black-white differences on IQ tests from the first ones in World War I up to the present, there is no statistically significant downward trend. Of course the results vary, because tests vary in the precision with which they measure the general mental factor (g) and samples vary in their size and representativeness. But results continue to center around a black-white difference of about 1.0 to 1.1 standard deviations through the most recent data.39

The case for a reduction has two important recent results to work with. The first is from the 1997 re-norming of the Armed Forces Qualification Test (AFQT), which showed a black-white difference of .97 standard deviations.40 Since the typical difference on paper-and-pencil IQ tests like the AFQT has been about 1.10 standard deviations, the 1997 results represent noticeable improvement.41 The second positive result comes from the 2003 standardization sample for the Wechsler Intelligence Scale for Children (WISC-IV), which showed a difference of .78 standard deviations, as against the 1.0 difference that has been typical for individually administered IQ tests.42

One cannot draw strong conclusions from two data points. Those who interpret them as part of an unchanging overall pattern can cite another recent result, from the 2001 standardization of the Woodcock-Johnson intelligence test. In line with the conventional gap, it showed an overall black-white difference of 1.05 standard deviations and, for youths aged six to eighteen, a difference of .99 standard deviations.43

There is more to be said on both sides of this issue, but nothing conclusive.44 Until new data become available, you may take your choice. If you are a pessimist, the gap has been unchanged at about one standard deviation. If you are an optimist, the IQ gap has decreased by a few points, but it is still close to one standard deviation. The clear and substantial convergence that occurred in academic tests has at best been but dimly reflected in IQ scores, and at worst not reflected at all.



Whether we are talking about academic achievement or about IQ, are the causes of the black-white difference environmental or genetic? Everyone agrees that environment plays a part. The controversy is about whether biology is also involved.

It has been known for many years that the obvious environmental factors such as income, parental occupation, and schools explain only part of the absolute black-white difference and none of the relative difference. Black and white students from affluent neighborhoods are separated by as large a proportional gap as are blacks and whites from poor neighborhoods.45 Thus the most interesting recent studies of environmental causes have worked with cultural explanations instead of socioeconomic status.46

One example is Black American Students in an Affluent Suburb: A Study of Academic Disengagement (2003) by the Berkeley anthropologist John Ogbu, who went to Shaker Heights, Ohio, to explore why black students in an affluent suburb should lag behind their white peers.47 Another is Black Rednecks and White Liberals (2005) by Thomas Sowell, who makes the case that what we think of as the dysfunctional aspects of urban black culture are a legacy not of slavery but of Southern and rural white “cracker” culture.48 Both Ogbu and Sowell describe ingrained parental behaviors and student attitudes that must impede black academic performance. These cultural influences often cut across social classes.

From a theoretical standpoint, the cultural explanations offer fresh ways of looking at the black-white difference at a time when the standard socioeconomic explanations have reached a dead end. From a practical standpoint, however, the cultural explanations point to a cause of the black-white difference that is as impervious to manipulation by social policy as causes rooted in biology. If there is to be a rapid improvement, some form of mass movement with powerful behavioral consequences would have to occur within the black community. Absent that, the best we can hope for is gradual cultural change that is likely to be measured in decades.

This brings us to the state of knowledge about genetic explanations. “There is not much direct evidence on this point,” said the American Psychological Association’s task force dismissively, “but what little there is fails to support the genetic hypothesis.”49 Actually, there is no direct evidence at all, just a wide variety of indirect evidence, almost all of which the task force chose to ignore.50

As it happens, a comprehensive survey of that evidence, and of the objections to it, appeared this past June in the journal Psychology, Public Policy, and Law. There, J. Philippe Rushton and Arthur Jensen co-authored a 60-page article entitled “Thirty Years of Research on Race Differences in Cognitive Ability.”51 It incorporates studies of East Asians as well as blacks and whites and concludes that the source of the black-white-Asian difference is 50- to 80-percent genetic. The same issue of the journal includes four commentaries, three of them written by prominent scholars who oppose the idea that any part of the black-white difference is genetic.52 Thus, in one place, you can examine the strongest arguments that each side in the debate can bring to bear.

Rushton and Jensen base their conclusion on ten categories of evidence that are consistent with a model in which both environment and genes cause the black-white difference and inconsistent with a model that requires no genetic contribution.53 I will not try to review their argument here, or the critiques of it. All of the contributions can be found on the Internet, and can be understood by readers with a grasp of basic statistical concepts.54

For those who consider it important to know what percentage of the IQ difference is genetic, a methodology that would do the job is now available. In the United States, few people classified as black are actually of 100-percent African descent (the average American black is thought to be about 20-percent white).55 To the extent that genes play a role, IQ will vary by racial admixture. In the past, studies that have attempted to test this hypothesis have had no accurate way to measure the degree of admixture, and the results have been accordingly muddy.56 The recent advances in using genetic markers solve that problem. Take a large sample of racially diverse people, give them a good IQ test, and then use genetic markers to create a variable that no longer classifies people as “white” or “black,” but along a continuum. Analyze the variation in IQ scores according to that continuum. The results would be close to dispositive.57
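A minimal sketch of that design, run on simulated data only, looks like the following. The admixture values, the 5-point slope, and the noise level are all invented for illustration; nothing here is an estimate from any real sample. The point is only the statistical setup: a continuous admixture variable replaces the binary racial label, and IQ is regressed on it.

```python
# Minimal sketch of the proposed design on simulated data; the 5-point slope wired
# in below is arbitrary and is NOT an estimate from any real study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
admixture = rng.uniform(0, 1, n)                   # continuum estimated from genetic markers
iq = 100 + 5 * admixture + rng.normal(0, 15, n)    # invented relationship plus noise

model = sm.OLS(iq, sm.add_constant(admixture)).fit()
print(model.params)      # the slope is the association of IQ with the admixture continuum
print(model.conf_int())  # a narrow interval is what would make such a result informative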



None of this is important for social policy, however, where the issue is not the source of the difference but its intractability. Much of the evidence reviewed by Rushton and Jensen bears on what we can expect about future changes in the black-white IQ difference. My own thinking on this issue is shaped by the relationship of the difference to a factor I have already mentioned—“g”—and to the developing evidence for g’s biological basis.

When you compare black and white mean scores on a battery of subtests, you do not find a uniform set of differences; nor do you find a random assortment. The size of the difference varies systematically by type of subtest. Asked to predict which subtests show the largest difference, most people will think first of ones that have the most cultural content and are the most sensitive to good schooling. But this natural expectation is wrong. Some of the largest differences are found on subtests that have little or no cultural content, such as ones based on abstract designs.

As long ago as 1927, Charles Spearman, the pioneer psychometrician who discovered g, proposed a hypothesis to explain the pattern: the size of the black-white difference would be “most marked in just those [subtests] which are known to be saturated with g.”58 In other words, Spearman conjectured that the black-white difference would be greatest on tests that were the purest measures of intelligence, as opposed to tests of knowledge or memory.

A concrete example illustrates how Spearman’s hypothesis works. Two items in the Wechsler and Stanford-Binet IQ tests are known as “forward digit span” and “backward digit span.” In the forward version, the subject repeats a random sequence of one-digit numbers given by the examiner, starting with two digits and adding another with each iteration. The subject’s score is the number of digits that he can repeat without error on two consecutive trials. Digits-backward works exactly the same way except that the digits must be repeated in the opposite order.

Digits-backward is much more g-loaded than digits-forward. Try it yourself and you will see why. Digits-forward is a straightforward matter of short-term memory. Digits-backward makes your brain work much harder.59
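For readers who want to see the procedure spelled out, here is a minimal sketch of a digit-span task in code. The scoring convention used (a length is credited only if both trials at that length are passed) is my reading of the rule described above, and the stand-in “subjects” are hypothetical; actual Wechsler administration rules differ in detail.

```python
# Minimal sketch of a digit-span task; the scoring rule here is one common
# convention, not the exact Wechsler protocol, and the "subjects" are stand-ins.
import random

def digit_span_score(recall, backward=False, max_len=9, trials_per_len=2, seed=0):
    """Administer a simulated digit-span task.

    `recall` stands in for the subject: it takes the presented digit list and
    returns the subject's response list (reversed, if the task is backward).
    """
    rng = random.Random(seed)
    score = 0
    for length in range(2, max_len + 1):
        correct = 0
        for _ in range(trials_per_len):
            digits = [rng.randint(0, 9) for _ in range(length)]
            target = list(reversed(digits)) if backward else digits
            if recall(digits) == target:
                correct += 1
        if correct == trials_per_len:
            score = length      # passed every trial at this length
        else:
            break               # stop at the first failed length
    return score

# Hypothetical "subjects": perfect recall up to 6 digits forward, 4 backward.
print(digit_span_score(lambda d: d if len(d) <= 6 else []))                                  # -> 6
print(digit_span_score(lambda d: list(reversed(d)) if len(d) <= 4 else [], backward=True))   # -> 4
```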

The black-white difference in digits-backward is about twice as large as the difference in digits-forward.60 It is a clean example of an effect that resists cultural explanation. It cannot be explained by differential educational attainment, income, or any other socioeconomic factor. Parenting style is irrelevant. Reluctance to “act white” is irrelevant. Motivation is irrelevant. There is no way that any of these variables could systematically encourage black performance in digits-forward while depressing it in digits-backward in the same test at the same time with the same examiner in the same setting.61

In 1980, Arthur Jensen began a research program for testing Spearman’s hypothesis. In his book The g Factor (1998), he summarized the results from seventeen independent sets of data, derived from 149 psychometric tests. They consistently supported Spearman’s hypothesis.62 Subsequent work has added still more evidence.63 Debate continues about what the correlation between g-loadings and the size of the black-white difference means, but the core of Spearman’s original conjecture, that a sizable correlation would be found to exist, has been confirmed.64

During the same years that Jensen was investigating Spearman’s hypothesis, progress was also being made in understanding g. For decades, psychometricians had tried to make g go away. Confident that intelligence must be more complicated than a single factor, they strove to replace g with measures of uncorrelated mental skills. They thereby made valuable contributions to our understanding of intelligence, which really does manifest itself in different ways and with different profiles, but getting rid of g proved impossible. No matter how the data were analyzed, a single factor kept dominating the results.65

By the 1980’s, the robustness and value of g as an explanatory construct were broadly accepted among psychometricians, but little was known about its physiological basis.66 As of 2005, we know much more. It is now established that g is by far the most heritable component of IQ.67 A variety of studies have found correlations between g and physiological phenomena such as brain-evoked potentials, brain pH levels, brain glucose metabolism, nerve-conduction velocity, and reaction time.68 Most recently, it has been determined that a highly significant relationship exists between g and the volume of gray matter in specific areas of the frontal cortex, and that the magnitude of the volume is under tight genetic control.69 In short, we now know that g captures something in the biology of the brain.



So Spearman’s basic conjecture was correct—the size of the black-white difference and g-loadings are correlated—and g represents a biologically grounded and highly heritable cognitive resource. When those two observations are put together, a number of characteristics of the black-white difference become predictable, correspond with phenomena we have observed in data, and give us reason to think that not much will change in the years to come.70

One implication is that black-white convergence on test scores will be greatest on tests that are least g-loaded. Literacy is the obvious example: people with a wide range of IQ’s can be taught to read competently, and it is the reading test of the NAEP in which convergence has reached its closest point (.55 standard deviations in the 1988 test). More broadly, the confirmation of Spearman’s hypothesis explains why the convergence that has occurred on academic achievement tests has not been matched on IQ tests.

A related implication is that the source of the black-white difference lies in skills that are hardest to change. Being able to repeat many digits backward has no value in itself. It points to a valuable underlying mental ability, in the same way that percentage of fast-twitch muscle fibers points to an underlying athletic ability. If you were to practice reciting digits backward for a few days, you could increase your score somewhat, just as training can improve your running speed somewhat. But in neither case will you have improved the underlying ability.71 As far as anyone knows, g itself cannot be coached.

The third implication is that the “Flynn effect” will not close the black-white difference. I am referring here to the secular increase in IQ scores over time, brought to public attention by James Flynn.72 The Flynn effect has been taken as a reason for thinking that the black-white difference is temporary: if IQ scores are so malleable that they can rise steadily for several decades, why should not the black-white difference be malleable as well?73

But as the Flynn effect has been studied over the last decade, the evidence has grown, and now seems persuasive, that the increases in IQ scores do not represent significant increases in g.74 What the increases do represent—whether increases in specific mental skills or merely increased test sophistication—is still being debated. But if the black-white difference is concentrated in g and if the Flynn effect does not consist of increases in g, the Flynn effect will not do much to close the gap. A 2004 study by Dutch scholars tested this question directly. Examining five large databases, the authors concluded that “the nature of the Flynn effect is qualitatively different from the nature of black-white differences in the United States,” and that “the implications of the Flynn effect for black-white differences appear small.”75

These observations represent my reading of a body of evidence that is incomplete, and they will surely have to be modified as we learn more. But taking the story of the black-white IQ difference as a whole, I submit that we know two facts beyond much doubt. First, the conventional environmental explanation of the black-white difference is inadequate. Poverty, bad schools, and racism, which seem such obvious culprits, do not explain it. Insofar as the environment is the cause, it is not the sort of environment we know how to change, and we have tried every practical remedy that anyone has been able to think of. Second, regardless of one’s reading of the competing arguments, we are left with an IQ difference that has, at best, narrowed by only a few points over the last century. I can find nothing in the history of this difference, or in what we have learned about its causes over the last ten years, to suggest that any faster change is in our future.

II

The technical literature documenting sex differences and their biological basis grew surreptitiously during feminism’s heyday in the 1970’s and 1980’s. By the 1990’s, it had become so extensive that the bibliography in David Geary’s pioneering Male, Female (1998) ran to 53 pages.2 Currently, the best short account of the state of knowledge is Steven Pinker’s chapter on gender in The Blank Slate (2002).3

Rather than present a telegraphic list of all the differences that I think have been established, I will focus on the narrower question at the heart of the Summers controversy: as groups, do men and women differ innately in characteristics that produce achievement at the highest levels of accomplishment? I will limit my comments to the arts and sciences.

Since we live in an age when students are likely to hear more about Marie Curie than about Albert Einstein, it is worth beginning with a statement of historical fact: women have played a proportionally tiny part in the history of the arts and sciences.4 Even in the 20th century, women got only 2 percent of the Nobel Prizes in the sciences—a proportion constant for both halves of the century—and 10 percent of the prizes in literature. The Fields Medal, the most prestigious award in mathematics, has been given to 44 people since it originated in 1936. All have been men.

The historical reality of male dominance of the greatest achievements in science and the arts is not open to argument. The question is whether the social and legal exclusion of women is a sufficient explanation for this situation, or whether sex-specific characteristics are also at work.

Mathematics offers an entry point for thinking about the answer. Through high school, girls earn better grades in math than boys, but the boys usually do better on standardized tests.5 The difference in means is modest, but the male advantage increases as the focus shifts from means to extremes. In a large sample of mathematically gifted youths, for example, seven times as many males as females scored in the top percentile of the SAT mathematics test.6 We do not have good test data on the male-female ratio at the top one-hundredth or top one-thousandth of a percentile, where first-rate mathematicians are most likely to be found, but collateral evidence suggests that the male advantage there continues to increase, perhaps exponentially.7

Evolutionary biologists have some theories that feed into an explanation for the disparity. In primitive societies, men did the hunting, which often took them far from home. Males with the ability to recognize landscapes from different orientations and thereby find their way back had a survival advantage. Men who could process trajectories in three dimensions—the trajectory, say, of a spear thrown at an edible mammal—also had a survival advantage.8 Women did the gathering. Those who could distinguish among complex arrays of vegetation, remembering which were the poisonous plants and which the nourishing ones, also had a survival advantage. Thus the logic for explaining why men should have developed elevated three-dimensional visuospatial skills and women an elevated ability to remember objects and their relative locations—differences that show up in specialized tests today.9

Perhaps this is a just-so story.10 Why not instead attribute the results of these tests to socialization? Enter the neuroscientists. It has been known for years that, even after adjusting for body size, men have larger brains than women. Yet most psychometricians conclude that men and women have the same mean IQ (although debate on this issue is growing).11 One hypothesis for explaining this paradox is that three-dimensional processing absorbs the extra male capacity. In the last few years, magnetic-resonance imaging has refined the evidence for this hypothesis, revealing that parts of the brain’s parietal cortex associated with space perception are proportionally bigger in men than in women.12

What does space perception have to do with scores on math tests?13 Enter the psychometricians, who demonstrate that when visuospatial ability is taken into account, the sex difference in SAT math scores shrinks substantially.14

Why should the difference be so much greater at the extremes than at the mean? Part of the answer is that men consistently exhibit higher variance than women on all sorts of characteristics, including visuospatial abilities, meaning that there are proportionally more men than women at both ends of the bell curve.15 Another part of the answer is that someone with a high verbal IQ can easily master the basic algebra, geometry, and calculus that make up most of the items in an ordinary math test. Elevated visuospatial skills are most useful for the most difficult items.16 If males have an advantage in answering those comparatively few really hard items, the increasing disparity at the extremes becomes explicable.
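The arithmetic behind the variance point is easy to check. A minimal sketch with hypothetical parameters (a small mean shift and a male standard deviation about 10 percent larger; these numbers are mine, not estimates from the studies cited) shows how the male-female ratio grows as the cutoff moves further into the right tail:

```python
# Minimal sketch with hypothetical parameters (NOT estimates from the cited
# studies): a small mean shift plus ~10% higher male variance, evaluated in the tail.
from scipy.stats import norm

def tail_ratio(cutoff_z, mean_m=0.1, sd_m=1.1, mean_f=0.0, sd_f=1.0):
    """Ratio of the male to the female proportion scoring above a z-score cutoff."""
    p_m = norm.sf(cutoff_z, loc=mean_m, scale=sd_m)
    p_f = norm.sf(cutoff_z, loc=mean_f, scale=sd_f)
    return p_m / p_f

for z in (0.0, 1.0, 2.33, 3.09):   # the mean, top ~16%, top 1%, top 0.1%
    print(f"cutoff z={z:4.2f}: male/female ratio ~ {tail_ratio(z):.1f}")
```

With a larger mean difference, the tail ratios climb much more steeply, which is why modest distributional differences can produce lopsided ratios at the top percentile.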

Seen from one perspective, this pattern demonstrates what should be obvious: there is nothing inherent in being a woman that precludes high math ability. But there remains a distributional difference in male and female characteristics that leads to a larger number of men with high visuospatial skills. The difference has an evolutionary rationale, a physiological basis, and a direct correlation with math scores.



Now put all this alongside the historical data on accomplishment in the arts and sciences. In test scores, the male advantage is most pronounced in the most abstract items. Historically, too, it is most pronounced in the most abstract domains of accomplishment.17

In the humanities, the most abstract field is philosophy—and no woman has been a significant original thinker in any of the world’s great philosophical traditions. In the sciences, the most abstract field is mathematics, where the number of great women mathematicians is approximately two (Emmy Noether definitely, Sonya Kovalevskaya maybe). In the other hard sciences, the contributions of great women scientists have usually been empirical rather than theoretical, with leading cases in point being Henrietta Leavitt, Dorothy Hodgkin, Lise Meitner, Irène Joliot-Curie, and Marie Curie herself.

In the arts, literature is the least abstract and by far the most rooted in human interaction; visual art incorporates a greater admixture of the abstract; musical composition is the most abstract of all the arts, using neither words nor images. The role of women has varied accordingly. Women have been represented among great writers virtually from the beginning of literature, in East Asia and South Asia as well as in the West. Women have produced a smaller number of important visual artists, and none that is clearly in the first rank. No female composer is even close to the first rank. Social restrictions undoubtedly damped down women’s contributions in all of the arts, but the pattern of accomplishment that did break through is strikingly consistent with what we know about the respective strengths of male and female cognitive repertoires.

Women have their own cognitive advantages over men, many of them involving verbal fluency and interpersonal skills. If this were a comprehensive survey, detailing those advantages would take up as much space as I have devoted to a particular male advantage. But, sticking with my restricted topic, I will move to another aspect of male-female differences that bears on accomplishment at the highest levels of the arts and sciences: motherhood.



Regarding women, men, and babies, the technical literature is as unambiguous as everyday experience would lead one to suppose. As a rule, the experience of parenthood is more profoundly life-altering for women than for men. Nor is there anything unique about humans in this regard. Mammalian reproduction generally involves much higher levels of maternal than paternal investment in the raising of children.18 Among humans, extensive empirical study has demonstrated that women are more attracted to children than are men, respond to them more intensely on an emotional level, and get more and different kinds of satisfactions from nurturing them. Many of these behavioral differences have been linked with biochemical differences between men and women.19

Thus, for reasons embedded in the biochemistry and neurophysiology of being female, many women with the cognitive skills for achievement at the highest level also have something else they want to do in life: have a baby. In the arts and sciences, forty is the mean age at which peak accomplishment occurs, preceded by years of intense effort mastering the discipline in question.20 These are precisely the years during which most women must bear children if they are to bear them at all.

Among women who have become mothers, the possibilities for high-level accomplishment in the arts and sciences shrink because, for innate reasons, the distractions of parenthood are greater. To put it in a way that most readers with children will recognize, a father can go to work and forget about his children for the whole day. Hardly any mother can do this, no matter how good her day-care arrangement or full-time nanny may be. My point is not that women must choose between a career and children, but that accomplishment at the extremes commonly comes from a single-minded focus that leaves no room for anything but the task at hand.21 We should not be surprised or dismayed to find that motherhood reduces the proportion of highly talented young women who are willing to make that tradeoff.

Some numbers can be put to this observation through a study of nearly 2,000 men and women who were identified as extraordinarily talented in math at age thirteen and were followed up 20 years later.22 The women in the sample came of age in the 1970’s and early 1980’s, when women were actively socialized to resist gender stereotypes. In many ways, these talented women did resist. By their early thirties, both the men and women had become exceptional achievers, receiving advanced degrees in roughly equal proportions. Only about 15 percent of the women were full-time housewives. Among the women, those who did and those who did not have children were equally satisfied with their careers.

And yet. The women with careers were four-and-a-half times more likely than men to say they preferred to work fewer than 40 hours per week. The men placed greater importance on “being successful in my line of work” and “inventing or creating something that will have an impact,” while the women found greater value in “having strong friendships,” “living close to parents and relatives,” and “having a meaningful spiritual life.” As the authors concluded, “these men and women appear to have constructed satisfying and meaningful lives that took somewhat different forms.”23 The different forms, which directly influence the likelihood that men will dominate at the extreme levels of achievement, are consistent with a constellation of differences between men and women that have biological roots.

I have omitted perhaps the most obvious reason why men and women differ at the highest levels of accomplishment: men take more risks, are more competitive, and are more aggressive than women.24 The word “testosterone” may come to mind, and appropriately. Much technical literature documents the hormonal basis of personality differences that bear on sex differences in extreme and venturesome effort, and hence in extremes of accomplishment—and that bear as well on the male propensity to produce an overwhelming proportion of the world’s crime and approximately 100 percent of its wars. But this is just one more of the ways in which science is demonstrating that men and women are really and truly different, a fact so obvious that only intellectuals could ever have thought otherwise.

Note: What follows is a fully annotated version of the article that appears in the print edition of the September 2005 issue of COMMENTARY.


The Inequality Taboo

Charles Murray

When the late Richard Herrnstein and I published The Bell Curve eleven years ago, the furor over its discussion of ethnic differences in IQ was so intense that most people who have not read the book still think it was about race. Since then, I have deliberately not published anything about group differences in IQ, mostly to give the real topic of The Bell Curve—the role of intelligence in reshaping America’s class structure—a chance to surface.

The Lawrence Summers affair last January made me rethink my silence. The president of Harvard University offered a few mild, speculative, off-the-record remarks about innate differences between men and women in their aptitude for high-level science and mathematics, and was treated by Harvard’s faculty as if he were a crank. The typical news story portrayed the idea of innate sex differences as a renegade position that reputable scholars rejected.

It was depressingly familiar. In the autumn of 1994, I had watched with dismay as The Bell Curve’s scientifically unremarkable statements about black IQ were successfully labeled as racist pseudoscience. At the opening of 2005, I watched as some scientifically unremarkable statements about male-female differences were successfully labeled as sexist pseudoscience.

The Orwellian disinformation about innate group differences is not wholly the media’s fault. Many academics who are familiar with the state of knowledge are afraid to go on the record. Talking publicly can dry up research funding for senior professors and can cost assistant professors their jobs. But while the public’s misconception is understandable, it is also getting in the way of clear thinking about American social policy.

Good social policy can be based on premises that have nothing to do with scientific truth. The premise that is supposed to undergird all of our social policy, the founders’ assertion of an unalienable right to liberty, is not a falsifiable hypothesis. But specific policies based on premises that conflict with scientific truths about human beings tend not to work. Often they do harm.

One such premise is that the distribution of innate abilities and propensities is the same across different groups. The statistical tests for uncovering job discrimination assume that men are not innately different from women, blacks from whites, older people from younger people, homosexuals from heterosexuals, Latinos from Anglos, in ways that can legitimately affect employment decisions. Title IX of the Education Amendments of 1972 assumes that women are no different from men in their attraction to sports. Affirmative action in all its forms assumes there are no innate differences between any of the groups it seeks to help and everyone else. The assumption of no innate differences among groups suffuses American social policy. That assumption is wrong.

When the outcomes that these policies are supposed to produce fail to occur, with one group falling short, the fault for the discrepancy has been assigned to society. It continues to be assumed that better programs, better regulations, or the right court decisions can make the differences go away. That assumption is also wrong.

Hence this essay. Most of the following discussion describes reasons for believing that some group differences are intractable. I shift from “innate” to “intractable” to acknowledge how complex is the interaction of genes, their expression in behavior, and the environment. “Intractable” means that, whatever the precise partitioning of causation may be (we seldom know), policy interventions can only tweak the difference at the margins.

I will focus on two sorts of differences: between men and women and between blacks and whites. Here are three crucial points to keep in mind as we go along:

1. The differences I discuss involve means and distributions. In all cases, the variation within groups is greater than the variation between groups. On psychological and cognitive dimensions, some members of both sexes and all races fall everywhere along the range. One implication of this is that genius does not come in one color or sex, and neither does any other human ability. Another is that a few minutes of conversation with individuals you meet will tell you much more about them than their group membership does.

2. Covering both sex differences and race differences in a single, non-technical article, I had to leave out much from the print edition. This online version is fully annotated and includes extensive supplementary material.

3. The concepts of “inferiority” and “superiority” are inappropriate to group comparisons. On most specific human attributes, it is possible to specify a continuum running from “low” to “high,” but the results cannot be combined into a score running from “bad” to “good.” What is the best score on a continuum measuring aggressiveness? What is the relative importance of verbal skills versus, say, compassion? Of spatial skills versus industriousness? The aggregate excellences and shortcomings of human groups do not lend themselves to simple comparisons. That is why the members of just about every group can so easily conclude that they are God’s chosen people. All of us use the weighting system that favors our group’s strengths.1

Monday, September 12, 2005

Ken Gorrell:
Katrina exposed America's harmful culture of dependence
By KEN GORRELL
Guest Commentary

WHAT IS a culture of dependence? It is generation after generation of families existing on direct government financial support and sapped of ambition to take care of their own immediate needs or prepare themselves for a better future. For those trapped in this dysfunctional culture, the normal cost-benefit equations of life don't apply. The government safety net becomes a smothering blanket, insulating citizens from the consequences of their actions while reinforcing the poisonous idea that the problems they create for themselves should become someone else's problem to solve.

This idea does not lead to a good or easy life, but it does enable a self-perpetuating existence, unhealthy for both society in general and specifically for those who make themselves wards of the state.

What happens when local and state elected officials — those layers of government most responsible for responding to the needs of local populations — fail in their basic duties to protect life, liberty and property? Members of the culture of dependence are hardest hit.

They are least able, by training, temperament or resources, to act in their own best interests or to survive a breakdown in civil order. They are most in need of strong leadership and direct guidance. The self-synchronization of activities practiced daily by the broader population is, for them, an unlearned skill. An emergency situation is not the time to abandon them to the vagaries of fate.

I believe that Hurricane Katrina will be remembered as far more than a powerful storm. By exposing critical structural defects — and here I refer not to the failed levees, but to defects in society and our relationship to government — Katrina was, metaphorically, the perfect storm: The collision of the culture of dependence and ineffectual state and local government.

Effective leadership must be decisive, reasonable and believable. Mayor Ray Nagin of New Orleans was none of these. He was not decisive in the days leading up to Katrina's landfall. His expectations of citizen compliance with the voluntary evacuation order were not reasonable. Too many of his citizens did not believe or heed his order. Mayor Nagin failed to use the resources at his disposal to best effect or implement his city's own published disaster plan.

Rather than take charge of his city after the storm passed, he spent much of his time blaming others for his failures. While it would be too much to expect a Rudy Giuliani in every city, even half a Giuliani in charge in New Orleans would have saved lives.

An analysis of Governor Kathleen Blanco's actions before and after Katrina hit yields the same depressing conclusion: leadership was in short supply in Louisiana. Those most dependent on government suffered the most, as they always do and always will.

It would be preferable not to have to play the blame game while fellow Americans await rescue from the toxic stew of New Orleans or the devastation of the Gulf Coast. However, the usual suspects — the enablers of the culture of dependence — have already lined up to ensure their noxious propaganda becomes ground truth even before the rebuilding efforts begin.

Part and parcel of their viewpoint is the idea that a "federal case" should be made out of everything regardless of Constitutional strictures. Concentrating power in the federal government is their entering argument for every policy debate, so for them this is simply politics as usual despite the unusual circumstances. By blaming Washington for the multi-layered failures in dealing effectively with Katrina, they concentrate efforts on finding a federal solution to what is by law first and foremost a local and state problem.

The process of learning from our mistakes should be nonpartisan. Of course it will not be. The usual critics of President Bush will concentrate on federal government actions because it is in their partisan interest to do so. Harsh truths will be buried under harsher rhetoric. The President has been criticized for "overstepping" authority by requesting federal power to access local library records in the pursuit of suspected terrorists bent on inflicting Katrina-like death tolls in our cities. These same voices criticize him now for not stepping into local disaster planning and preparedness. A typical liberal dichotomy: Demand federal intrusion contrary to law but hamstring federal efforts to accomplish clearly delineated duties.

It is easier to blame Washington for the consequences we bring upon ourselves when we fail to live up to our responsibilities as individuals or when we elect mediocre state and local officials such as those recently thrust into the national spotlight from New Orleans and Baton Rouge. But in a free society, we deserve what we tolerate.

Accepting the culture of dependence as a constant burden should not be tolerated. Changing it to a culture of self-reliance is a worthy long-term goal. We can start the process by demanding more leadership from our local elected representatives and expecting more from our fellow citizens.

Ken Gorrell is an insurance agent in Northfield.

TCS: Tech Central Station - Imperium Americanum? Hardly.

Thursday, September 08, 2005

Blame Amid the Tragedy
Gov. Blanco and Mayor Nagin failed their constituents.

BY BOB WILLIAMS
Wednesday, September 7, 2005 12:01 a.m. EDT

As the devastation of Hurricane Katrina continues to shock and sadden the nation, the question on many lips is: Who is to blame for the inadequate response?

As a former state legislator who represented the legislative district most impacted by the eruption of Mount St. Helens in 1980, I can fully understand and empathize with the people and public officials over the loss of life and property.

Many in the media are turning their eyes toward the federal government, rather than considering the culpability of city and state officials. I am fully aware of the challenges of mounting a quick and effective emergency response to a major disaster. And there is definitely a time for accountability; but what isn't fair is to dump on the federal officials and avoid those most responsible--local and state officials who failed to do their job as the first responders. The plain fact is, lives were needlessly lost in New Orleans due to the failure of Louisiana's governor, Kathleen Blanco, and the city's mayor, Ray Nagin.

The primary responsibility for dealing with emergencies does not belong to the federal government. It belongs to local and state officials who are charged by law with the management of the crucial first response to disasters. First response should be carried out by local and state emergency personnel under the supervision of the state governor and his emergency operations center.

The actions and inactions of Gov. Blanco and Mayor Nagin are a national disgrace due to their failure to implement the previously established evacuation plans of the state and city. Gov. Blanco and Mayor Nagin cannot claim that they were surprised by the extent of the damage and the need to evacuate so many people. Detailed written plans were already in place to evacuate more than a million people. The plans projected that 300,000 people would need transportation in the event of a hurricane like Katrina. If the plans had been implemented, thousands of lives would likely have been saved.

In addition to the plans, local, state and federal officials held a simulated hurricane drill 13 months ago, in which widespread flooding supposedly trapped 300,000 people inside New Orleans. The exercise simulated the evacuation of more than a million residents. The problems identified in the simulation apparently were not solved.

A year ago, as Hurricane Ivan approached, New Orleans ordered an evacuation but did not use city or school buses to help people evacuate. As a result many of the poorest citizens were unable to evacuate. Fortunately, the hurricane changed course and did not hit New Orleans, but both Gov. Blanco and Mayor Nagin acknowledged the need for a better evacuation plan. Again, they did not take corrective actions. In 1998, during a threat by Hurricane Georges, 14,000 people were sent to the Superdome, and theft and vandalism were rampant due to inadequate security. Again, these problems were not corrected.

The New Orleans contingency plan is still, as of this writing, on the city's Web site, and states: "The safe evacuation of threatened populations is one of the principle [sic] reasons for developing a Comprehensive Emergency Management Plan." But the plan was apparently ignored.

Mayor Nagin was responsible for giving the order for mandatory evacuation and supervising the actual evacuation: His Office of Emergency Preparedness (not the federal government) must coordinate with the state on elements of evacuation and assist in directing the transportation of evacuees to staging areas. Mayor Nagin had to be encouraged by the governor to contact the National Hurricane Center before he finally, belatedly, issued the order for mandatory evacuation. And sadly, it apparently took a personal call from the president to urge the governor to order the mandatory evacuation.

The city's evacuation plan states: "The city of New Orleans will utilize all available resources to quickly and safely evacuate threatened areas." But even though the city has enough school and transit buses to evacuate 12,000 citizens per fleet run, the mayor did not use them. To compound the problem, the buses were not moved to high ground and were flooded. The plan also states that "special arrangements will be made to evacuate persons unable to transport themselves or who require specific lifesaving assistance. Additional personnel will be recruited to assist in evacuation procedures as needed." This was not done.

The evacuation plan warned that "if an evacuation order is issued without the mechanisms needed to disseminate the information to the affected persons, then we face the possibility of having large numbers of people either stranded and left to the mercy of a storm, or left in an area impacted by toxic materials." That is precisely what happened because of the mayor's failure.

Instead of evacuating the people, the mayor ordered the refugees to the Superdome and Convention Center without adequate security and with no provisions for food, water or sanitation. As a result, people died, and rapes were even committed, in these facilities. Mayor Nagin failed in his responsibility to provide public safety and to manage the orderly evacuation of the citizens of New Orleans. Now he wants to blame Gov. Blanco and the Federal Emergency Management Agency. In an emergency the first requirement is for the city's emergency center to be linked to the state emergency operations center. This was not done.

The federal government does not have the authority to intervene in a state emergency without the request of a governor. President Bush declared an emergency prior to Katrina hitting New Orleans, so the only action needed for federal assistance was for Gov. Blanco to request the specific type of assistance she needed. She failed to send a timely request for specific aid.

In addition, unlike the governors of New York, Oklahoma and California in past disasters, Gov. Blanco failed to take charge of the situation and ensure that the state emergency operation facility was in constant contact with Mayor Nagin and FEMA. It is likely that thousands of people died because of the failure of Gov. Blanco to implement the state plan, which mentions the possible need to evacuate up to one million people. The plan clearly gives the governor the authority for declaring an emergency, sending in state resources to the disaster area and requesting necessary federal assistance.

State legislators and governors nationwide need to update their contingency plans and the operation procedures for state emergency centers. Hurricane Katrina had been forecast for days, but that will not always be the case with a disaster (think of terrorist attacks). It must be made clear that the governor and locally elected officials are in charge of the "first response."

I am not attempting to excuse some of the delays in FEMA's response. Congress and the president need to take corrective action there, also. However, if citizens expect FEMA to be a first responder to terrorist attacks or other local emergencies (earthquakes, forest fires, volcanoes), they will be disappointed. The federal government's role is to offer aid upon request.

The Louisiana Legislature should conduct an immediate investigation into the failures of state and local officials to implement the written emergency plans. The tragedy is not over, and real leadership in state and local government is essential in the months to come. More importantly, the hurricane season is still upon us, and local and state officials must stay focused on the jobs for which they were elected--and not on the deadly game of passing the emergency buck.

Mr. Williams is president of the Evergreen Freedom Foundation, a free market public policy research organization in Olympia, Wash.

Wednesday, September 07, 2005

More good stuff keeps on comin'... I'm really looking forward to '06 and '08. :)
ENJOY
New Glory
By Jamie Glazov
FrontPageMagazine.com | September 7, 2005

Frontpage Interview’s guest today is Ralph Peters, a retired U.S. Army lieutenant colonel who served in infantry and intelligence units before becoming a Foreign Area Officer and a global strategic scout for the Pentagon. He has published three books on strategy and military affairs, as well as hundreds of columns for the New York Post, The Washington Post, The Wall Street Journal, Newsweek, and other publications. He is the author of the new book New Glory: Expanding America's Global Supremacy.



FP: Ralph Peters, welcome to Frontpage Interview.



Peters: I'm honored by the chance to reach your audience. Thanks.



FP: What inspired you to write New Glory?



Peters: New Glory is a book that literally took me a lifetime to write--in the sense that it contains decades of first-hand experience and observation in more than sixty countries. While I've written essays and columns over the years, I just sensed that the time was right to put it all together, to lay out as forthrightly and honestly as I could where I think the world is going--to offer a fresh vision of the world as it is and as it's going to be...no matter who might be offended by my views.



And, frankly, I was fed up with the countless "experts" all over the media who had never been anywhere or done anything, but who had an opinion on everything. You can't understand this complex world without going out to see it firsthand. The book's conclusions about where we've been and where we need to go strategically will surprise many readers, but they're based upon direct experience, not faculty-lounge chitchat. This book had been cooking inside me for a long time--and I'm glad I waited to write it. I needed all those years of getting dirty overseas to mature my thinking--and to escape Washington group-think.



FP: Tell us why the battle for Fallujah epitomized how we must fight -- and win -- the terror war.



Peters: Well, the First Battle of Fallujah, in the spring of 2004, was an example of how to get it as wrong as you possibly can. We bragged that we were going to "clean up Dodge." And the Marines went in, tough and capable as ever. Then, just when the Marines were on the cusp of victory, they were called off, thanks to a brilliant, insidious and unscrupulous disinformation campaign waged by al-Jazeera. I was in Iraq at the time, and the lies about American "atrocities" were stunning. But the lies worked and the Bush administration, to my shock and dismay, backed down.



Let's be honest: The terrorists won First Fallujah. And for six months thereafter Fallujah was the world capital of terror--a terrorist city-state. It was evident to all of us who had served that we'd have to go back into Fallujah, but the administration--which I support--made the further error of waiting until after the presidential election to avoid casualties or embarrassments during the campaign. Well, fortunately, in the Second Battle of Fallujah the Army and Marines realized they had to do it fast, before the media won again and the politicians caved in again. The military had been burned once and they were determined not to get burned again. And they did a stunning job--Second Fallujah was a model of how to take down a medium-size city. Great credit to the troops, mixed reviews for the politicos.



The bottom line is this: If you have to fight, fight to win, don't postpone what's necessary, and be prepared for the media's anti-American onslaught. Today, the media--with some noteworthy exceptions--are stooges of Islamist terrorists who, if they actually won, would butcher the journalists defending them.



We should never go to war lightly, but if we must fight, we have to give it everything we've got and damn the global criticism. There's a straightforward maxim that applies: In warfare, if you're unwilling to pay the butcher's bill up front, you will pay it with compound interest in the end.



FP: You note that terror of female sexuality underlies Islamic terror. You also make the point that a culture that hates and fears women is incompatible with modernity and democracy. Can you illuminate these phenomena for us, please?



Peters: No brainer on this one. Any society that refuses to exploit the talents and potential contributions of half of its population can't remotely hope to compete with the USA or the West in general. Worse, the virtual enslavement of women is as much a symptom of other ailments as it is a problem in and of itself. Where women are tormented by bitter old men in religious robes, there's never a meritocracy for males, either. And such societies are consistently racially and religiously bigoted. Take Pakistan: While the USA is operating at a phenomenal level of human efficiency in the 21st century, say 85%, Pakistan would likely measure in at 12 to 15%. They just keep falling comparatively farther and farther behind, they hate it, and, of course, they blame us. We're dealing with the abject and utter failure of the entire civilization of Middle Eastern Islam--not competitive in a single sphere (not even terror, since these days we're terrorizing the terrorists). It's historically unprecedented--and unspeakably dangerous.



As far as the inhuman, inhumane--and stupid--treatment of women in the Middle East, yep, Islam is scared of the girls. I wish Freud were alive--he'd really get a look at a civilization's discontents. If you're not terrified of female sexuality, you don't lock women up, insist on covering them up from scalp to toenail and stone them to death for their "sins." Every single Muslim culture in the greater Middle East is sexually infantile--to use the Freudian term. For all their macho posturing, the men are terrified of their feared inadequacy. It's like one big junior high school dance, with the boys on one side of the gym and the girls on the other--except the boys have Kalashnikovs.



Now, I realize this isn't the sort of thing most people consider as a strategic factor, but I am thoroughly convinced that the one foolproof test for whether or not a society has any hope of making it in the 21st century is its treatment of women. Where women are partners, societies take off--as ours has done for this reason and others. Where women are property, there's simply no hope of a competitive performance.



In the collective culture of the Middle East, we're dealing with a deeply neurotic, if not outright psychotic civilization. I wish I could be more positive. But the average Middle Eastern male just has snakes in his head. And, by the way, the place isn't much fun, either. A mega-mall or two does not make a civilization.



FP: You make the observation that “Islam produced a strain of violent homoeroticism that reaches into al-Qaeda and beyond.” Please expand on this reality a bit for us.



Peters: Another issue "sober" Washington wouldn't consider as a strategic concern, but this ties in with the fear of and disdain for women. If you read the notes and papers they left behind, it's evident that the hijackers of 9/11 were a boys' club with strong homoerotic tendencies. Read Mohammed Atta's lunatic note describing how women must be kept away from his funeral to avoid polluting his grave. Does that sound like a guy with a happy dating history? Of course, sex between men and boys is a long tradition from North Africa through Afghanistan (fear of women always leads to an excessive fixation on female virginity--so she won't know her husband's inadequate--as well as homoerotic undercurrents).



They don't talk about it, of course--it's supposed to be anathema--but very few Middle Eastern mothers would trust their good-looking young sons around many adult males. This has deep roots, right back to the celebrations of the Emperor Babur's fixation on a pretty boy in the Baburnama. And the related dread of the female as literal femme fatale, as vixen, as betrayer, appears in much of the major literature--especially the "Thousand and One Arabian Nights," which, in its unabridged, unexpurgated version, is one long chronicle of supposed female wantonness and insatiability (the men are always innocent victims of Eve).



Pretty hard for the president to work this into a State of the Union message, but I'm convinced that sexual dysfunction is at the core of the Middle East's sickness--and it's certainly sick. Nothing about our civilization so threatens the males of the Middle East as the North American career woman making her own money and her own decisions. We don't think of it this way, but from one perspective the best symbols of the War on Terror would be the Islamic veil versus the two-piece woman's business suit.



There is no abyss more unbridgeable between our civilizations than that created by our respect for women and the Islamic disdain for the female. There are many aspects of our magnificent civilization that threaten traditional, backward societies, but nothing worries them so much as the independence of the Western woman--not that they approve of freedom of any kind.



FP: You write that the developments in Iran pose a great danger to the Islamists and great hope for the West. Tell us what the possibilities are. Perhaps a domino theory? (i.e., if the Iranians overthrow their religious despots, the rest of the Islamic world might do the same?)



Peters: No matter what the outcome in Iraq, the Middle East isn't going to change overnight. This is a very long process. But if you want an irrefutable indicator of how important Iraq's future is, just consider how many resources our enemies are willing to spend to stop the emergence of an even partially functional rule-of-law democracy in Iraq. The terrorists are throwing in everything they've got. Surely, that should tell us something.



Despite all the yelling and jumping up and down in the "Arab Street" (where someone needs to pick up the litter, by the way), the truth is that Arabs, especially, are afraid they can't do it, that they can't build a modern, let alone a postmodern, market democracy. The Arabs desperately need a win--they've been losing on every front for so long. If Iraq is even a deeply flawed success, it will be success enough to spark change across the region. But we must not expect overnight results. This is all very hard. We're not just trying to change a country--we're asking a civilization to change, to revive itself.



Iraq matters immensely. But no matter the outcome, it will be a long time before we see the rewards. It's an agonizingly slow process--which is tough for our society, which expects quick results.



And if Iraq should fail, despite our best efforts, it won't really be an American (or Anglo-American) failure. The consequences will be severe, but we'll work it off at the strategic gym. A failed Iraq will be another tragic Arab failure.



This is our best shot, but it's their last chance.



FP: You observe that Islamist terror sprouts from the failure of Arab and Islamic civilization, that they are humiliated, envious and seek to destroy the reminder of everything we have done right. Please illustrate this picture for us.



Peters: Back to our disdain for new strategic factors: Certainly economic statistics and demographics, hydrology and terms of trade all matter. But the number one deadly and galvanizing strategic impulse in the world today is jealousy. And it's jealousy of the West in general, but specifically of the United States. Jealousy is a natural, deep human emotion, which afflicts us all in our personal lives--to some degree. But when it afflicts an entire civilization, it's tragic. The failed civilization of the Middle East--where not one of the treasured local values is functional in the globalized world--is morbidly jealous of us. They've succumbed to a culture of--and addiction to--blame. Instead of facing up to the need to change and rolling up their sleeves, they want the world to conform to their terms. Ain't going to happen, Mustapha.



I've been out there. And while anti-Americanism is really much exaggerated, where it does exist among the terrorists and their supporters, jealousy is a prime motivating factor. You've heard it before, but it's all too true: They do hate us for our success.



The populations of the Middle East blew it. They've failed. Thirteen hundred years of effort came down to an entire civilization that can't design and build an automobile. And thanks to the wonders of the media age, it's daily rubbed in their faces how badly they've failed.



Oil wealth? A tragedy for the Arabs, since it gave the wealth to the most backward. The Middle East still does not have a single world-class university outside of Israel. Not one. The oil money has been thrown away--it's been a drug, not a tool.



The terrorists don't want progress. They want revenge. At the risk of punning on the title of the book, they don't want new glory--they want their old (largely imagined) glory back. They want to turn back the clock to an imagined world. The terrorists are the deadly siblings of Westerners who believe in Atlantis.



FP: It is clear you are not very fond of France and Germany. How come?



Peters: Actually, I love France and Germany. They're two of my favorite museums. And what's not to like about two grotesquely hypocritical societies who are, between them, responsible for the worst savagery in and beyond Europe over the past several centuries?



Anybody who really wants to see how I take "Old Europe" apart will just have to read the book. Too much to say to get it down here. But the next time the continent that perfected genocide and ethnic cleansing plays the moral superiority card, let's remind them that no German soldier ever liberated anybody--and the most notable achievement of the French military in the past century and a half has been the slaughter of unarmed black Africans.



And just watch their brutal treatment of their Islamic residents. Old Europe--France and Germany--is just the Middle East-lite.



FP: Explain why you believe there are great benefits to America reaching out to India.



Peters: Human capital. Trade. Healthy competition. Strategic position. Common interests. Brilliant, hard-working people. Great food. That enough?



FP: Are there grounds to have hope about Africa?



Peters: Yes. There are plentiful reasons to be hopeful about parts--parts--of Africa. But much of the continent is every bit as disastrous as the popular image has it. My complaint is that we treat that vast, various continent as one big, failed commune. Well, Congo or Sierra Leone certainly aren't inspiring...but in the course of several, recent, lengthy trips to Africa, I was just astonished at the vigor, vision and strategic potential of South Africa. South Africa is well on the way to becoming the first true sub-Saharan great power--and it's another natural ally for us. Oh, the old revolutionary, slogan-spouting generation and their protégés have to die off--and they will. But, in the long-term, I expect great things from South Africa, that they'll control (economically and culturally) southern Africa at least as far north as the Rovuma River. The one qualifier is this: Their next presidential election will be the turning point, either way. If they elect a demagogue, South Africa could still turn into another failing African state. But if they elect a technocrat, get out of the way, because the South Africans are coming.



I explain much of this far better in the book than I can here. Suffice to say that, for all the continent's horrid misery, there are islands of genuine hope. And, of course, there's plenty of wreckage...and AIDS, civil wars, corruption (the greatest bane of all for the developing world). I'm not a Pollyanna. But over the years I've gotten pretty good at spotting both potential crises and potential successes--and South Africa, for all its problems, is a land of stunning opportunities with neo-imperial potential.



FP: Overall, as a former military man, tell us what the United States has to stop doing, and has to start doing, to win this terror war.



Peters: Knock off the bluster and fight like we mean it. To a disheartening degree, the War on Terror has been a war of (ineptly chosen) words. Look, this is a death struggle, a strategic knife fight to the bone. I wish our civilian leaders would stop beating their chests and saying that we're going to get this terrorist or that one--because when we fail to make good on our promises, the terrorists win by default. More deeds, fewer words.



Above all, we need to think clearly, to cast off the last century's campus-born excuses for the Islamic world of the Middle East. We need to be honest about the threat, in all its dimensions. "Public diplomacy" isn't going to convert the terrorists who were recruited and developed while we looked away from the problem for thirty years. In the end, only deeds convince. And not just military deeds, of course, although those remain indispensable.



Most Americans still do not realize the intensity or the dimensions of the struggle with Islamist terror. Despite 9-11, they just don't have a sense that we're at war. And I'm afraid I have to fault the Bush administration on that count: Good Lord, we're at war with the most implacable enemies we've ever faced (men who regard death as a promotion), and what was our president's priority this year? The reform of Social Security. While I continue to support the administration's overall intent and efforts in Iraq and around the world, I believe the president has failed us badly by not driving home to the people that we're at war.



The Bush administration has done great and necessary things--but all too often they've done those things badly. And only the valor and blood of our troops has redeemed the situation, time after time, from Fallujah to the struggles of the future.



FP: Ralph Peters, thank you for joining us today.



Peters: My pleasure, and my thanks. And allow me to say a special thanks to all your readers in uniform, those troops defending the values of our civilization and human decency in distant, discouraging places. Freedom truly isn't free.


Tuesday, September 06, 2005

LOCAL, STATE FAILURES DOOMED NEW ORLEANS (Lonsberry)
boblonsberry.com | 09/06/05 | Bob Lonsberry


One of the astounding aspects of the Hurricane Katrina tragedy has been the profound incompetence of Louisiana’s politicians. Never has it been more clear that winning an election and being a leader are two completely different things.


From the run-up to the storm to the events since, Louisiana’s governor and New Orleans’ mayor have been useless, far more concerned with taking political advantage and misapplying blame than with saving people’s lives and doing their duty.


And, with incredible gall, this pair has led the ungrateful and dishonest charge against the federal government and President Bush. They, along with various race-baiters and a great many hateful celebrities, have turned this catastrophe to their political advantage, recasting it as an event of racist neglect instead of what it is – the largest relief effort in the history of the United States.


Those with blood on their hands dare to indict the rest of us.


And, yes, they do have blood on their hands.


The woeful mismanagement by the city of New Orleans of the evacuation, the shelters and the relief effort undeniably cost lives. What remains to be seen is how many lives.


Let’s take the evacuation. Though information has been shifting and hard to come by, it seems clear that while the city did run free transit buses to evacuate residents before the storm, they were far too few, and the much larger school bus fleet was left idle. In fact, that fleet was not even moved to higher ground, so it not only carried no one to safety but was itself lost to the rising flood waters.


Literally thousands of people were left to face the storm and its aftermath because the city didn’t send its largest transportation tool – the school buses – to get them.


And the two city-established shelters – the convention center and the Superdome – quickly turned to misery and violence because the city failed to supply and supervise them in even the most rudimentary way. The city told people to go to the shelters, but the city did nothing to make such a move safe or healthy.


It did not send in emergency supplies of water, food, blankets, cots, medicines or anything else that would be considered essential. It didn't even send in port-a-potties. And it left the shelters understaffed or completely unstaffed. There was no organization, no security, no city officials assigned to be in charge.


There was no provision for the people in the shelters to eat, drink, sleep, be safe or go to the bathroom. And yet the city sent tens of thousands of people to them, directly causing misery and death.


In the wake of the storm, the city’s public safety response was confused and ineffective. Blaming its failures on communications gaps, the city astoundingly had no electrical back-up for its police and fire radio system. And it apparently had completely ineffective commanders.


Police officers were able to move around in the city, but did so pointlessly, uselessly and sometimes criminally. As looters gained the upper hand, some New Orleans police joined them.


And some New Orleans police simply ran off.


Nearly one in every seven members of the New Orleans police department abandoned their posts and abandoned their city. Some even stole police vehicles to make their escape, leaving the people they were sworn to defend to suffering and despair.


The cowardice and dishonor of this is all the more obvious when contrasted with the actions of emergency workers on September 11.


The New York firefighters ran in and the New Orleans police ran out.


And the New Orleans mayor is no Rudy Giuliani.


Strength, courage and leadership were absent in New Orleans. Instead of inspiring by their example, the mayor and governor were visibly shaken and afraid. They acted like frightened children. Stupid frightened children.


The governor did little before the storm and little after the storm. She failed to use her National Guard effectively or expeditiously and she refused to let anyone else use it either. She complained that the federal government was doing nothing while refusing to authorize the federal government to operate freely in her state.


Federal officials can only work in a city with the permission and cooperation of state and local officials. That has largely been withheld. Instead, Louisiana’s politicians have made excuses for themselves and made repeated and dishonest attacks against the federal government.


And beyond Louisiana, those whose bread is buttered by racial division and distrust sowed the hateful lie of racial prejudice in the pace of the response. Completely ignoring the enormous difficulty of bringing massive amounts of equipment, supplies and aid workers into the devastated region, they told the people of New Orleans and the gullible of the country that this massive relief effort was something to be hated, not appreciated.


In the words of one idiot, “George Bush doesn’t like black people.”


And in the words of Jesse Jackson, standing with a bunch of refugees to whom he brought no water or relief: “This is the hold of a slave ship.”


And that is a lie.


What happened is that one of the largest storms ever to hit the American mainland struck the Gulf Coast. It hit a city which, incredibly, is built below sea level. It hit a city and state whose leadership fundamentally failed to prepare or effectively evacuate. It hit a city and state whose leaders then provided no effective relief for the displaced, and whose incompetence was demonstrated by the collapse of basic institutions like law enforcement.


What also happened is the United States and its people moved with compassion and speed to provide relief. Hearts, wallets and homes were opened. The largest relief effort in the nation’s history was launched and it saved thousands and thousands of people. It is now feeding, housing and clothing those people, and it will eventually reclaim and rebuild their city.


That’s what happened.


But you won’t hear that on the news.


All you’ll hear on the news is the complaint and ingratitude of people who have blood on their hands.


Monday, September 05, 2005

New Orleans, the tragedy
September 1st, 2005


As Hurricane Katrina headed toward New Orleans, sticklers for the actual meaning of words told us that it would be wrong to label the impending disaster a tragedy. That term, with its origins in drama, refers to horrible consequences produced out of the flaws in human nature. A hurricane is a force of nature, and cannot by definition be “tragic” no matter how horrible the outcome.

The drama unfolding in New Orleans, however, is now officially a tragedy. Katrina wrought destruction, but the consequences that most horrify us today are the result of human folly.

For at least a decade, critics have warned that the levee system protecting New Orleans needed serious upgrading. Dire predictions of the complete destruction of the city by either a hurricane or by a historic Mississippi River flood have circulated for many years, but were insufficient to move authorities to expensive action. Holland, after the catastrophic North Sea flood of 1953, reinforced its dikes with more than the thumbs of young boys. New Orleans ignored the lessons.

The looting and apparent near-anarchy in the flooded streets have nothing to do with Mother Nature, and everything to do with human nature, unconstrained by the thin veneer of civilization.

The incomplete evacuation of citizens and their warehousing in the Superdome struck me at the time as a poor choice. Why were there not sound trucks cruising the streets warning those detached from the media to run for their lives? Why weren’t there places designated where folks heading out of town could fill up their cars with refugees lacking transportation? Why wasn’t every bus, truck, and railroad freight car pressed into service to haul people away?

Blogger Ultima Thule captured my own impression of the political authorities in Louisiana when she wrote:

Louisiana Governor Blanco unfortunately resembles her name -- Blanco -- she looks like a deer caught in the headlines -- oops -- I was going to type headlights -- but that was an apt slip of the fingers.

Nobody wants to kick New Orleans and Louisiana when they are so devastated. But we will be deluding ourselves and laying the foundations for future suffering, if we don’t examine the human failures which have turned a natural disaster into a tragedy.

Few if any cities have contributed more to American culture than New Orleans. Jazz, our distinctive national contribution to music, has its origins in New Orleans. So too in the realm of cuisine, New Orleans is virtually without peer. Many years ago, a wealthy and cultivated Japanese entrepreneur observed to me that New Orleans was the only city in America he had found in which rich and poor people alike understood food. He mentioned Provence in France and Tuscany in Italy as comparisons. You could walk into unimpressive restaurants in less prosperous neighborhoods in New Orleans, patronized by ordinary citizens, not free-spending tourists, and expect a meal made from fresh ingredients, flavored with interesting herbs and spices, and served to patrons who would accept no less.

But the many virtues of New Orleans are offset in part by serious flaws. The flowering of the human spirit in the realm of cultural creativity is counterbalanced by a tradition of corruption, public incompetence, and moral decay. It is no secret that New Orleans and the Great State of Louisiana have a sorry track record when it comes to political corruption. And corruption tolerated in one sphere tends to metastasize and infect other aspects of life. They don’t call it “The Big Easy” because it is simple to start a business, and easy to run one there.

Many years ago, an oilman in Houston pointed out to me that there was no inherent reason Houston should have emerged as the world capital of the petroleum business. New Orleans was already a major city with centuries of history, proximity to oil deposits, and huge transportation advantages when the Houston Ship Channel was dredged, making the then-small city of Houston into a major port. The discovery of the Humble oil field certainly helped Houston rise as an oil center, but the industry could just as easily have centered itself in New Orleans.

When I pressed my oilman informant for the reason Houston prevailed, he gave me a look of pity for my naiveté, and said, “Corruption.” Anyone making a fortune in New Orleans based on access to any kind of public resources would find himself coping with all sorts of hands extended for palm-greasing. Permits, taxes, fees, and outright bribes would be a never-ending nightmare. Houston, in contrast, was interested in growth, jobs, prosperity, and extending a welcoming hand to newcomers. New Orleans might be a great place to spend a pleasant weekend, but Houston is the place to build a business.

Today, metropolitan Houston houses roughly 4 times the population of pre-Katrina metropolitan New Orleans, despite the considerable advantage New Orleans has of capturing the shipping traffic of the Mississippi basin.

It is far from a coincidence that Houston is now absorbing refugees from New Orleans, and preparing to enroll the children of New Orleans in its own school system. Houston is a city built on the can-do spirit (space exploration, oil, medicine are shining examples of the human will to knowledge and improvement, and all have been immeasurably advanced by Houstonians). Houston officials have capably planned for their own possible severe hurricanes, and that disaster planning is now selflessly put at the disposal of their neighbors to the east.

Let us all do everything we can to ameliorate the horrendous suffering of people all over the Gulf Coast, not just in New Orleans. But we must not fail to learn necessary lessons. Hurricanes are predictable and inevitable. Their consequences can be minimized by honest and capable political leadership. It appears that New Orleans could have done much better. We would honor the suffering and deaths by insisting that any rebuilding be premised on a solid moral and political foundation.

Thomas Lifson is the editor and publisher of The American Thinker.
