Thursday, September 29, 2005

Remarks to the Commonwealth Club

by Michael Crichton
San Francisco
September 15, 2003



I have been asked to talk about what I consider the most important challenge facing mankind, and I have a fundamental answer. The greatest challenge facing mankind is the challenge of distinguishing reality from fantasy, truth from propaganda. Perceiving the truth has always been a challenge to mankind, but in the information age (or as I think of it, the disinformation age) it takes on a special urgency and importance.

We must daily decide whether the threats we face are real, whether the solutions we are offered will do any good, whether the problems we're told exist are in fact real problems, or non-problems. Every one of us has a sense of the world, and we all know that this sense is in part given to us by what other people and society tell us; in part generated by our emotional state, which we project outward; and in part by our genuine perceptions of reality. In short, our struggle to determine what is true is the struggle to decide which of our perceptions are genuine, and which are false because they are handed down, or sold to us, or generated by our own hopes and fears.

As an example of this challenge, I want to talk today about environmentalism. And in order not to be misunderstood, I want to make it perfectly clear that I believe it is incumbent on us to conduct our lives in a way that takes into account all the consequences of our actions, including the consequences to other people, and the consequences to the environment. I believe it is important to act in ways that are sympathetic to the environment, and I believe this will always be a need, carrying into the future. I believe the world has genuine problems and I believe it can and should be improved. But I also think that deciding what constitutes responsible action is immensely difficult, and the consequences of our actions are often difficult to know in advance. I think our past record of environmental action is discouraging, to put it mildly, because even our best intended efforts often go awry. But I think we do not recognize our past failures, and face them squarely. And I think I know why.

I studied anthropology in college, and one of the things I learned was that certain human social structures always reappear. They can't be eliminated from society. One of those structures is religion. Today it is said we live in a secular society in which many people---the best people, the most enlightened people---do not believe in any religion. But I think that you cannot eliminate religion from the psyche of mankind. If you suppress it in one form, it merely re-emerges in another form. You can not believe in God, but you still have to believe in something that gives meaning to your life, and shapes your sense of the world. Such a belief is religious.

Today, one of the most powerful religions in the Western World is environmentalism. Environmentalism seems to be the religion of choice for urban atheists. Why do I say it's a religion? Well, just look at the beliefs. If you look carefully, you see that environmentalism is in fact a perfect 21st century remapping of traditional Judeo-Christian beliefs and myths.

There's an initial Eden, a paradise, a state of grace and unity with nature; there's a fall from grace into a state of pollution as a result of eating from the tree of knowledge; and as a result of our actions there is a judgment day coming for us all. We are all energy sinners, doomed to die, unless we seek salvation, which is now called sustainability. Sustainability is salvation in the church of the environment. Just as organic food is its communion, that pesticide-free wafer that the right people with the right beliefs imbibe.

Eden, the fall of man, the loss of grace, the coming doomsday---these are deeply held mythic structures. They are profoundly conservative beliefs. They may even be hard-wired in the brain, for all I know. I certainly don't want to talk anybody out of them, as I don't want to talk anybody out of a belief that Jesus Christ is the son of God who rose from the dead. But the reason I don't want to talk anybody out of these beliefs is that I know that I can't talk anybody out of them. These are not facts that can be argued. These are issues of faith.

And so it is, sadly, with environmentalism. Increasingly it seems facts aren't necessary, because the tenets of environmentalism are all about belief. It's about whether you are going to be a sinner, or saved. Whether you are going to be one of the people on the side of salvation, or on the side of doom. Whether you are going to be one of us, or one of them.

Am I exaggerating to make a point? I am afraid not. Because we know a lot more about the world than we did forty or fifty years ago. And what we know now is not so supportive of certain core environmental myths, yet the myths do not die. Let's examine some of those beliefs.

There is no Eden. There never was. What was that Eden of the wonderful mythic past? Is it the time when infant mortality was 80%, when four children in five died of disease before the age of five? When one woman in six died in childbirth? When the average lifespan was 40, as it was in America a century ago? When plagues swept across the planet, killing millions at a stroke? Was it when millions starved to death? Is that when it was Eden?

And what about indigenous peoples, living in a state of harmony with the Eden-like environment? Well, they never did. On this continent, the newly arrived people who crossed the land bridge almost immediately set about wiping out hundreds of species of large animals, and they did this several thousand years before the white man showed up, to accelerate the process. And what was the condition of life? Loving, peaceful, harmonious? Hardly: the early peoples of the New World lived in a state of constant warfare. Generations of hatred, tribal hatreds, constant battles. The warlike tribes of this continent are famous: the Comanche, Sioux, Apache, Mohawk, Aztecs, Toltecs, Incas. Some of them practiced infanticide, and human sacrifice. And those tribes that were not fiercely warlike were exterminated, or learned to build their villages high in the cliffs to attain some measure of safety.

How about the human condition in the rest of the world? The Maori of New Zealand committed massacres regularly. The Dyaks of Borneo were headhunters. The Polynesians, living in an environment as close to paradise as one can imagine, fought constantly, and created a society so hideously restrictive that you could lose your life if you stepped in the footprint of a chief. It was the Polynesians who gave us the very concept of taboo, as well as the word itself. The noble savage is a fantasy, and it was never true. That anyone still believes it, 200 years after Rousseau, shows the tenacity of religious myths, their ability to hang on in the face of centuries of factual contradiction.

There was even an academic movement, during the latter 20th century, that claimed that cannibalism was a white man's invention to demonize the indigenous peoples. (Only academics could fight such a battle.) It was some thirty years before professors finally agreed that yes, cannibalism does indeed occur among human beings. Meanwhile, all during this time New Guinea highlanders continued to eat the brains of their enemies until they were finally made to understand that they risked kuru, a fatal neurological disease, when they did so.

More recently still the gentle Tasaday of the Philippines turned out to be a publicity stunt, a nonexistent tribe. And African pygmies have one of the highest murder rates on the planet.

In short, the romantic view of the natural world as a blissful Eden is only held by people who have no actual experience of nature. People who live in nature are not romantic about it at all. They may hold spiritual beliefs about the world around them, they may have a sense of the unity of nature or the aliveness of all things, but they still kill the animals and uproot the plants in order to eat, to live. If they don't, they will die.

And if you, even now, put yourself in nature even for a matter of days, you will quickly be disabused of all your romantic fantasies. Take a trek through the jungles of Borneo, and in short order you will have festering sores on your skin, you'll have bugs all over your body, biting in your hair, crawling up your nose and into your ears, you'll have infections and sickness and if you're not with somebody who knows what they're doing, you'll quickly starve to death. But chances are that even in the jungles of Borneo you won't experience nature so directly, because you will have covered your entire body with DEET and you will be doing everything you can to keep those bugs off you.

The truth is, almost nobody wants to experience real nature. What people want is to spend a week or two in a cabin in the woods, with screens on the windows. They want a simplified life for a while, without all their stuff. Or a nice river rafting trip for a few days, with somebody else doing the cooking. Nobody wants to go back to nature in any real way, and nobody does. It's all talk, and as the years go on, and the world population grows increasingly urban, it's uninformed talk. Farmers know what they're talking about. City people don't. It's all fantasy.

One way to measure the prevalence of fantasy is to note the number of people who die because they haven't the least knowledge of how nature really is. They stand beside wild animals, like buffalo, for a picture and get trampled to death; they climb a mountain in dicey weather without proper gear, and freeze to death. They drown in the surf on holiday because they can't conceive the real power of what we blithely call "the force of nature." They have seen the ocean. But they haven't been in it.

The television generation expects nature to behave the way they want it to. They think all life experiences can be TiVo-ed. The notion that the natural world obeys its own rules and doesn't give a damn about your expectations comes as a massive shock. Well-to-do, educated people in an urban environment are able to fashion their daily lives as they wish. They buy clothes that suit their taste, and decorate their apartments to their liking. Within limits, they can contrive a daily urban world that pleases them.

But the natural world is not so malleable. On the contrary, it will demand that you adapt to it, and if you don't, you die. It is a harsh, powerful, and unforgiving world that most urban Westerners have never experienced.

Many years ago I was trekking in the Karakorum mountains of northern Pakistan, when my group came to a river that we had to cross. It was a glacial river, freezing cold, and it was running very fast, but it wasn't deep---maybe three feet at most. My guide set out ropes for people to hold as they crossed the river, and everybody proceeded, one at a time, with extreme care. I asked the guide what was the big deal about crossing a three-foot river. He said, well, supposing you fell and suffered a compound fracture. We were now four days' trek from the last big town, where there was a radio. Even if the guide went back double time to get help, it'd still be at least three days before he could return with a helicopter. If a helicopter were available at all. And in three days, I'd probably be dead from my injuries. So that was why everybody was crossing carefully. Because out in nature a little slip could be deadly.

But let's return to religion. If Eden is a fantasy that never existed, and mankind wasn't ever noble and kind and loving, if we didn't fall from grace, then what about the rest of the religious tenets? What about salvation, sustainability, and judgment day? What about the coming environmental doom from fossil fuels and global warming, if we all don't get down on our knees and conserve every day?

Well, it's interesting. You may have noticed that something has been left off the doomsday list, lately. Although the preachers of environmentalism have been yelling about population for fifty years, over the last decade world population seems to be taking an unexpected turn. Fertility rates are falling almost everywhere. As a result, over the course of my lifetime the thoughtful predictions for total world population have gone from a high of 20 billion, to 15 billion, to 11 billion (which was the UN estimate around 1990) to now 9 billion, and soon, perhaps less. There are some who think that world population will peak in 2050 and then start to decline. There are some who predict we will have fewer people in 2100 than we do today. Is this a reason to rejoice, to say hallelujah? Certainly not. Without a pause, we now hear about the coming crisis of world economy from a shrinking population. We hear about the impending crisis of an aging population. Nobody anywhere will say that the core fears expressed for most of my life have turned out not to be true. As we have moved into the future, these doomsday visions vanished, like a mirage in the desert. They were never there---though they still appear, in the future. As mirages do.

Okay, so, the preachers made a mistake. They got one prediction wrong; they're human. So what. Unfortunately, it's not just one prediction. It's a whole slew of them. We are running out of oil. We are running out of all natural resources. Paul Ehrlich: 60 million Americans will die of starvation in the 1980s. Forty thousand species become extinct every year. Half of all species on the planet will be extinct by 2000. And on and on and on.

With so many past failures, you might think that environmental predictions would become more cautious. But not if it's a religion. Remember, the nut on the sidewalk carrying the placard that predicts the end of the world doesn't quit when the world doesn't end on the day he expects. He just changes his placard, sets a new doomsday date, and goes back to walking the streets. One of the defining features of religion is that your beliefs are not troubled by facts, because they have nothing to do with facts.

So I can tell you some facts. I know you haven't read any of what I am about to tell you in the newspaper, because newspapers literally don't report them. I can tell you that DDT is not a carcinogen and did not cause birds to die and should never have been banned. I can tell you that the people who banned it knew that it wasn't carcinogenic and banned it anyway. I can tell you that the DDT ban has caused the deaths of tens of millions of poor people, mostly children, whose deaths are directly attributable to a callous, technologically advanced western society that promoted the new cause of environmentalism by pushing a fantasy about a pesticide, and thus irrevocably harmed the third world. Banning DDT is one of the most disgraceful episodes in the twentieth century history of America. We knew better, and we did it anyway, and we let people around the world die and didn't give a damn.

I can tell you that second hand smoke is not a health hazard to anyone and never was, and the EPA has always known it. I can tell you that the evidence for global warming is far weaker than its proponents would ever admit. I can tell you that the percentage of US land area taken up by urbanization, including cities and roads, is 5%. I can tell you that the Sahara desert is shrinking, and the total ice of Antarctica is increasing. I can tell you that a blue-ribbon panel in Science magazine concluded that there is no known technology that will enable us to halt the rise of carbon dioxide in the 21st century. Not wind, not solar, not even nuclear. The panel concluded that a totally new technology, like nuclear fusion, was necessary, otherwise nothing could be done and in the meantime all efforts would be a waste of time. They said that when the UN IPCC reports stated alternative technologies existed that could control greenhouse gases, the UN was wrong.

I can, with a lot of time, give you the factual basis for these views, and I can cite the appropriate journal articles, not in whacko magazines but in the most prestigious science journals, such as Science and Nature. But such references probably won't impact more than a handful of you, because the beliefs of a religion are not dependent on facts, but rather are matters of faith. Unshakeable belief.

Most of us have had some experience interacting with religious fundamentalists, and we understand that one of the problems with fundamentalists is that they have no perspective on themselves. They never recognize that their way of thinking is just one of many other possible ways of thinking, which may be equally useful or good. On the contrary, they believe their way is the right way, everyone else is wrong; they are in the business of salvation, and they want to help you to see things the right way. They want to help you be saved. They are totally rigid and totally uninterested in opposing points of view. In our modern complex world, fundamentalism is dangerous because of its rigidity and its imperviousness to other ideas.

I want to argue that it is now time for us to make a major shift in our thinking about the environment, similar to the shift that occurred around the first Earth Day in 1970, when this awareness was first heightened. But this time around, we need to get environmentalism out of the sphere of religion. We need to stop the mythic fantasies, and we need to stop the doomsday predictions. We need to start doing hard science instead.

There are two reasons why I think we all need to get rid of the religion of environmentalism.

First, we need an environmental movement, and such a movement is not very effective if it is conducted as a religion. We know from history that religions tend to kill people, and environmentalism has already killed somewhere between 10 and 30 million people since the 1970s. It's not a good record. Environmentalism needs to be absolutely based in objective and verifiable science, it needs to be rational, and it needs to be flexible. And it needs to be apolitical. To mix environmental concerns with the frantic fantasies that people have about one political party or another is to miss the cold truth---that there is very little difference between the parties, except a difference in pandering rhetoric. The effort to promote effective legislation for the environment is not helped by thinking that the Democrats will save us and the Republicans won't. Political history is more complicated than that. Never forget which president started the EPA: Richard Nixon. And never forget which president sold federal oil leases, allowing oil drilling in Santa Barbara: Lyndon Johnson. So get politics out of your thinking about the environment.

The second reason to abandon environmental religion is more pressing. Religions think they know it all, but the unhappy truth of the environment is that we are dealing with incredibly complex, evolving systems, and we usually are not certain how best to proceed. Those who are certain are demonstrating their personality type, or their belief system, not the state of their knowledge. Our record in the past, for example in managing national parks, is humiliating. Our fifty-year effort at forest-fire suppression is a well-intentioned disaster from which our forests will never recover. We need to be humble, deeply humble, in the face of what we are trying to accomplish. We need to be trying various methods of accomplishing things. We need to be open-minded about assessing results of our efforts, and we need to be flexible about balancing needs. Religions are good at none of these things.

How will we manage to get environmentalism out of the clutches of religion, and back to a scientific discipline? There's a simple answer: we must institute far more stringent requirements for what constitutes knowledge in the environmental realm. I am thoroughly sick of politicized so-called facts that simply aren't true. It isn't that these "facts" are exaggerations of an underlying truth. Nor is it that certain organizations are spinning their case to present it in the strongest way. Not at all---what more and more groups are putting out are lies, pure and simple. Falsehoods that they know to be false.

This trend began with the DDT campaign, and it persists to this day. At this moment, the EPA is hopelessly politicized. In the wake of Carol Browner, it is probably better to shut it down and start over. What we need is a new organization much closer to the FDA. We need an organization that will be ruthless about acquiring verifiable results, that will fund identical research projects to more than one group, and that will make everybody in this field get honest fast.

Because in the end, science offers us the only way out of politics. And if we allow science to become politicized, then we are lost. We will enter the Internet version of the dark ages, an era of shifting fears and wild prejudices, transmitted to people who don't know any better. That's not a good future for the human race. That's our past. So it's time to abandon the religion of environmentalism, and return to the science of environmentalism, and base our public policy decisions firmly on that.

Thank you very much.

Wednesday, September 28, 2005

Bush urges Congress to help US refineries expand
26 Sep 2005 20:42:43 GMT

Source: Reuters

By Chris Baltimore

WASHINGTON, Sept 26 (Reuters) - President George W. Bush on Monday urged Congress to clear away regulatory obstacles that prevent building new U.S. oil refineries, a move certain to trigger a new fight with environmental groups and Democrats.

Hurricane Rita's 120 mile per hour winds last weekend knocked out two Texas refineries for up to a month. That came on top of 5 percent of U.S. Gulf Coast refining capacity that remains offline from Hurricane Katrina in August.

"The storms have shown how fragile the balance is between supply and demand in America," Bush said. "We need more refining capacity."

No new U.S. refinery has been built since 1976. U.S. gasoline demand has grown to over 9 million barrels per day (bpd) but a maze of permitting requirements and landowner objections has blocked new projects.

Bush specifically cited as a roadblock the "new source review" rule administered by the Environmental Protection Agency as part of the Clean Air Act. It aims to protect public health and ensure that refinery expansions do not increase air pollution from substances linked to acid rain and smog.

"The issue of new source review, for example, is one that we've reviewed and said that, for the sake of ... wise and careful expansion of refining capacity, we ought to look at those rules and regulations," Bush told reporters after meeting with Energy Department officials about hurricane damage.

Environmental groups have opposed earlier moves by the Bush administration to change the EPA new source review rule.

"It's clear that the president and his allies in the House are using Katrina as cover for ramming through proposals to weaken the Clean Air Act," said Kevin Curtis, vice president of the National Environmental Trust.

Democrats are also skeptical. They say oil companies are disinclined to build new plants because tight capacity keeps profits healthy. However, expanding existing plants is a less costly way to gain extra gasoline production -- and could become even cheaper if the "new source review" rule is gutted.

"This rush to push through legislation of dubious virtue without any significant review is both unnecessary and unwise," said Rep. John Dingell, senior Democrat on the House Energy Committee.

REPUBLICANS OFFER BILLS

Since Katrina blasted the Louisiana coast on Aug. 29, Republican lawmakers have proposed a raft of legislation aimed at boosting refining capacity.

Joe Barton, chairman of the House Energy Committee, is expected to hold a bill-drafting session on Wednesday to add about 2 million bpd in refining capacity from both new plants and expansion projects. Another House bill, offered by John Shadegg of Arizona, would require the government to provide risk insurance to six new refineries.

In the Senate, Jon Kyl of Arizona is pursuing tax breaks to encourage new or expanded U.S. refineries.

And EPA officials are drafting legislation that would give the agency broad power to suspend the Clean Air Act -- including the new source review rule -- to respond to natural disasters, according to Democratic Rep. Henry Waxman.

Bush said he supports such efforts.

"I look forward to working with Congress, as we analyze the energy situation, to expedite the capacity of our refiners to expand and/or build new refineries," he said.

Building a new refinery would take up to 10 years and $3 billion under existing rules, according to the National Petrochemical and Refiners Association.

"New, grass-roots refineries are not necessarily the best answer for the industry," Exxon Mobil Chairman Lee Raymond told CNBC television. "What we really need is a streamlining of the regulations." He did not elaborate.

Barton's bill would require the EPA to change pollution rules to give refiners "maximum legal flexibility" available under existing law to expand and retool their plants.

Thursday, September 22, 2005

Atta files destroyed by Pentagon
By Bill Gertz
THE WASHINGTON TIMES
September 22, 2005


Pentagon lawyers during the Clinton administration ordered the destruction of intelligence reports that identified September 11 leader Mohamed Atta months before the attacks on the Pentagon and World Trade Center, according to congressional testimony yesterday.

A lawyer for two Pentagon whistleblowers also told the Senate Judiciary Committee yesterday that the Defense Intelligence Agency last year destroyed files on the Army's computer data-mining program known as Able Danger to avoid disclosing the information.

Retired Army Maj. Erik Kleinsmith, former director of the Army Land Information Warfare Center, told the panel he was directed by Pentagon lawyers to delete 2.5 terabytes of computer data -- the equivalent of one-quarter of the information in the Library of Congress -- on Able Danger in May or June 2000 because of legal concerns about information on U.S. citizens.

Maj. Kleinsmith said keeping the data beyond 90 days would have violated an Army directive limiting the collection of information on U.S. citizens.

"Yes, I could have conveniently forgot to delete the data, and we could have kept it," Maj. Kleinsmith said. "But I knowingly would have been in violation of the regulation."

The attorney for two Pentagon officials involved in Able Danger testified that the program did not identify Atta as being in the United States, only that he was linked by analysts to an al Qaeda cell in Brooklyn, N.Y.

"At no time did Able Danger identify Mohamed Atta as being physically present in the United States," said Mark Zaid, who represents Army Reserve Lt. Col. Anthony Shaffer, an intelligence analyst, and J.D. Smith, a defense official, who both claim Able Danger data was mishandled.

"And no information at the time that they obtained would have led anyone to believe that criminal activity had taken place or that any specific terrorist activities were being planned. All they developed were associational links."

Mr. Zaid said Able Danger-related data compiled by Orion Scientific, possibly including a chart containing a photo of Atta, was destroyed by DIA some time in the spring of 2004 after the official who held the material had his security clearance revoked.

The Senate hearing included testimony from Rep. Curt Weldon, Pennsylvania Republican, who first went public with information that the Army intelligence unit had uncovered information on Atta in Brooklyn, and three of the other September 11 suicide hijackers, in 2000 through the computer-based program that sifted both secret intelligence and unclassified databases for information.

"Over the past three months, I have witnessed denial, deception, threats to [Defense Department] employees, character assassination, and now silence," said Mr. Weldon.

He said that if the information had been handled properly "it might have had an impact on the most significant attack ever against our country and our citizens." He charged that the government commission that investigated September 11 had overlooked the Able Danger material on Atta.

A recent Pentagon inquiry into the matter found no reports linking Atta to a Brooklyn al Qaeda cell. However, investigators uncovered one report linking al Qaeda leader Mohammed Atef to Islamists in Brooklyn. Atef was killed in Afghanistan in 2001.

Mr. Weldon said he thinks Able Danger was shut down after a "profile" of Chinese weapons proliferation linked two Americans to Chinese students at Stanford University engaged in technology acquisition for China.

During the profile, the names of Secretary of State Condoleezza Rice, at the time the Stanford University provost, and former Defense Secretary William Perry were mentioned in the data and created "a wave of controversy," he said.

After Congress sought the data, "tremendous pressure was placed on the Army, because this was a prototype operation, and they shut down the Able Danger operation," Mr. Weldon said.




Friday, September 16, 2005

MISUNDERESTIMATED:

1993, Summer - Mocked for trying to unseat Governor Ann Richards.

1994, Nov. – Defeats Governor Ann Richards to become Governor of Texas.

1998, Nov. – Becomes first Texas Governor to win re-election.

1999, Fall – Mocked for announcing a run for the Presidency.

2000, Winter – The media nominates John McCain for the Republican Party.

2000, Oct. – Mocked by media before three debates with Al Gore.

2000, Oct. – Wins all three debates against Al Gore.

2000, Nov. – Dirty trick unleashed by Gore Campaign and media.

2000, Nov. – Dan Rather calls Florida for Gore one hour before polls close.

2000, Nov. – Bush Wins election when Katherine Harris certifies Florida’s election.

2000, Nov. – Democrats try to steal election through courts.

2000, Dec. – Supreme Court stops Democrats from stealing the election.

2001, Spring – Bush gets his first round of tax cuts passed.

2001, Summer – Jim Jeffords hands Senate control to Tom Daschle and the Democrats.

2001, Aug. – Bush Job Approval hits all-time low according to lib media polls.

2001, Sep. – 9/11/01 terrorist attacks destroy WTC and define Bush Presidency.

2001, Sep. – Bush has bullhorn moment at the WTC.

2001, Sep. – Bush galvanizes the nation in his speech before a joint session of Congress.

2001, Oct. – Democrats say Bush is dragging his feet on responding to the attacks.

2001, Oct. – The U.S. Military begins destroying the Taliban the next day.

2001, Oct. – Democrats say the War in Afghanistan is a quagmire in week one.

2002, Spring – Media and Democrats say Bush knew about 9-11 before it happened.

2002, Summer – Bush begins debate on removing Saddam Hussein.

2002, Sep. – Democrats say Bush is dragging his feet on dealing with Iraq.

2002, Sep. - Democrats demand Homeland Security Department.

2002, Nov. – Bush bets his popularity and the GOP wins back Senate, gains in House.

2002, Nov. – Homeland Security Act of 2002 passes to create new department.

2002, Dec. – Congress passes the Iraq War Resolution. Most Democrats support it.

2003, Mar. – War in Iraq begins.

2003, Apr. – Democrats call Iraq a quagmire one week after war starts.

2003, Apr. – Baghdad falls.

2003, Apr. – Media focuses on looting of Museum. Turns out most artifacts are fine.

2003, May – Bush gets 2nd round of tax cuts passed with the GOP Senate he helped elect.

2003, July – ‘Bad tan’ Joe Wilson becomes media darling when he lies about Niger.

2003, July – Uday and Qusay take permanent dirt nap.

2003, Aug. – Foreign terrorists begin car bombing. Media calls foreigners, insurgents.

2003, Sep. – Bush Job Approval hits all-time low, lower than the previous low in Aug. 2001.

2003, Dec. – Saddam Hussein is captured near Tikrit. Democrats cry.

2004, Jan. – Dean implodes. Kerry becomes ‘electable’ savior.

2004, Jan. – David Kay becomes media darling with his WMD testimony.

2004, Jan. – Paul ‘mumbles’ O’Neill gets 60 Minutes red carpet.

2004, Jan. – Bush Job Approval hits new all-time low according to lib media polls.

2004, Feb. – Richard Clarke gets 60 Minutes red carpet. The horror. The horror.

2004, Mar. – Bush Job Approval hits new all-time low according to lib media polls.

2004, May – Abu Ghraib photos are paraded on 60 Minutes Wednesday.

2004, May – Bush Job Approval hits new all-time low according to lib media polls.

2004, June – 9-11 Commission becomes platform for the Jersey girls to bash Bush.

2004, Summer – Fahrenheit 9-11, an anti-American propaganda film, becomes a media hit.

2004, Aug. – Anti-Bush liberals led by the very fat Michael Moore march in NYC.

2004, Sep. – Bush ends convention with a speech that crushes Democrats’ hopes.

2004, Sep. – 60 Minutes Wednesday gives America Memo-gate with a story on Bush.

2004, Sep. – 1,000th soldier dies in Iraq War. Democrats and media celebrate.

2004, Oct. – Bush mocked for poor performances against John Kerry in debates.

2004, Oct. – Afghanistan holds a successful First Presidential Election.

2004, Oct. – NY Times puts out fake story on missing ammo in Iraq.

2004, Oct. – Osama Bin Laden endorses John Kerry.

2004, Nov. – Fake Exit Polls produced by the AP on Election Day to discourage GOP.

2004, Nov. – Bush Wins Re-election 51 – 48 with over 62 Million votes.

2004, Nov. – GOP makes huge gains in both the Senate and the House.

2004, Nov. – Insane liberals claim the election in Ohio was stolen.

2005, Jan. – Sen. Barbara Boxer embarrasses herself by protesting the Election Results.

2005, Jan. – Democrats say the election in Iraq will be a blood bath.

2005, Jan. – 8 Million Iraqis vote. Their turnout nearly matches ours.

2005, Apr. – Lebanon defies Syria and moves toward kicking its forces out.

2005, Spring – Terrorists begin large car bombing campaign in Iraq. Democrats celebrate.

2005, July – Bush Job Approval hits new all-time low according to lib media polls.

2005, July – Media puts Karl Rove in jail. Media resurrects ‘Bad tan’ Joe Wilson.

2005, Aug. – Media gives bullhorn to Cindy Sheehan.

2005, Aug. – Iraqis create their First Constitution.

2005, Aug. – Hurricane Katrina blows Cindy Sheehan off the map.

2005, Sep. – Media and Democrats blame Bush for the delay in the response.

2005, Sep. – Bush sends General Honore to take control. The military succeeds.

2005, Sep. – Bush delivers speech that uplifts America and demoralizes Democrats.

2005, Sep. – On the verge of having John Roberts confirmed. The Teflon Bork.

IV

Elites throughout the West are living a lie, basing the futures of their societies on the assumption that all groups of people are equal in all respects. Lie is a strong word, but justified. It is a lie because so many elite politicians who profess to believe it in public do not believe it in private. It is a lie because so many elite scholars choose to ignore what is already known and choose not to inquire into what they suspect. We enable ourselves to continue to live the lie by establishing a taboo against discussion of group differences.

The taboo is not perfect—otherwise, I would not have been able to document this essay—but it is powerful. Witness how few of Harvard’s faculty who understood the state of knowledge about sex differences were willing to speak out during the Summers affair. In the public-policy debate, witness the contorted ways in which even the opponents of policies like affirmative action frame their arguments so that no one can accuse them of saying that women are different from men or blacks from whites. Witness the unwillingness of the mainstream media to discuss group differences without assuring readers that the differences will disappear when the world becomes a better place.

The taboo arises from an admirable idealism about human equality. If it did no harm, or if the harm it did were minor, there would be no need to write about it. But taboos have consequences.

The nature of many of the consequences must be a matter of conjecture because people are so fearful of exploring them.76 Consider an observation furtively voiced by many who interact with civil servants: that government is riddled with people who have been promoted to their level of incompetence because of pressure to have a staff with the correct sex and ethnicity in the correct proportions and positions. Are these just anecdotes? Or should we be worrying about the effects of affirmative action on the quality of government services?77 It would be helpful to know the answers, but we will not so long as the taboo against talking about group difference prevails.

How much damage has the taboo done to the education of children? Christina Hoff Sommers has argued that willed blindness to the different developmental patterns of boys and girls has led many educators to see boys as aberrational and girls as the norm, with pervasive damage to the way our elementary and secondary schools are run.78 Is she right? Few have been willing to pursue the issue lest they be required to talk about innate group differences. Similar questions can be asked about the damage done to medical care, whose practitioners have only recently begun to acknowledge the ways in which ethnic groups respond differently to certain drugs.79

How much damage has the taboo done to our understanding of America’s social problems? The part played by sexism in creating the ratio of males to females on mathematics faculties is not the raw ratio we observe but what remains after adjusting for male-female differences in high-end mathematical ability. The part played by racism in creating different outcomes in black and white poverty, crime, and illegitimacy is not the raw disparity we observe but what remains after controlling for group characteristics. For some outcomes, sex or race differences nearly disappear after a proper analysis is done. For others, a large residual difference remains.80 In either case, open discussion of group differences would give us a better grasp on where to look for causes and solutions.



What good can come of raising this divisive topic? The honest answer is that no one knows for sure. What we do know is that the taboo has crippled our ability to explore almost any topic that involves the different ways in which groups of people respond to the world around them—which means almost every political, social, or economic topic of any complexity.

Thus my modest recommendation, requiring no change in laws or regulations, just a little more gumption. Let us start talking about group differences openly—all sorts of group differences, from the visuospatial skills of men and women to the vivaciousness of Italians and Scots. Let us talk about the nature of the manly versus the womanly virtues. About differences between Russians and Chinese that might affect their adoption of capitalism. About differences between Arabs and Europeans that might affect the assimilation of Arab immigrants into European democracies. About differences between the poor and non-poor that could inform policy for reducing poverty.

Even to begin listing the topics that could be enriched by an inquiry into the nature of group differences is to reveal how stifled today’s conversation is. Besides liberating that conversation, an open and undefensive discussion would puncture the irrational fear of the male-female and black-white differences I have surveyed here. We would be free to talk about other sexual and racial differences as well, many of which favor women and blacks, and none of which is large enough to frighten anyone who looks at them dispassionately.

Talking about group differences does not require any of us to change our politics. For every implication that the Right might seize upon (affirmative-action quotas are ill-conceived), another gives fodder to the Left (innate group differences help rationalize compensatory redistribution by the state).81 But if we do not need to change our politics, talking about group differences obligates all of us to renew our commitment to the ideal of equality that Thomas Jefferson had in mind when he wrote as a self-evident truth that all men are created equal. Steven Pinker put that ideal in today’s language in The Blank Slate, writing that “Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group.”82

Nothing in this essay implies that this moral principle has already been realized or that we are powerless to make progress. In elementary and secondary education, many outcomes are tractable even if group differences in ability remain unchanged. Dropout rates, literacy, and numeracy are all tractable. School discipline, teacher performance, and the quality of the curriculum are tractable. Academic performance within a given IQ range is tractable. The existence of group differences need not and should not discourage attempts to improve schooling for millions of American children who are now getting bad educations.

In university education and in the world of work, overall openness of opportunity has been transformed for the better over the last half-century. But the policies we now have in place are impeding, not facilitating, further progress. Creating double standards for physically demanding jobs so that women can qualify ensures that men in those jobs will never see women as their equals. In universities, affirmative action ensures that the black-white difference in IQ in the population at large is brought onto the campus and made visible to every student. The intentions of their designers notwithstanding, today’s policies are perfectly fashioned to create separation, condescension, and resentment—and so they have done.

The world need not be that way. Any university or employer that genuinely applied a single set of standards for hiring, firing, admitting, and promoting would find that performance across different groups really is distributed indistinguishably. But getting to that point nationwide will require us to jettison an apparatus of laws, regulations, and bureaucracies that has been 40 years in the making. That will not happen until the conversation has opened up. So let us take one step at a time. Let us stop being afraid of data that tell us a story we do not want to hear, stop the name-calling, stop the denial, and start facing reality.

CHARLES MURRAY is the W.H. Brady Scholar in Freedom and Culture at the American Enterprise Institute. His previous contributions to COMMENTARY, available online, include “The Bell Curve and Its Critics” (May 1995, with a subsequent exchange in the August 1995 issue).


III

Turning to race, we must begin with the fraught question of whether it even exists, or whether it is instead a social construct. The Harvard geneticist Richard Lewontin originated the idea of race as a social construct in 1972, arguing that the genetic differences across races were so trivial that no scientist working exclusively with genetic data would sort people into blacks, whites, or Asians. In his words, “racial classification is now seen to be of virtually no genetic or taxonomic significance.”25

Lewontin’s position, which quickly became a tenet of political correctness, carried with it a potential means of being falsified. If he was correct, then a statistical analysis of genetic markers would not produce clusters corresponding to common racial labels.

In the last few years, that test has become feasible, and now we know that Lewontin was wrong.26 Several analyses have confirmed the genetic reality of group identities going under the label of race or ethnicity.27 In the most recent, published this year, all but five of the 3,636 subjects fell into the cluster of genetic markers corresponding to their self-identified ethnic group.28 When a statistical procedure, blind to physical characteristics and working exclusively with genetic information, classifies 99.9 percent of the individuals in a large sample in the same way they classify themselves, it is hard to argue that race is imaginary.

Homo sapiens actually falls into many more interesting groups than the bulky ones known as “races.”29 As new findings appear almost weekly, it seems increasingly likely that we are just at the beginning of a process that will identify all sorts of genetic differences among groups, whether the groups being compared are Nigerian blacks and Kenyan blacks, lawyers and engineers, or Episcopalians and Baptists. At the moment, the differences that are obviously genetic involve diseases (Ashkenazi Jews and Tay-Sachs disease, black Africans and sickle-cell anemia, Swedes and hemochromatosis). As time goes on, we may yet come to understand better why, say, Italians are more vivacious than Scots.

Out of all the interesting and intractable differences that may eventually be identified, one in particular remains a hot button like no other: the IQ difference between blacks and whites. What is the present state of our knowledge about it?

There is no technical dispute on some of the core issues. In the aftermath of The Bell Curve, the American Psychological Association established a task force on intelligence whose report was published in early 1996.30 The task force reached the same conclusions as The Bell Curve on the size and meaningfulness of the black-white difference. Historically, it has been about one standard deviation31 in magnitude among subjects who have reached adolescence;32 cultural bias in IQ tests does not explain the difference; and the tests are about equally predictive of educational, social, and economic outcomes for blacks and whites. However controversial such assertions may still be in the eyes of the mainstream media, they are not controversial within the scientific community.

The most important change in the state of knowledge since the mid-1990’s lies in our increased understanding of what has happened to the size of the black-white difference over time. Both the task force and The Bell Curve concluded that some narrowing had occurred since the early 1970’s. With the advantage of an additional decade of data, we are now able to be more precise: (1) The black-white difference in scores on educational achievement tests has narrowed significantly. (2) The black-white convergence in scores on the most highly “g-loaded” tests—the tests that are the best measures of cognitive ability—has been smaller, and may be unchanged, since the first tests were administered 90 years ago.



With regard to the difference in educational achievement, the narrowing of scores on major tests occurred in the 1970’s and 80’s. In the case of the SAT, the gaps in the verbal and math tests as of 1972 were 1.24 and 1.26 standard deviations respectively.33 By 1991, when the gaps were smallest (they have risen slightly since then), those numbers had dropped by .37 and .35 standard deviations.

The National Assessment of Educational Progress (NAEP), which is not limited to college-bound students, is preferable to the SAT for estimating nationally representative trends, but the story it tells is similar.34 Among students ages nine, thirteen, and seventeen, the black-white differences in math as of the first NAEP test in 1973 were 1.03, 1.29, and 1.24 standard deviations respectively. For nine-year-olds, the difference hit its all-time low of .73 standard deviations in 2004, a drop of .30 standard deviations. But almost all of that convergence had been reached by 1986, when the gap was .78 standard deviations. For thirteen-year-olds, the gap dropped by .45 standard deviations, reaching its low in 1986. For seventeen-year-olds, the gap dropped by .52 standard deviations, reaching its low in 1990.

In the reading test, the comparable gaps for ages nine, thirteen, and seventeen as of the first NAEP test in 1971 were 1.12, 1.17, and 1.25 standard deviations. Those gaps had shrunk by .38, .62, and .68 standard deviations respectively at their lowest points in 1988.35 They have since remained effectively unchanged.

An analysis by Larry Hedges and Amy Nowell uses a third set of data, examining the trends for high-school seniors by comparing six large databases from different time periods from 1965 to 1992. The black-white difference on a combined measure of math, vocabulary, and reading fell from 1.18 to .82 standard deviations in that time, a reduction of .36 standard deviations.36

So black and white academic achievement converged significantly in the 1970’s and 1980’s, typically by more than a third of a standard deviation, and since then has stayed about the same.37 What about convergence in tests explicitly designed to measure IQ rather than academic achievement?38 The ambiguities in the data leave two defensible positions. The first is that the IQ difference is about one standard deviation, effectively unchanged since the first black-white comparisons 90 years ago. The second is that harbingers of a narrowing difference are starting to emerge. I cannot settle the argument here, but I can convey some sense of the uncertainty.



The case for an unchanged black-white IQ difference is straightforward. If you take all the black-white differences on IQ tests from the first ones in World War I up to the present, there is no statistically significant downward trend. Of course the results vary, because tests vary in the precision with which they measure the general mental factor (g) and samples vary in their size and representativeness. But results continue to center around a black-white difference of about 1.0 to 1.1 standard deviations through the most recent data.39

The case for a reduction has two important recent results to work with. The first is from the 1997 re-norming of the Armed Forces Qualification Test (AFQT), which showed a black-white difference of .97 standard deviations.40 Since the typical difference on paper-and-pencil IQ tests like the AFQT has been about 1.10 standard deviations, the 1997 results represent noticeable improvement.41 The second positive result comes from the 2003 standardization sample for the Wechsler Intelligence Scale for Children (WISC-IV), which showed a difference of .78 standard deviations, as against the 1.0 difference that has been typical for individually administered IQ tests.42

One cannot draw strong conclusions from two data points. Those who interpret them as part of an unchanging overall pattern can cite another recent result, from the 2001 standardization of the Woodcock-Johnson intelligence test. In line with the conventional gap, it showed an overall black-white difference of 1.05 standard deviations and, for youths aged six to eighteen, a difference of .99 standard deviations.43

There is more to be said on both sides of this issue, but nothing conclusive.44 Until new data become available, you may take your choice. If you are a pessimist, the gap has been unchanged at about one standard deviation. If you are an optimist, the IQ gap has decreased by a few points, but it is still close to one standard deviation. The clear and substantial convergence that occurred in academic tests has at best been but dimly reflected in IQ scores, and at worst not reflected at all.



Whether we are talking about academic achievement or about IQ, are the causes of the black-white difference environmental or genetic? Everyone agrees that environment plays a part. The controversy is about whether biology is also involved.

It has been known for many years that the obvious environmental factors such as income, parental occupation, and schools explain only part of the absolute black-white difference and none of the relative difference. Black and white students from affluent neighborhoods are separated by as large a proportional gap as are blacks and whites from poor neighborhoods.45 Thus the most interesting recent studies of environmental causes have worked with cultural explanations instead of socioeconomic status.46

One example is Black American Students in an Affluent Suburb: A Study of Academic Disengagement (2003) by the Berkeley anthropologist John Ogbu, who went to Shaker Heights, Ohio, to explore why black students in an affluent suburb should lag behind their white peers.47 Another is Black Rednecks and White Liberals (2005) by Thomas Sowell, who makes the case that what we think of as the dysfunctional aspects of urban black culture are a legacy not of slavery but of Southern and rural white “cracker” culture.48 Both Ogbu and Sowell describe ingrained parental behaviors and student attitudes that must impede black academic performance. These cultural influences often cut across social classes.

From a theoretical standpoint, the cultural explanations offer fresh ways of looking at the black-white difference at a time when the standard socioeconomic explanations have reached a dead end. From a practical standpoint, however, the cultural explanations point to a cause of the black-white difference that is as impervious to manipulation by social policy as causes rooted in biology. If there is to be a rapid improvement, some form of mass movement with powerful behavioral consequences would have to occur within the black community. Absent that, the best we can hope for is gradual cultural change that is likely to be measured in decades.

This brings us to the state of knowledge about genetic explanations. “There is not much direct evidence on this point,” said the American Psychological Association’s task force dismissively, “but what little there is fails to support the genetic hypothesis.”49 Actually, there is no direct evidence at all, just a wide variety of indirect evidence, almost all of which the task force chose to ignore.50

As it happens, a comprehensive survey of that evidence, and of the objections to it, appeared this past June in the journal Psychology, Public Policy, and Law. There, J. Philippe Rushton and Arthur Jensen co-authored a 60-page article entitled “Thirty Years of Research on Race Differences in Cognitive Ability.”51 It incorporates studies of East Asians as well as blacks and whites and concludes that the source of the black-white-Asian difference is 50- to 80-percent genetic. The same issue of the journal includes four commentaries, three of them written by prominent scholars who oppose the idea that any part of the black-white difference is genetic.52 Thus, in one place, you can examine the strongest arguments that each side in the debate can bring to bear.

Rushton and Jensen base their conclusion on ten categories of evidence that are consistent with a model in which both environment and genes cause the black-white difference and inconsistent with a model that requires no genetic contribution.53 I will not try to review their argument here, or the critiques of it. All of the contributions can be found on the Internet, and can be understood by readers with a grasp of basic statistical concepts.54

For those who consider it important to know what percentage of the IQ difference is genetic, a methodology that would do the job is now available. In the United States, few people classified as black are actually of 100-percent African descent (the average American black is thought to be about 20-percent white).55 To the extent that genes play a role, IQ will vary by racial admixture. In the past, studies that have attempted to test this hypothesis have had no accurate way to measure the degree of admixture, and the results have been accordingly muddy.56 The recent advances in using genetic markers solve that problem. Take a large sample of racially diverse people, give them a good IQ test, and then use genetic markers to create a variable that no longer classifies people as “white” or “black,” but along a continuum. Analyze the variation in IQ scores according to that continuum. The results would be close to dispositive.57
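As a purely illustrative sketch of the kind of analysis proposed above, the following simulates such a study end to end with made-up data. The sample size, the admixture continuum, the noise level, and especially the assumed slope are all fabricated demonstration parameters, not empirical estimates of anything.

```python
# Hypothetical sketch of the proposed admixture-continuum analysis.
# Every number here is simulated; ASSUMED_SLOPE is an arbitrary
# demonstration parameter, not an empirical finding.
import random

random.seed(42)

n = 5000
# In the real proposal this continuum would be estimated from genetic
# markers; here it is just a uniform draw on [0, 1].
admixture = [random.random() for _ in range(n)]

# Simulated IQ: baseline mean, an assumed linear effect of admixture,
# and individual noise with the conventional SD of 15.
ASSUMED_SLOPE = 5.0
iq = [100.0 + ASSUMED_SLOPE * a + random.gauss(0.0, 15.0) for a in admixture]

# Ordinary least-squares slope of IQ on the continuum:
# cov(admixture, iq) / var(admixture).
mean_a = sum(admixture) / n
mean_iq = sum(iq) / n
cov = sum((a - mean_a) * (y - mean_iq) for a, y in zip(admixture, iq)) / n
var_a = sum((a - mean_a) ** 2 for a in admixture) / n
slope = cov / var_a
print(f"estimated slope: {slope:.2f} IQ points per unit of admixture")
```

The design point is simply that a continuum variable, unlike a binary racial label, lets the regression speak to the question directly: under a null of no genetic contribution, the estimated slope would hover near zero.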



None of this is important for social policy, however, where the issue is not the source of the difference but its intractability. Much of the evidence reviewed by Rushton and Jensen bears on what we can expect about future changes in the black-white IQ difference. My own thinking on this issue is shaped by the relationship of the difference to a factor I have already mentioned—“g”—and to the developing evidence for g’s biological basis.

When you compare black and white mean scores on a battery of subtests, you do not find a uniform set of differences; nor do you find a random assortment. The size of the difference varies systematically by type of subtest. Asked to predict which subtests show the largest difference, most people will think first of ones that have the most cultural content and are the most sensitive to good schooling. But this natural expectation is wrong. Some of the largest differences are found on subtests that have little or no cultural content, such as ones based on abstract designs.

As long ago as 1927, Charles Spearman, the pioneer psychometrician who discovered g, proposed a hypothesis to explain the pattern: the size of the black-white difference would be “most marked in just those [subtests] which are known to be saturated with g.”58 In other words, Spearman conjectured that the black-white difference would be greatest on tests that were the purest measures of intelligence, as opposed to tests of knowledge or memory.

A concrete example illustrates how Spearman’s hypothesis works. Two items in the Wechsler and Stanford-Binet IQ tests are known as “forward digit span” and “backward digit span.” In the forward version, the subject repeats a random sequence of one-digit numbers given by the examiner, starting with two digits and adding another with each iteration. The subject’s score is the number of digits that he can repeat without error on two consecutive trials. Digits-backward works exactly the same way except that the digits must be repeated in the opposite order.

Digits-backward is much more g-loaded than digits-forward. Try it yourself and you will see why. Digits-forward is a straightforward matter of short-term memory. Digits-backward makes your brain work much harder.59
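To make the procedure concrete, here is a toy simulation of the two tasks as just described. The “subject” is a deterministic stand-in with an assumed span limit in each direction; the chosen limits (seven forward, five backward) are illustrative assumptions, not measured values.

```python
# Toy simulation of the forward and backward digit-span tasks.
# The subject model and span limits are invented for illustration.
import random

random.seed(0)

def run_digit_span(span_limit, backward=False, max_len=12):
    """Start at two digits and add one per round; the score is the
    longest length repeated without error on two consecutive trials."""
    score = 0
    for length in range(2, max_len + 1):
        passed_both = True
        for _trial in range(2):
            digits = [random.randint(0, 9) for _ in range(length)]
            target = list(reversed(digits)) if backward else digits
            # Simulated subject: perfect recall up to its span limit,
            # failure beyond it.
            response = target if length <= span_limit else []
            if response != target:
                passed_both = False
        if not passed_both:
            break
        score = length
    return score

# Assumed limits: reversing the digits taxes working memory harder,
# so the backward limit is set lower.
print("forward span:", run_digit_span(span_limit=7))                  # 7
print("backward span:", run_digit_span(span_limit=5, backward=True))  # 5
```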

The black-white difference in digits-backward is about twice as large as the difference in digits-forward.60 It is a clean example of an effect that resists cultural explanation. It cannot be explained by differential educational attainment, income, or any other socioeconomic factor. Parenting style is irrelevant. Reluctance to “act white” is irrelevant. Motivation is irrelevant. There is no way that any of these variables could systematically encourage black performance in digits-forward while depressing it in digits-backward in the same test at the same time with the same examiner in the same setting.61

In 1980, Arthur Jensen began a research program for testing Spearman’s hypothesis. In his book The g Factor (1998), he summarized the results from seventeen independent sets of data, derived from 149 psychometric tests. They consistently supported Spearman’s hypothesis.62 Subsequent work has added still more evidence.63 Debate continues about what the correlation between g-loadings and the size of the black-white difference means, but the core of Spearman’s original conjecture, that a sizable correlation would be found to exist, has been confirmed.64

During the same years that Jensen was investigating Spearman’s hypothesis, progress was also being made in understanding g. For decades, psychometricians had tried to make g go away. Confident that intelligence must be more complicated than a single factor, they strove to replace g with measures of uncorrelated mental skills. They thereby made valuable contributions to our understanding of intelligence, which really does manifest itself in different ways and with different profiles, but getting rid of g proved impossible. No matter how the data were analyzed, a single factor kept dominating the results.65

By the 1980’s, the robustness and value of g as an explanatory construct were broadly accepted among psychometricians, but little was known about its physiological basis.66 As of 2005, we know much more. It is now established that g is by far the most heritable component of IQ.67 A variety of studies have found correlations between g and physiological phenomena such as brain-evoked potentials, brain pH levels, brain glucose metabolism, nerve-conduction velocity, and reaction time.68 Most recently, it has been determined that a highly significant relationship exists between g and the volume of gray matter in specific areas of the frontal cortex, and that the magnitude of the volume is under tight genetic control.69 In short, we now know that g captures something in the biology of the brain.



So Spearman’s basic conjecture was correct—the size of the black-white difference and g-loadings are correlated—and g represents a biologically grounded and highly heritable cognitive resource. When those two observations are put together, a number of characteristics of the black-white difference become predictable, correspond with phenomena we have observed in data, and give us reason to think that not much will change in the years to come.70

One implication is that black-white convergence on test scores will be greatest on tests that are least g-loaded. Literacy is the obvious example: people with a wide range of IQ’s can be taught to read competently, and it is the reading test of the NAEP in which convergence has reached its closest point (.55 standard deviations in the 1988 test). More broadly, the confirmation of Spearman’s hypothesis explains why the convergence that has occurred on academic achievement tests has not been matched on IQ tests.

A related implication is that the source of the black-white difference lies in skills that are hardest to change. Being able to repeat many digits backward has no value in itself. It points to a valuable underlying mental ability, in the same way that percentage of fast-twitch muscle fibers points to an underlying athletic ability. If you were to practice reciting digits backward for a few days, you could increase your score somewhat, just as training can improve your running speed somewhat. But in neither case will you have improved the underlying ability.71 As far as anyone knows, g itself cannot be coached.

The third implication is that the “Flynn effect” will not close the black-white difference. I am referring here to the secular increase in IQ scores over time, brought to public attention by James Flynn.72 The Flynn effect has been taken as a reason for thinking that the black-white difference is temporary: if IQ scores are so malleable that they can rise steadily for several decades, why should not the black-white difference be malleable as well?73

But as the Flynn effect has been studied over the last decade, the evidence has grown, and now seems persuasive, that the increases in IQ scores do not represent significant increases in g.74 What the increases do represent—whether increases in specific mental skills or merely increased test sophistication—is still being debated. But if the black-white difference is concentrated in g and if the Flynn effect does not consist of increases in g, the Flynn effect will not do much to close the gap. A 2004 study by Dutch scholars tested this question directly. Examining five large databases, the authors concluded that “the nature of the Flynn effect is qualitatively different from the nature of black-white differences in the United States,” and that “the implications of the Flynn effect for black-white differences appear small.”75

These observations represent my reading of a body of evidence that is incomplete, and they will surely have to be modified as we learn more. But taking the story of the black-white IQ difference as a whole, I submit that we know two facts beyond much doubt. First, the conventional environmental explanation of the black-white difference is inadequate. Poverty, bad schools, and racism, which seem such obvious culprits, do not explain it. Insofar as the environment is the cause, it is not the sort of environment we know how to change, and we have tried every practical remedy that anyone has been able to think of. Second, regardless of one’s reading of the competing arguments, we are left with an IQ difference that has, at best, narrowed by only a few points over the last century. I can find nothing in the history of this difference, or in what we have learned about its causes over the last ten years, to suggest that any faster change is in our future.

II

The technical literature documenting sex differences and their biological basis grew surreptitiously during feminism’s heyday in the 1970’s and 1980’s. By the 1990’s, it had become so extensive that the bibliography in David Geary’s pioneering Male, Female (1998) ran to 53 pages.2 Currently, the best short account of the state of knowledge is Steven Pinker’s chapter on gender in The Blank Slate (2002).3

Rather than present a telegraphic list of all the differences that I think have been established, I will focus on the narrower question at the heart of the Summers controversy: as groups, do men and women differ innately in characteristics that produce achievement at the highest levels of accomplishment? I will limit my comments to the arts and sciences.

Since we live in an age when students are likely to hear more about Marie Curie than about Albert Einstein, it is worth beginning with a statement of historical fact: women have played a proportionally tiny part in the history of the arts and sciences.4 Even in the 20th century, women got only 2 percent of the Nobel Prizes in the sciences—a proportion constant for both halves of the century—and 10 percent of the prizes in literature. The Fields Medal, the most prestigious award in mathematics, has been given to 44 people since it originated in 1936. All have been men.

The historical reality of male dominance of the greatest achievements in science and the arts is not open to argument. The question is whether the social and legal exclusion of women is a sufficient explanation for this situation, or whether sex-specific characteristics are also at work.

Mathematics offers an entry point for thinking about the answer. Through high school, girls earn better grades in math than boys, but the boys usually do better on standardized tests.5 The difference in means is modest, but the male advantage increases as the focus shifts from means to extremes. In a large sample of mathematically gifted youths, for example, seven times as many males as females scored in the top percentile of the SAT mathematics test.6 We do not have good test data on the male-female ratio at the top one-hundredth or top one-thousandth of a percentile, where first-rate mathematicians are most likely to be found, but collateral evidence suggests that the male advantage there continues to increase, perhaps exponentially.7

Evolutionary biologists have some theories that feed into an explanation for the disparity. In primitive societies, men did the hunting, which often took them far from home. Males with the ability to recognize landscapes from different orientations and thereby find their way back had a survival advantage. Men who could process trajectories in three dimensions—the trajectory, say, of a spear thrown at an edible mammal—also had a survival advantage.8 Women did the gathering. Those who could distinguish among complex arrays of vegetation, remembering which were the poisonous plants and which the nourishing ones, also had a survival advantage. Thus the logic for explaining why men should have developed elevated three-dimensional visuospatial skills and women an elevated ability to remember objects and their relative locations—differences that show up in specialized tests today.9

Perhaps this is a just-so story.10 Why not instead attribute the results of these tests to socialization? Enter the neuroscientists. It has been known for years that, even after adjusting for body size, men have larger brains than women. Yet most psychometricians conclude that men and women have the same mean IQ (although debate on this issue is growing).11 One hypothesis for explaining this paradox is that three-dimensional processing absorbs the extra male capacity. In the last few years, magnetic-resonance imaging has refined the evidence for this hypothesis, revealing that parts of the brain’s parietal cortex associated with space perception are proportionally bigger in men than in women.12

What does space perception have to do with scores on math tests?13 Enter the psychometricians, who demonstrate that when visuospatial ability is taken into account, the sex difference in SAT math scores shrinks substantially.14

Why should the difference be so much greater at the extremes than at the mean? Part of the answer is that men consistently exhibit higher variance than women on all sorts of characteristics, including visuospatial abilities, meaning that there are proportionally more men than women at both ends of the bell curve.15 Another part of the answer is that someone with a high verbal IQ can easily master the basic algebra, geometry, and calculus that make up most of the items in an ordinary math test. Elevated visuospatial skills are most useful for the most difficult items.16 If males have an advantage in answering those comparatively few really hard items, the increasing disparity at the extremes becomes explicable.
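
The variance argument can be sketched numerically. In this minimal illustration, both groups share the same mean, and the standard deviations (16 vs. 14) are hypothetical values chosen for the example, not figures from the cited studies; the point is only that a modest variance difference produces tail ratios that grow as the cutoff rises.

```python
from statistics import NormalDist

# Hypothetical illustration of higher variance at identical means.
# The sigmas are chosen for the example, not taken from any study.
higher_var = NormalDist(mu=100, sigma=16)
lower_var = NormalDist(mu=100, sigma=14)

for cutoff in (115, 130, 145, 160):
    # Fraction of each distribution scoring above the cutoff
    tail_hi = 1 - higher_var.cdf(cutoff)
    tail_lo = 1 - lower_var.cdf(cutoff)
    print(f"cutoff {cutoff}: high-variance/low-variance ratio "
          f"{tail_hi / tail_lo:.1f}")
```

Run as-is, the printed ratios climb steadily with the cutoff even though the two means are identical, which is the mechanism behind growing disparities "at the extremes" described above.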

Seen from one perspective, this pattern demonstrates what should be obvious: there is nothing inherent in being a woman that precludes high math ability. But there remains a distributional difference in male and female characteristics that leads to a larger number of men with high visuospatial skills. The difference has an evolutionary rationale, a physiological basis, and a direct correlation with math scores.



Now put all this alongside the historical data on accomplishment in the arts and sciences. In test scores, the male advantage is most pronounced in the most abstract items. Historically, too, it is most pronounced in the most abstract domains of accomplishment.17

In the humanities, the most abstract field is philosophy—and no woman has been a significant original thinker in any of the world’s great philosophical traditions. In the sciences, the most abstract field is mathematics, where the number of great women mathematicians is approximately two (Emmy Noether definitely, Sonya Kovalevskaya maybe). In the other hard sciences, the contributions of great women scientists have usually been empirical rather than theoretical, with leading cases in point being Henrietta Leavitt, Dorothy Hodgkin, Lise Meitner, Irène Joliot-Curie, and Marie Curie herself.

In the arts, literature is the least abstract and by far the most rooted in human interaction; visual art incorporates a greater admixture of the abstract; musical composition is the most abstract of all the arts, using neither words nor images. The role of women has varied accordingly. Women have been represented among great writers virtually from the beginning of literature, in East Asia and South Asia as well as in the West. Women have produced a smaller number of important visual artists, and none that is clearly in the first rank. No female composer is even close to the first rank. Social restrictions undoubtedly damped down women’s contributions in all of the arts, but the pattern of accomplishment that did break through is strikingly consistent with what we know about the respective strengths of male and female cognitive repertoires.

Women have their own cognitive advantages over men, many of them involving verbal fluency and interpersonal skills. If this were a comprehensive survey, detailing those advantages would take up as much space as I have devoted to a particular male advantage. But, sticking with my restricted topic, I will move to another aspect of male-female differences that bears on accomplishment at the highest levels of the arts and sciences: motherhood.



Regarding women, men, and babies, the technical literature is as unambiguous as everyday experience would lead one to suppose. As a rule, the experience of parenthood is more profoundly life-altering for women than for men. Nor is there anything unique about humans in this regard. Mammalian reproduction generally involves much higher levels of maternal than paternal investment in the raising of children.18 Among humans, extensive empirical study has demonstrated that women are more attracted to children than are men, respond to them more intensely on an emotional level, and get more and different kinds of satisfactions from nurturing them. Many of these behavioral differences have been linked with biochemical differences between men and women.19

Thus, for reasons embedded in the biochemistry and neurophysiology of being female, many women with the cognitive skills for achievement at the highest level also have something else they want to do in life: have a baby. In the arts and sciences, forty is the mean age at which peak accomplishment occurs, preceded by years of intense effort mastering the discipline in question.20 These are precisely the years during which most women must bear children if they are to bear them at all.

Among women who have become mothers, the possibilities for high-level accomplishment in the arts and sciences shrink because, for innate reasons, the distractions of parenthood are greater. To put it in a way that most readers with children will recognize, a father can go to work and forget about his children for the whole day. Hardly any mother can do this, no matter how good her day-care arrangement or full-time nanny may be. My point is not that women must choose between a career and children, but that accomplishment at the extremes commonly comes from a single-minded focus that leaves no room for anything but the task at hand.21 We should not be surprised or dismayed to find that motherhood reduces the proportion of highly talented young women who are willing to make that tradeoff.

Some numbers can be put to this observation through a study of nearly 2,000 men and women who were identified as extraordinarily talented in math at age thirteen and were followed up 20 years later.22 The women in the sample came of age in the 1970’s and early 1980’s, when women were actively socialized to resist gender stereotypes. In many ways, these talented women did resist. By their early thirties, both the men and women had become exceptional achievers, receiving advanced degrees in roughly equal proportions. Only about 15 percent of the women were full-time housewives. Among the women, those who did and those who did not have children were equally satisfied with their careers.

And yet. The women with careers were four-and-a-half times more likely than men to say they preferred to work fewer than 40 hours per week. The men placed greater importance on “being successful in my line of work” and “inventing or creating something that will have an impact,” while the women found greater value in “having strong friendships,” “living close to parents and relatives,” and “having a meaningful spiritual life.” As the authors concluded, “these men and women appear to have constructed satisfying and meaningful lives that took somewhat different forms.”23 The different forms, which directly influence the likelihood that men will dominate at the extreme levels of achievement, are consistent with a constellation of differences between men and women that have biological roots.

I have omitted perhaps the most obvious reason why men and women differ at the highest levels of accomplishment: men take more risks, are more competitive, and are more aggressive than women.24 The word “testosterone” may come to mind, and appropriately. Much technical literature documents the hormonal basis of personality differences that bear on sex differences in extreme and venturesome effort, and hence in extremes of accomplishment—and that bear as well on the male propensity to produce an overwhelming proportion of the world’s crime and approximately 100 percent of its wars. But this is just one more of the ways in which science is demonstrating that men and women are really and truly different, a fact so obvious that only intellectuals could ever have thought otherwise.

Note: What follows is a fully annotated version of the article that appears in the print edition of the September 2005 issue of COMMENTARY.


The Inequality Taboo

Charles Murray

When the late Richard Herrnstein and I published The Bell Curve eleven years ago, the furor over its discussion of ethnic differences in IQ was so intense that most people who have not read the book still think it was about race. Since then, I have deliberately not published anything about group differences in IQ, mostly to give the real topic of The Bell Curve—the role of intelligence in reshaping America’s class structure—a chance to surface.

The Lawrence Summers affair last January made me rethink my silence. The president of Harvard University offered a few mild, speculative, off-the-record remarks about innate differences between men and women in their aptitude for high-level science and mathematics, and was treated by Harvard’s faculty as if he were a crank. The typical news story portrayed the idea of innate sex differences as a renegade position that reputable scholars rejected.

It was depressingly familiar. In the autumn of 1994, I had watched with dismay as The Bell Curve’s scientifically unremarkable statements about black IQ were successfully labeled as racist pseudoscience. At the opening of 2005, I watched as some scientifically unremarkable statements about male-female differences were successfully labeled as sexist pseudoscience.

The Orwellian disinformation about innate group differences is not wholly the media’s fault. Many academics who are familiar with the state of knowledge are afraid to go on the record. Talking publicly can dry up research funding for senior professors and can cost assistant professors their jobs. But while the public’s misconception is understandable, it is also getting in the way of clear thinking about American social policy.

Good social policy can be based on premises that have nothing to do with scientific truth. The premise that is supposed to undergird all of our social policy, the founders’ assertion of an unalienable right to liberty, is not a falsifiable hypothesis. But specific policies based on premises that conflict with scientific truths about human beings tend not to work. Often they do harm.

One such premise is that the distribution of innate abilities and propensities is the same across different groups. The statistical tests for uncovering job discrimination assume that men are not innately different from women, blacks from whites, older people from younger people, homosexuals from heterosexuals, Latinos from Anglos, in ways that can legitimately affect employment decisions. Title IX of the Education Amendments of 1972 assumes that women are no different from men in their attraction to sports. Affirmative action in all its forms assumes there are no innate differences between any of the groups it seeks to help and everyone else. The assumption of no innate differences among groups suffuses American social policy. That assumption is wrong.

When the outcomes that these policies are supposed to produce fail to occur, with one group falling short, the fault for the discrepancy has been assigned to society. It continues to be assumed that better programs, better regulations, or the right court decisions can make the differences go away. That assumption is also wrong.

Hence this essay. Most of the following discussion describes reasons for believing that some group differences are intractable. I shift from “innate” to “intractable” to acknowledge how complex is the interaction of genes, their expression in behavior, and the environment. “Intractable” means that, whatever the precise partitioning of causation may be (we seldom know), policy interventions can only tweak the difference at the margins.

I will focus on two sorts of differences: between men and women and between blacks and whites. Here are three crucial points to keep in mind as we go along:

1. The differences I discuss involve means and distributions. In all cases, the variation within groups is greater than the variation between groups. On psychological and cognitive dimensions, some members of both sexes and all races fall everywhere along the range. One implication of this is that genius does not come in one color or sex, and neither does any other human ability. Another is that a few minutes of conversation with individuals you meet will tell you much more about them than their group membership does.

2. Covering both sex differences and race differences in a single, non-technical article, I had to leave out much in the print edition of this article. This online version is fully annotated and includes extensive supplementary material.

3. The concepts of “inferiority” and “superiority” are inappropriate to group comparisons. On most specific human attributes, it is possible to specify a continuum running from “low” to “high,” but the results cannot be combined into a score running from “bad” to “good.” What is the best score on a continuum measuring aggressiveness? What is the relative importance of verbal skills versus, say, compassion? Of spatial skills versus industriousness? The aggregate excellences and shortcomings of human groups do not lend themselves to simple comparisons. That is why the members of just about every group can so easily conclude that they are God’s chosen people. All of us use the weighting system that favors our group’s strengths.1

Monday, September 12, 2005

Ken Gorrell:
Katrina exposed America's harmful culture of dependence
By KEN GORRELL
Guest Commentary

WHAT IS a culture of dependence? It is generation after generation of families existing on direct government financial support and sapped of ambition to take care of their own immediate needs or prepare themselves for a better future. For those trapped in this dysfunctional culture, the normal cost-benefit equations of life don't apply. The government safety net becomes a smothering blanket, insulating citizens from the consequences of their actions while reinforcing the poisonous idea that the problems they create for themselves should become someone else's problem to solve.

This idea does not lead to a good or easy life, but it does enable a self-perpetuating existence, unhealthy for both society in general and specifically for those who make themselves wards of the state.

What happens when local and state elected officials — those layers of government most responsible for responding to the needs of local populations — fail in their basic duties to protect life, liberty and property? Members of the culture of dependence are hardest hit.

They are least able, by training, temperament, or resources, to act in their own best interests or to survive a breakdown in civil order. They are most in need of strong leadership and direct guidance. The self-synchronization of daily activities practiced by the broader population is, for them, an unlearned skill. An emergency is not the time to abandon them to the vagaries of fate.

I believe that Hurricane Katrina will be remembered as far more than a powerful storm. By exposing critical structural defects — and here I refer not to the failed levees, but to defects in society and in our relationship to government — Katrina was, metaphorically, the perfect storm: the collision of the culture of dependence with ineffectual state and local government.

Effective leadership must be decisive, reasonable and believable. Mayor Ray Nagin of New Orleans was none of these. He was not decisive in the days leading up to Katrina's landfall. His expectations of citizen compliance with the voluntary evacuation order were not reasonable. Too many of his citizens did not believe or heed his order. Mayor Nagin failed to use the resources at his disposal to best effect or implement his city's own published disaster plan.

Rather than take charge of his city after the storm passed, he spent much of his time blaming others for his failures. While it would be too much to expect a Rudy Giuliani in every city, even half a Giuliani in charge in New Orleans would have saved lives.

An analysis of Governor Kathleen Blanco's actions before and after Katrina hit yields the same depressing conclusion: leadership was in short supply in Louisiana. Those most dependent on government suffered the most, as they always do and always will.

One would prefer not to play the blame game while fellow Americans await rescue from the toxic stew of New Orleans or the devastation of the Gulf Coast. However, the usual suspects — the enablers of the culture of dependence — have already lined up to ensure that their noxious propaganda becomes ground truth even before the rebuilding efforts begin.

Part and parcel of their viewpoint is the idea that a "federal case" should be made out of everything regardless of Constitutional strictures. Concentrating power in the federal government is their entering argument for every policy debate, so for them this is simply politics as usual despite the unusual circumstances. By blaming Washington for the multi-layered failures in dealing effectively with Katrina, they concentrate efforts on finding a federal solution to what is by law first and foremost a local and state problem.

The process of learning from our mistakes should be nonpartisan. Of course it will not be. The usual critics of President Bush will concentrate on federal government actions because it is in their partisan interest to do so. Harsh truths will be buried under harsher rhetoric. The President has been criticized for "overstepping" authority by requesting federal power to access local library records in the pursuit of suspected terrorists bent on inflicting Katrina-like death tolls in our cities. These same voices criticize him now for not stepping into local disaster planning and preparedness. A typical liberal dichotomy: Demand federal intrusion contrary to law but hamstring federal efforts to accomplish clearly delineated duties.

It is easier to blame Washington for the consequences we bring upon ourselves when we fail to live up to our responsibilities as individuals or when we elect mediocre state and local officials such as those recently thrust into the national spotlight from New Orleans and Baton Rouge. But in a free society, we deserve what we tolerate.

Accepting the culture of dependence as a constant burden should not be tolerated. Changing it to a culture of self-reliance is a worthy long-term goal. We can start the process by demanding more leadership from our local elected representatives and expecting more from our fellow citizens.

Ken Gorrell is an insurance agent in Northfield.

TCS: Tech Central Station - Imperium Americanum? Hardly.

Thursday, September 08, 2005

Blame Amid the Tragedy
Gov. Blanco and Mayor Nagin failed their constituents.

BY BOB WILLIAMS
Wednesday, September 7, 2005 12:01 a.m. EDT

As the devastation of Hurricane Katrina continues to shock and sadden the nation, the question on many lips is, Who is to blame for the inadequate response?

As a former state legislator who represented the legislative district most impacted by the eruption of Mount St. Helens in 1980, I can fully understand and empathize with the people and public officials over the loss of life and property.

Many in the media are turning their eyes toward the federal government, rather than considering the culpability of city and state officials. I am fully aware of the challenges of mounting a quick and effective emergency response to a major disaster. And there is definitely a time for accountability; but what isn't fair is to dump on federal officials while avoiding those most responsible--the local and state officials who failed to do their job as first responders. The plain fact is that lives were needlessly lost in New Orleans due to the failure of Louisiana's governor, Kathleen Blanco, and the city's mayor, Ray Nagin.

The primary responsibility for dealing with emergencies does not belong to the federal government. It belongs to local and state officials who are charged by law with the management of the crucial first response to disasters. First response should be carried out by local and state emergency personnel under the supervision of the state governor and his emergency operations center.

The actions and inactions of Gov. Blanco and Mayor Nagin are a national disgrace due to their failure to implement the previously established evacuation plans of the state and city. Gov. Blanco and Mayor Nagin cannot claim that they were surprised by the extent of the damage and the need to evacuate so many people. Detailed written plans were already in place to evacuate more than a million people. The plans projected that 300,000 people would need transportation in the event of a hurricane like Katrina. If the plans had been implemented, thousands of lives would likely have been saved.

In addition to the plans, local, state and federal officials held a simulated hurricane drill 13 months ago, in which widespread flooding supposedly trapped 300,000 people inside New Orleans. The exercise simulated the evacuation of more than a million residents. The problems identified in the simulation apparently were not solved.

A year ago, as Hurricane Ivan approached, New Orleans ordered an evacuation but did not use city or school buses to help people leave. As a result, many of the poorest citizens were unable to evacuate. Fortunately, the hurricane changed course and did not hit New Orleans, but both Gov. Blanco and Mayor Nagin acknowledged the need for a better evacuation plan. Again, they did not take corrective action. In 1998, during the threat from Hurricane Georges, 14,000 people were sent to the Superdome, where theft and vandalism were rampant due to inadequate security. Again, these problems were not corrected.

The New Orleans contingency plan is still, as of this writing, on the city's Web site, and states: "The safe evacuation of threatened populations is one of the principle [sic] reasons for developing a Comprehensive Emergency Management Plan." But the plan was apparently ignored.

Mayor Nagin was responsible for giving the order for mandatory evacuation and supervising the actual evacuation: His Office of Emergency Preparedness (not the federal government) must coordinate with the state on elements of evacuation and assist in directing the transportation of evacuees to staging areas. Mayor Nagin had to be encouraged by the governor to contact the National Hurricane Center before he finally, belatedly, issued the order for mandatory evacuation. And sadly, it apparently took a personal call from the president to urge the governor to order the mandatory evacuation.

The city's evacuation plan states: "The city of New Orleans will utilize all available resources to quickly and safely evacuate threatened areas." But even though the city has enough school and transit buses to evacuate 12,000 citizens per fleet run, the mayor did not use them. To compound the problem, the buses were not moved to high ground and were flooded. The plan also states that "special arrangements will be made to evacuate persons unable to transport themselves or who require specific lifesaving assistance. Additional personnel will be recruited to assist in evacuation procedures as needed." This was not done.

The evacuation plan warned that "if an evacuation order is issued without the mechanisms needed to disseminate the information to the affected persons, then we face the possibility of having large numbers of people either stranded and left to the mercy of a storm, or left in an area impacted by toxic materials." That is precisely what happened because of the mayor's failure.

Instead of evacuating the people, the mayor sent the refugees to the Superdome and Convention Center without adequate security and without provisions for food, water, or sanitation. As a result people died, and rapes were even committed, in these facilities. Mayor Nagin failed in his responsibility to provide public safety and to manage the orderly evacuation of the citizens of New Orleans. Now he wants to blame Gov. Blanco and the Federal Emergency Management Agency. In an emergency, the first requirement is for the city's emergency center to be linked to the state emergency operations center. This was not done.

The federal government does not have the authority to intervene in a state emergency without the request of a governor. President Bush declared an emergency prior to Katrina hitting New Orleans, so the only action needed for federal assistance was for Gov. Blanco to request the specific type of assistance she needed. She failed to send a timely request for specific aid.

In addition, unlike the governors of New York, Oklahoma and California in past disasters, Gov. Blanco failed to take charge of the situation and ensure that the state emergency operations facility was in constant contact with Mayor Nagin and FEMA. It is likely that thousands of people died because of the failure of Gov. Blanco to implement the state plan, which mentions the possible need to evacuate up to one million people. The plan clearly gives the governor the authority for declaring an emergency, sending in state resources to the disaster area and requesting necessary federal assistance.

State legislators and governors nationwide need to update their contingency plans and the operation procedures for state emergency centers. Hurricane Katrina had been forecast for days, but that will not always be the case with a disaster (think of terrorist attacks). It must be made clear that the governor and locally elected officials are in charge of the "first response."

I am not attempting to excuse some of the delays in FEMA's response. Congress and the president need to take corrective action there, also. However, if citizens expect FEMA to be a first responder to terrorist attacks or other local emergencies (earthquakes, forest fires, volcanoes), they will be disappointed. The federal government's role is to offer aid upon request.

The Louisiana Legislature should conduct an immediate investigation into the failures of state and local officials to implement the written emergency plans. The tragedy is not over, and real leadership in state and local government is essential in the months to come. More importantly, the hurricane season is still upon us, and local and state officials must stay focused on the jobs for which they were elected--and not on the deadly game of passing the emergency buck.

Mr. Williams is president of the Evergreen Freedom Foundation, a free market public policy research organization in Olympia, Wash.

Wednesday, September 07, 2005

More good stuff keeps on comin'... I'm really looking forward to '06 and '08. :)
ENJOY
New Glory
By Jamie Glazov
FrontPageMagazine.com | September 7, 2005

Frontpage Interview’s guest today is Ralph Peters, a retired U.S. Army lieutenant colonel who served in infantry and intelligence units before becoming a Foreign Area Officer and a global strategic scout for the Pentagon. He has published three books on strategy and military affairs, as well as hundreds of columns for the New York Post, The Washington Post, The Wall Street Journal, Newsweek, and other publications. He is the author of the new book New Glory: Expanding America's Global Supremacy.

FP: Ralph Peters, welcome to Frontpage Interview.

Peters: I'm honored by the chance to reach your audience. Thanks.

FP: What inspired you to write New Glory?

Peters: New Glory is a book that literally took me a lifetime to write--in the sense that it contains decades of first-hand experience and observation in more than sixty countries. While I've written essays and columns over the years, I just sensed that the time was right to put it all together, to lay out as forthrightly and honestly as I could where I think the world is going--to offer a fresh vision of the world as it is and as it's going to be...no matter who might be offended by my views.

And, frankly, I was fed up with the countless "experts" all over the media who had never been anywhere or done anything, but who had an opinion on everything. You can't understand this complex world without going out to see it firsthand. The book's conclusions about where we've been and where we need to go strategically will surprise many readers, but they're based upon direct experience, not faculty-lounge chitchat. This book had been cooking inside me for a long time--and I'm glad I waited to write it. I needed all those years of getting dirty overseas to mature my thinking--and to escape Washington group-think.

FP: Tell us why the battle for Fallujah epitomized how we must fight -- and win -- the terror war.

Peters: Well, the First Battle of Fallujah, in the spring of 2004, was an example of how to get it as wrong as you possibly can. We bragged that we were going to "clean up Dodge." And the Marines went in, tough and capable as ever. Then, just when the Marines were on the cusp of victory, they were called off, thanks to a brilliant, insidious and unscrupulous disinformation campaign waged by al-Jazeera. I was in Iraq at the time, and the lies about American "atrocities" were stunning. But the lies worked and the Bush administration, to my shock and dismay, backed down.

Let's be honest: The terrorists won First Fallujah. And for six months thereafter Fallujah was the world capital of terror--a terrorist city-state. It was evident to all of us who had served that we'd have to go back into Fallujah, but the administration--which I support--made the further error of waiting until after the presidential election to avoid casualties or embarrassments during the campaign. Well, fortunately, in the Second Battle of Fallujah the Army and Marines realized they had to do it fast, before the media won again and the politicians caved in again. The military had been burned once and they were determined not to get burned again. And they did a stunning job--Second Fallujah was a model of how to take down a medium-size city. Great credit to the troops, mixed reviews for the politicos.

The bottom line is this: If you have to fight, fight to win, don't postpone what's necessary, and be prepared for the media's anti-American onslaught. Today, the media--with some noteworthy exceptions--are stooges of Islamist terrorists who, if they actually won, would butcher the journalists defending them.

We should never go to war lightly, but if we must fight, we have to give it everything we've got and damn the global criticism. There's a straightforward maxim that applies: In warfare, if you're unwilling to pay the butcher's bill up front, you will pay it with compound interest in the end.

FP: You note that terror of female sexuality underlies Islamic terror. You also make the point that a culture that hates and fears women is incompatible with modernity and democracy. Can you illuminate these phenomena for us please?

Peters: No brainer on this one. Any society that refuses to exploit the talents and potential contributions of half of its population can't remotely hope to compete with the USA or the West in general. Worse, the virtual enslavement of women is as much a symptom of other ailments as it is a problem in and of itself. Where women are tormented by bitter old men in religious robes, there's never a meritocracy for males, either. And such societies are consistently racially and religiously bigoted. Take Pakistan: While the USA is operating at a phenomenal level of human efficiency in the 21st century, say 85%, Pakistan would likely measure in at 12 to 15%. They just keep falling comparatively farther and farther behind, they hate it, and, of course, they blame us. We're dealing with the abject and utter failure of the entire civilization of Middle Eastern Islam--not competitive in a single sphere (not even terror, since these days we're terrorizing the terrorists). It's historically unprecedented--and unspeakably dangerous.

As far as the inhuman, inhumane--and stupid--treatment of women in the Middle East, yep, Islam is scared of the girls. I wish Freud were alive--he'd really get a look at a civilization's discontents. If you're not terrified of female sexuality, you don't lock women up, insist on covering them up from scalp to toenail and stone them to death for their "sins." Every single Muslim culture in the greater Middle East is sexually infantile--to use the Freudian term. For all their macho posturing, the men are terrified of their feared inadequacy. It's like one big junior high school dance, with the boys on one side of the gym and the girls on the other--except the boys have Kalashnikovs.

Now, I realize this isn't the sort of thing most people consider as a strategic factor, but I am thoroughly convinced that the one foolproof test for whether or not a society has any hope of making it in the 21st century is its treatment of women. Where women are partners, societies take off--as ours has done for this reason and others. Where women are property, there's simply no hope of a competitive performance.

In the collective culture of the Middle East, we're dealing with a deeply neurotic, if not outright psychotic civilization. I wish I could be more positive. But the average Middle Eastern male just has snakes in his head. And, by the way, the place isn't much fun, either. A mega-mall or two does not make a civilization.

FP: You make the observation that “Islam produced a strain of violent homoeroticism that reaches into al-Qaeda and beyond.” Please expand on this reality a bit for us.

Peters: Another issue "sober" Washington wouldn't consider as a strategic concern, but this ties in with the fear of and disdain for women. If you read the notes and papers they left behind, it's evident that the hijackers of 9/11 were a boy's club with strong homoerotic tendencies. Read Mohammed Atta's lunatic note describing how women must be kept away from his funeral to avoid polluting his grave. Does that sound like a guy with a happy dating history? Of course, sex between men and boys is a long tradition from North Africa through Afghanistan (fear of women always leads to an excessive fixation on female virginity--so she won't know her husband's inadequate--as well as homoerotic undercurrents).

They don't talk about it, of course--it's supposed to be anathema--but very few Middle Eastern mothers would trust their good-looking young sons around many adult males. This has deep roots, right back to the celebrations of the Emperor Babur's fixation on a pretty boy in the Baburnama. And the related dread of the female as literal femme fatale, as vixen, as betrayer, appears in much of the major literature--especially the "Thousand and One Arabian Nights," which, in its unabridged, unexpurgated version, is one long chronicle of supposed female wantonness and insatiability (the men are always innocent victims of Eve).

Pretty hard for the president to work this into a State of the Union message, but I'm convinced that sexual dysfunction is at the core of the Middle East's sickness--and it's certainly sick. Nothing about our civilization so threatens the males of the Middle East as the North American career woman making her own money and her own decisions. We don't think of it this way, but from one perspective the best symbols of the War on Terror would be the Islamic veil versus the two-piece woman's business suit.

There is no abyss more unbridgeable between our civilizations than that created by our respect for women and the Islamic disdain for the female. There are many aspects of our magnificent civilization that threaten traditional, backward societies, but nothing worries them so much as the independence of the Western woman--not that they approve of freedom of any kind.

FP: You write that the developments in Iran pose a great danger to the Islamists and great hope for the West. Tell us what the possibilities are. Perhaps a domino theory? (i.e., if the Iranians overthrow their religious despots, the rest of the Islamic world might do the same?)

Peters: No matter what the outcome in Iraq, the Middle East isn't going to change overnight. This is a very long process. But if you want an irrefutable indicator of how important Iraq's future is, just consider how many resources our enemies are willing to spend to stop the emergence of an even partially functional rule-of-law democracy in Iraq. The terrorists are throwing in everything they've got. Surely, that should tell us something.

Despite all the yelling and jumping up and down in the "Arab Street" (where someone needs to pick up the litter, by the way), the truth is that Arabs, especially, are afraid they can't do it, that they can't build a modern, let alone a postmodern, market democracy. The Arabs desperately need a win--they've been losing on every front for so long. If Iraq is even a deeply flawed success, it will be success enough to spark change across the region. But we must not expect overnight results. This is all very hard. We're not just trying to change a country--we're asking a civilization to change, to revive itself.

Iraq matters immensely. But no matter the outcome, it will be a long time before we see the rewards. It's an agonizingly slow process--which is tough for our society, which expects quick results.

And if Iraq should fail, despite our best efforts, it won't really be an American (or Anglo-American) failure. The consequences will be severe, but we'll work it off at the strategic gym. A failed Iraq will be another tragic Arab failure.

This is our best shot, but it's their last chance.

FP: You observe that Islamist terror sprouts from the failure of Arab and Islamic civilization, that they are humiliated, envious and seek to destroy the reminder of everything we have done right. Please illustrate this picture for us.

Peters: Back to our disdain for new strategic factors: Certainly economic statistics and demographics, hydrology and terms of trade all matter. But the number one deadly and galvanizing strategic impulse in the world today is jealousy. And it's jealousy of the West in general, but specifically of the United States. Jealousy is a natural, deep human emotion, which afflicts us all in our personal lives--to some degree. But when it afflicts an entire civilization, it's tragic. The failed civilization of the Middle East--where not one of the treasured local values is functional in the globalized world--is morbidly jealous of us. They've succumbed to a culture of--and addiction to--blame. Instead of facing up to the need to change and rolling up their sleeves, they want the world to conform to their terms. Ain't going to happen, Mustapha.

I've been out there. And while anti-Americanism is really much exaggerated, where it does exist among the terrorists and their supporters, jealousy is a prime motivating factor. You've heard it before, but it's all too true: They do hate us for our success.

The populations of the Middle East blew it. They've failed. Thirteen hundred years of effort came down to an entire civilization that can't design and build an automobile. And thanks to the wonders of the media age, it's daily rubbed in their faces how badly they've failed.

Oil wealth? A tragedy for the Arabs, since it gave the wealth to the most backward. The Middle East still does not have a single world-class university outside of Israel. Not one. The oil money has been thrown away--it's been a drug, not a tool.

The terrorists don't want progress. They want revenge. At the risk of punning on the title of the book, they don't want new glory--they want their old (largely imagined) glory back. They want to turn back the clock to an imagined world. The terrorists are the deadly siblings of Westerners who believe in Atlantis.

FP: It is clear you are not very fond of France and Germany. How come?

Peters: Actually, I love France and Germany. They're two of my favorite museums. And what's not to like about two grotesquely hypocritical societies who are, between them, responsible for the worst savagery in and beyond Europe over the past several centuries?

Anybody who really wants to see how I take "Old Europe" apart will just have to read the book. Too much to say to get it down here. But the next time the continent that perfected genocide and ethnic cleansing plays the moral superiority card, let's remind them that no German soldier ever liberated anybody--and the most notable achievement of the French military in the past century and a half has been the slaughter of unarmed black Africans.

And just watch their brutal treatment of their Islamic residents. Old Europe--France and Germany--is just the Middle East-lite.

FP: Explain why you believe there are great benefits to America reaching out to India.

Peters: Human capital. Trade. Healthy competition. Strategic position. Common interests. Brilliant, hard-working people. Great food. That enough?

FP: Are there grounds to have hope about Africa?

Peters: Yes. There are plentiful reasons to be hopeful about parts--parts--of Africa. But much of the continent is every bit as disastrous as the popular image has it. My complaint is that we treat that vast, various continent as one big, failed commune. Well, Congo or Sierra Leone certainly aren't inspiring...but in the course of several recent, lengthy trips to Africa, I was just astonished at the vigor, vision and strategic potential of South Africa. South Africa is well on the way to becoming the first true sub-Saharan great power--and it's another natural ally for us. Oh, the old revolutionary, slogan-spouting generation and their protégés have to die off--and they will. But, in the long term, I expect great things from South Africa, that they'll control (economically and culturally) southern Africa at least as far north as the Rovuma River. The one qualifier is this: Their next presidential election will be the turning point, either way. If they elect a demagogue, South Africa could still turn into another failing African state. But if they elect a technocrat, get out of the way, because the South Africans are coming.

I explain much of this far better in the book than I can here. Suffice to say that, for all the continent's horrid misery, there are islands of genuine hope. And, of course, there's plenty of wreckage...and AIDS, civil wars, corruption (the greatest bane of all for the developing world). I'm not a Pollyanna. But over the years I've gotten pretty good at spotting both potential crises and potential successes--and South Africa, for all its problems, is a land of stunning opportunities with neo-imperial potential.

FP: Overall, as a former military man, tell us what the United States has to stop doing, and has to start doing, to win this terror war.

Peters: Knock off the bluster and fight like we mean it. To a disheartening degree, the War on Terror has been a war of (ineptly chosen) words. Look, this is a death struggle, a strategic knife fight to the bone. I wish our civilian leaders would stop beating their chests and saying that we're going to get this terrorist or that one--because when we fail to make good on our promises, the terrorists win by default. More deeds, fewer words.

Above all, we need to think clearly, to cast off the last century's campus-born excuses for the Islamic world of the Middle East. We need to be honest about the threat, in all its dimensions. "Public diplomacy" isn't going to convert the terrorists who were recruited and developed while we looked away from the problem for thirty years. In the end, only deeds convince. And not just military deeds, of course, although those remain indispensable.

Most Americans still do not realize the intensity or the dimensions of the struggle with Islamist terror. Despite 9-11, they just don't have a sense that we're at war. And I'm afraid I have to fault the Bush administration on that count: Good Lord, we're at war with the most implacable enemies we've ever faced (men who regard death as a promotion), and what was our president's priority this year? The reform of Social Security. While I continue to support the administration's overall intent and efforts in Iraq and around the world, I believe the president has failed us badly by not driving home to the people that we're at war.

The Bush administration has done great and necessary things--but all too often they've done those things badly. And only the valor and blood of our troops has redeemed the situation, time after time, from Fallujah to the struggles of the future.

FP: Ralph Peters, thank you for joining us today.

Peters: My pleasure, and my thanks. And allow me to say a special thanks to all your readers in uniform, those troops defending the values of our civilization and human decency in distant, discouraging places. Freedom truly isn't free.
