Prabir Purkayastha

Prabir Purkayastha is the founding editor of Newsclick.in, a digital media platform. He is an activist for science and the free software movement.

Private Greed Prevails Over Humanity’s Survival


COP27 has begun in Sharm el-Sheikh. Although the Ukraine war and the U.S. midterm elections have shifted our immediate focus away from the battle against global warming, it remains a central concern of our epoch. Reports indicate that not only are we failing to meet our climate change goals, but we are falling short of the targets by a large margin. Worse, emissions of methane, a potent greenhouse gas, have grown far more rapidly and now pose as much of a climate change threat as carbon dioxide. Even though methane lasts for a shorter time in the atmosphere, viewed over a period of 100 years it is a more potent greenhouse gas than carbon dioxide.

The net result is that we are almost certain to fail in our target to limit global temperature rise to 1.5 degrees Celsius above preindustrial levels. And if we do not act soon, even a target of 2 degrees Celsius is hard to achieve. At this rate, we are looking at a temperature rise of 2.5-3 degrees Celsius and the devastation of our civilization. Worse, the impact will be much higher in the equatorial and tropical regions, where most of the world’s poor live.

In this column, I will address two issues. One is the shift from coal to natural gas as a transitional fuel, and the other is the challenge of storing electricity, without which we cannot shift successfully to renewable energy.

The advanced countries—the U.S. and members of the European Union—bet big on natural gas, which is primarily methane, as the transition fuel from coal. In Glasgow during COP26, the advanced countries even made coal the key issue, shifting the focus from their own greenhouse gas emissions to those of China and India as big coal users. The assumption in using natural gas as a transitional fuel is that its greenhouse impact is only half that of coal. Methane also lasts for a shorter time—about 12 years—in the atmosphere before converting to carbon dioxide and water. The flip side is that it is a far more potent greenhouse gas: its effect over a 100-year period is 30 times greater than that of an equivalent amount of carbon dioxide. So even a much smaller amount of methane has a much more significant global warming impact than carbon dioxide.

The bad news on the methane front is that methane leakage from the natural gas infrastructure is much higher, possibly as much as six times more—according to a March 2022 Stanford University study—than the advanced countries have been telling us. The high methane leakage from natural gas extraction not only cancels out any benefits of switching to natural gas as an intermediary fuel but even worsens global warming.

There are two sets of data on methane now available. One measures the actual leakage of methane from the natural gas infrastructure with satellites and planes using infrared cameras. The technology of measuring methane leaks from natural gas infrastructure is easy and cheap. After all, we are able to detect methane in exoplanets far away from the solar system. Surely, saving this planet from heat death is a much higher priority! The other data is the measurement of atmospheric methane conducted by the World Meteorological Organization (WMO).

The Environment Protection Agency (EPA) in the U.S. estimates that 1.4 percent of all natural gas produced in the U.S. leaks into the atmosphere. But the March 2022 Stanford University study using cameras and small planes that fly over natural gas infrastructure found that the figure is likely to be 9.4 percent—more than six times higher than the EPA’s estimate. Even if methane leaks are only 2.5 percent of natural gas production, they will offset all the benefits of switching from coal to natural gas. “Clean” natural gas may be three to four times worse than even dirty coal. At least in the hands of capital!
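The break-even arithmetic behind these figures can be sketched in a few lines. The emission factors below are standard textbook values, not numbers from the article (coal roughly 96 kg of CO2 per GJ, gas combustion roughly 56 kg per GJ, about 20 kg of methane burned per GJ), and the 100-year and 20-year warming potentials for methane (roughly 30 and 85) are the commonly cited ranges; treat the result as an order-of-magnitude estimate.

```python
# Back-of-envelope CO2-equivalent comparison of coal vs. natural gas
# once methane leakage is counted. All factors are illustrative
# assumptions, not figures from the article.

COAL_CO2_PER_GJ = 96.0   # kg CO2 per GJ of coal burned (assumed)
GAS_CO2_PER_GJ = 56.0    # kg CO2 per GJ of gas burned (assumed)
GAS_KG_PER_GJ = 20.0     # kg of methane burned to deliver 1 GJ (assumed)

def gas_co2e_per_gj(leak_fraction: float, gwp: float) -> float:
    """CO2-equivalent per GJ delivered, counting leaked methane.

    If a fraction f of produced gas leaks, delivering 1 GJ (burning
    20 kg) means f/(1-f) * 20 kg escaped to the atmosphere as methane.
    """
    leaked_kg = GAS_KG_PER_GJ * leak_fraction / (1.0 - leak_fraction)
    return GAS_CO2_PER_GJ + leaked_kg * gwp

def breakeven_leak_fraction(gwp: float) -> float:
    """Leak fraction at which gas matches coal per GJ delivered."""
    ratio = (COAL_CO2_PER_GJ - GAS_CO2_PER_GJ) / (GAS_KG_PER_GJ * gwp)
    return ratio / (1.0 + ratio)

# Over 100 years (GWP ~30) the break-even leak rate is ~6 percent;
# over 20 years (GWP ~85) it drops to roughly 2.3 percent.
print(f"break-even @ GWP100=30: {breakeven_leak_fraction(30):.1%}")
print(f"break-even @ GWP20=85:  {breakeven_leak_fraction(85):.1%}")
# At the Stanford study's 9.4% leakage, gas is worse than coal even
# on the 100-year horizon:
print(f"gas CO2e @ 9.4% leak: {gas_co2e_per_gj(0.094, 30):.0f} kg/GJ")
```

With the shorter 20-year warming potential, the break-even leak rate lands near the 2.5 percent cited above; at the Stanford study's 9.4 percent leakage, switching to gas is worse than staying on coal on either time horizon.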

The EPA does not conduct any physical measurements. All it uses to estimate methane emissions is a formula that involves a number of subjective factors, along with the number of wells, length of pipelines, etc. Let us not forget that there are many people in the U.S. who either do not believe in or choose to ignore the fact of global warming. They would like to take a crowbar to even a weakened EPA, dismantling all measures to reduce global warming.

The impact of methane leaks can be seen in another set of figures. The World Meteorological Organization reported the biggest jump in “methane concentrations in 2021 since systematic measurements began nearly 40 years ago.” While WMO remains discreetly silent on why this jump has occurred, the relation between switching to natural gas and the consequent rise of methane emissions is hard to miss.

The tragedy of methane leaks is that they are easy to spot with today’s technology and not very expensive to fix. But companies have no incentive to take even these baby steps as it impacts their current bottom line. The larger good—even bigger profits, but over a longer time frame—does not interest them. They aren’t likely to change unless they are forced to by regulatory or direct state action.

The cynicism of the rich countries—the U.S. and members of the EU—on global warming can be seen in their conduct during the Ukraine war. The European Union has restarted some of its coal plants, increasing coal’s share in the energy mix. Further, the EU has cynically argued that developing oil and gas infrastructure in Africa is all right as long as it is solely for supply to Europe, not for use in Africa. African nations, according to the EU, must instead use only clean, renewable energy! And, of course, such energy infrastructure must be in the hands of European companies!

The key to a transition to renewable energy—the only long-term solution to global warming—is to find a way of storing energy. Renewables, unlike fossil fuels, cannot be dispatched at will: the wind, sun, and even water provide a continuous flow of energy, not a stockpile to be drawn down on demand. While water can be stored in large reservoirs, wind and sun cannot be, unless their energy is converted to chemical energy in batteries, or converted to hydrogen and stored either in tanks or in natural geological formations such as underground reservoirs and salt caverns.

There has been a lot of hype about batteries and electric cars. What this hype misses is that batteries with current technology have a much lower energy density than oil or coal: a kilogram of oil or natural gas holds 20-40 times the energy of the most efficient battery today. For an electric vehicle, that is not such a major issue. It simply determines how often the vehicle's batteries need to be charged and how long charging will take, which means developing a charging infrastructure with a quick turnaround time. The much bigger problem is how to store energy at the grid level.

Grid-level storage means supplying the grid with electricity from stored energy. Grid-level batteries are being suggested to meet this task. What the proponents of grid-level batteries neglect to inform us is that they may supply power for short-term fluctuations—night and day, windy and non-windy days—but they cannot meet the demand from long-term or seasonal fluctuations. This brings us to the question of the energy density of storage: How much energy does a kilogram of lithium battery hold as compared to a kilogram of oil, natural gas, or coal? The answer with current technology is 20-40 times less. The cost of building such mammoth storage to meet seasonal fluctuations will simply exhaust all our lithium (or any other battery material) supplies.
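The scale problem becomes concrete with a rough calculation. The numbers below are illustrative assumptions, not figures from the article: a lithium-ion pack storing about 0.2 kWh per kilogram (which, against oil's roughly 12 kWh/kg of thermal energy, gives a ratio in the 20-40x range once conversion efficiency is considered) and a mid-sized industrial country drawing about 60 GW on average.

```python
# Rough scale of grid-level battery storage for seasonal fluctuations.
# Both constants are illustrative assumptions, not data from the text.

BATTERY_KWH_PER_KG = 0.2   # assumed pack-level energy density
AVG_DEMAND_GW = 60.0       # assumed average national electricity load
HOURS_PER_WEEK = 7 * 24

def battery_mass_tonnes(hours: float) -> float:
    """Battery mass (tonnes) needed to store `hours` of average demand."""
    energy_kwh = AVG_DEMAND_GW * hours * 1e6   # GW x h -> GWh -> kWh
    return energy_kwh / BATTERY_KWH_PER_KG / 1000.0  # kg -> tonnes

# Covering a single calm, overcast week would need on the order of
# 50 million tonnes of battery packs for one country alone:
print(f"{battery_mass_tonnes(HOURS_PER_WEEK) / 1e6:.0f} million tonnes")
```

Day-night smoothing needs only a few hours of storage and is within reach; a week, let alone a season, multiplies the required mass by orders of magnitude, which is the sense in which seasonal battery storage would exhaust lithium supplies.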

I will not address the prohibitive energy cost—electric or fossil fuel—of private versus public or mass transportation, and why we should switch to the latter. I will instead focus on addressing the larger question of how to store renewable energy so that we can run our electricity infrastructure when wind or sun is not there.

Is it possible that a new technology will solve this problem? (Remember the dream of nuclear energy that would be not only clean but also too cheap to meter?) But do we bet our civilization's future on such a possibility?

If not, we have to look at existing solutions. They exist, but using them means seeking alternatives to batteries for addressing the grid-level problem of intermittent renewable energy. It means repurposing our existing hydro projects to work as grid-level storage and developing hydrogen storage for use in fuel cells. This requires no extra dams or reservoirs, contrary to what the opponents of hydroelectricity projects fear. And of course, it means more public transportation instead of private transportation.

All of these existing solutions mean making changes on a societal level that corporate interests oppose—after all, doing so would require public investment for social benefits and not for private profits. Capital privileges short-term private profits over long-term social benefits. Remember that oil companies conducted some of the earliest research showing the impact of global warming from carbon dioxide emissions. They not only hid these results for decades but also launched a campaign denying that global warming is linked to greenhouse gases. And they funded climate change deniers.

The contradiction at the heart of global warming is private greed versus social needs. And who funds the transition, the poor or the rich? This, and not simply how to stop global warming, is what COP27 is really about.

Does the U.S. Chip Ban on China Amount to a Declaration of War in the Computer Age?


The United States has gambled big in its latest across-the-board sanctions on Chinese companies in the semiconductor industry, believing it can kneecap China and retain its global dominance. From the slogans of globalization and “free trade” of the neoliberal 1990s, Washington has reverted to good old technology denial regimes that the U.S. and its allies followed during the Cold War. While it might work in the short run in slowing down the Chinese advances, the cost to the U.S. semiconductor industry of losing China—its biggest market—will have significant consequences in the long run. In the process, the semiconductor industries of Taiwan and South Korea and equipment manufacturers in Japan and the European Union are likely to become collateral damage. It reminds us again of what former U.S. Secretary of State Henry Kissinger once said: “It may be dangerous to be America’s enemy, but to be America’s friend is fatal.”

The purpose of the U.S. sanctions, the second generation after the earlier round in August 2021, is to restrict China's ability to import advanced computing chips, develop and maintain supercomputers, and manufacture advanced semiconductors. Though the U.S. sanctions are cloaked in military terms—denying China access to technology and products that can help its military—in reality, these sanctions target almost all leading semiconductor players in China and, therefore, its civilian sector as well. The fiction of "barring military use" provides only a fig leaf of cover under the World Trade Organization (WTO) exceptions to the obligation of providing market access to all WTO members. Most military applications use older-generation chips, not the latest versions.

The specific sanctions imposed by the United States include:

  • Advanced logic chips required for artificial intelligence and high-performance computing
  • Equipment for manufacturing logic chips at 16nm/14nm or below, including advanced transistor architectures such as FinFET and Gate-All-Around
  • The latest generations of memory chips: NAND with 128 layers or more and DRAM with 18nm half-pitch

Specific equipment bans in the rules go even further, including many older technologies as well. For example, one commentator pointed out that the prohibition of tools is so broad that it includes technologies used by IBM in the late 1990s.

The sanctions also encompass any company that uses U.S. technology or products in its supply chain. This is a provision in the U.S. laws: any company that ‘touches’ the United States while manufacturing its products is automatically brought under the U.S. sanctions regime. It is a unilateral extension of the United States’ national legal jurisdiction and can be used to punish and crush any entity—a company or any other institution—that is directly or indirectly linked to the United States. These sanctions are designed to completely decouple the supply chain of the United States and its allies—the European Union and East Asian countries—from China.

In addition to the latest U.S. sanctions against companies that are already on the list of sanctioned Chinese companies, a further 31 new companies have been added to an “unverified list.” These companies must provide complete information to the U.S. authorities within two months, or else they will be barred as well. Furthermore, no U.S. citizen or anyone domiciled in the United States can work for companies on the sanctioned or unverified lists, not even to maintain or repair equipment supplied earlier.

The global semiconductor industry is currently worth more than $500 billion and is likely to double to $1 trillion by 2030. According to a 2020 report by the Semiconductor Industry Association and Boston Consulting Group—"Turning the Tide for Semiconductor Manufacturing in the U.S."—China is expected to account for approximately 40 percent of the semiconductor industry's growth by 2030, displacing the United States as the global leader. This is the immediate trigger for the U.S. sanctions and their attempt to stop China's industry from taking the lead from the United States and its allies.

While the above measures are intended to isolate China and limit its growth, there is a downside for the United States and its allies in sanctioning China.

The problem for the United States—and more so for Taiwan and South Korea—is that China is their biggest trading partner. Imposing such sanctions on equipment and chips also means destroying a good part of their market with no prospect of an immediate replacement. This is true not only for China's East Asian neighbors but also for equipment manufacturers like the Dutch company ASML, the world's only supplier of the extreme ultraviolet (EUV) lithography machines used to produce the latest chips. For Taiwan and South Korea, China is not only the biggest export destination for their semiconductor and other industries but also one of their biggest suppliers for a range of products. The forcible separation of China's supply chain in the semiconductor industry is likely to be accompanied by separation in other sectors as well.

The U.S. companies are also likely to see a big hit to their bottom line—including equipment manufacturers such as Lam Research Corporation, Applied Materials, and KLA Corporation; electronic design automation (EDA) tool vendors such as Synopsys and Cadence; and advanced chip suppliers like Qualcomm, Nvidia, and AMD. China is the largest market for all these companies. The problem for the United States is that China is not only the fastest-growing part of the world's semiconductor industry but also the industry's biggest market. So the latest sanctions will cripple not only the Chinese companies on the list but also the U.S. semiconductor firms, drying up a significant part of their profits and, therefore, their future research and development (R&D) investments in technology. While some of the resources for investment will come from the U.S. government—for example, the $52.7 billion chip manufacturing subsidy—they do not compare to the losses the U.S. semiconductor industry will suffer as a result of the China sanctions. This is why the semiconductor industry had suggested narrowly targeted sanctions on China's defense and security industry, not the sweeping sanctions that the United States has now introduced: the scalpel, not the hammer.

Using a sanctions regime to split the global supply chain is not a new concept. The United States and its allies had a similar policy during and after the Cold War with the Soviet Union via the Coordinating Committee for Multilateral Export Controls (COCOM), replaced in 1996 by the Wassenaar Arrangement, along with the Nuclear Suppliers Group, the Missile Technology Control Regime, and other such groups. Their purpose was very similar to what the United States has now introduced for the semiconductor industry. In essence, they were technology denial regimes applied to any country that the United States considered an "enemy," with its allies following—then as now—what the United States dictated. The export ban lists targeted not only specific products but also the tools that could be used to manufacture them. Not only the socialist bloc countries but also countries such as India were barred from accessing advanced technology, including supercomputers, advanced materials, and precision machine tools. Under this policy, critical equipment required for India's nuclear and space industries was placed under a complete ban. Though the Wassenaar Arrangement still exists—with even Russia and India now within its ambit—it has no real teeth. The real threat comes from falling afoul of the U.S. sanctions regime and the U.S. interpretation of its laws superseding international law, including the WTO rules.

The advantage the United States and its military allies—in the North Atlantic Treaty Organization, the Southeast Asia Treaty Organization, and the Central Treaty Organization—had before was that the United States and its European allies were the biggest manufacturers in the world. The United States also controlled West Asia's hydrocarbons—oil and gas—a vital resource for all economic activity. The current chip war against China is being waged at a time when China has become the biggest manufacturing hub of the world and the largest trade partner of 70 percent of the world's countries. With the Organization of the Petroleum Exporting Countries no longer obeying U.S. diktats, Washington has also lost control of the global energy market.

So why has the United States started a chip war against China at a time when its ability to win such a war is limited? At best, it can postpone China's rise as a global peer military power and the world's biggest economy. An explanation lies in what some military historians call the "Thucydides trap": when a rising power rivals a dominant military power, most such cases lead to war. According to the Athenian historian Thucydides, Athens' rise led Sparta, the then-dominant military power, to go to war against it, in the process destroying both city-states; hence the trap. While such claims have been disputed by other historians, when a dominant military power confronts a rising one, the chance of either a physical or an economic war does increase. If the Thucydides trap between China and the United States restricts itself to an economic war—the chip war—we should consider ourselves lucky!

With the new series of sanctions by the United States, one issue has been settled: the neoliberal world of free trade is officially over. The sooner other countries understand it, the better it will be for their people. And self-reliance means not simply the fake self-reliance of supporting local manufacturing, but instead means developing the technology and knowledge to sustain and grow it.

This article was produced in partnership by Newsclick and Globetrotter.

The Apology That Never Came


How should we remember Queen Elizabeth II and her 70 years on the British throne? Perhaps it is better to consider this question after the media parade around her funeral is in the rearview mirror.

A number of people have reacted to the glorification of her rule by pointing out the British Royals' direct connection to the slave trade, Britain's colonial massacres, mass famines, and its loot from the colonies. Britain's wealth—$45 trillion at current prices from India alone—was built on the blood and sweat of people who lost their land and homes and whose countries are today poor. Lest we forget, the slave trade began as a monopoly of the British throne: first as the Company of Royal Adventurers Trading into Africa in 1660, later converted into the Royal African Company of England. The battle over "free trade" fought by British merchant capital was against this highly lucrative Royal monopoly, so that they too could participate in enslaving people in Africa and selling them to plantations in the Americas and the Caribbean.

According to Western legend, the European Age of Discovery, coterminous with the Enlightenment, is what started it all in the 16th century. Explorers such as Vasco da Gama, Columbus, and Magellan went across the world, discovering new lands. The Enlightenment led to the development of reason and science, the basis of the Industrial Revolution in England. The Industrial Revolution then reached Europe and the United States, creating the difference between the wealthy West and the poverty-stricken rest. Slavery, genocide, expropriation of land from "natives," and colonial loot do not enter this sanitized picture of the development of capitalism. Or, if mentioned, they appear only as marginal to the larger story of the rise of the West.

Actual history is quite different. Chronologically, the Industrial Revolution took place in the second half of the 18th century. The 16th and 17th centuries were when a small handful of Western countries reached the Americas, followed by the genocide of the indigenous population and the enslaving of the rest. The 16th and 17th centuries also saw the rise of the slave trade from Africa to the Caribbean and the Americas. It destroyed African society and its economy—what Walter Rodney describes in How Europe Underdeveloped Africa. The plantation economy—based on slavery in the Caribbean and continental America—created large-scale commodity production and global markets.

While sugar, the product of the plantations, was the first global commodity, it was followed by tobacco, coffee, and cocoa, and later cotton. And while the plantation economy provided commodities for the world market, let us not forget that slaves were still the most important "commodity." The slave trade was the major source of European—British, French, Dutch, Spanish, and Portuguese—capital. Gerald Horne writes, "The enslaved, a peculiar form of capital encased in labor, represented simultaneously the barbarism of the emerging capitalism, along with its productive force" (The Apocalypse of Settler Colonialism, Monthly Review, April 1, 2018).

Marx characterized this as so-called primitive accumulation, and as "expropriation," not accumulation. Capital from the beginning was based on expropriation—robbery, plunder, and the enslavement of people by force; there was no accumulation in this process. As Marx writes, capital was born "dripping from head to foot, from every pore, with blood and dirt."

The British Royals played a key role in this history of slavery and so-called primitive accumulation. Britain was a second-class power at the beginning of the 17th century. Its transformation was initially based on the slave trade and, later, the sugar plantations in the Caribbean. Its ships and traders emerged as the major power in the slave trade and, by the 1680s, held three-fourths of this "market" in human beings. Of this, the Royal African Company, owned by the British Crown, held a 90% share: the British Royals led the charge in Britain's domination of the slave trade.

Interestingly, the slogan of "free trade"—under which the World Trade Organization (WTO) was created centuries later—originated with British merchant capital demanding the abolition of the Royal monopoly over the slave trade. It was, in other words, the freedom of capital to enslave human beings and trade in them, free of the royal monopoly. It is this capital, created out of the slave trade and outright piracy and loot, that funded the Industrial Revolution.

When slavery was finally abolished in Britain, it was not the slaves but the slave owners who were paid compensation for losing their "property." The amount paid in 1833 was 40% of the national budget, and since it was financed by borrowing, UK citizens paid off this "loan" only in 2015. For the people of India, there is another part to the story. As the ex-slaves refused to work on the plantations where they had been enslaved, they were replaced by indentured labor from India.

To return to the British Royalty: the Crown's property and portfolio investments are currently worth 28 billion pounds, making King Charles III one of the richest persons in the UK. Charles III's personal property alone is worth more than a billion pounds. Even by today's standards of obscene personal wealth, these are big numbers, particularly as this income is virtually tax-free. The royals are also exempt from death duties.

Over the three hundred years of British colonialism, brutal wars, genocide, slavery, and expropriation were carried out in the Crown's name and under its leadership. After the Industrial Revolution, Britain wanted only raw materials from its colonies and no industrial products: the slogan was "not even a nail from the colonies." All trade from the colonies to other countries had to pass through Britain and pay taxes there before being re-exported. The complement of the Industrial Revolution in Britain was the de-industrialization of its colonies, confining them to producing raw materials and agricultural products.

Why are we talking about Britain's colonial past on the occasion of the death of Queen Elizabeth II? After all, she saw only the last 70 years, when the British colonial empire was being liquidated. This is not simply about the past, but about the fact that neither the British Crown nor its rulers have ever expressed any guilt over the brutality of the empire and its foundation of slavery and genocide. There has been no apology for the empire's gory history: not even for the massacres and mass incarcerations that took place. At Jallianwala Bagh, which Elizabeth II visited in 1997, she called the massacre a "distressing episode" and a "difficult episode"; not even a simple "We are sorry." Prince Philip even questioned the number of martyrs.

How do we reconcile the anger of people who suffered under Britain's colonial empire with their leaders making a beeline to pay homage to the Queen? Does it not shame the memory of those who laid down their lives in the freedom struggle against the British Crown that India lowered the national flag to half-mast to honor the Queen?

One can argue that all this happened long before Elizabeth II took over the Crown, and that we cannot hold her personally responsible for Britain's colonial history. We should: as Queen, she represented the British state. It is not Elizabeth the person from whom people want an apology, but the titular head of the British state. That is why Mukoma Wa Ngugi, the son of Kenya's world-renowned writer Ngugi wa Thiong'o, wrote: "If the queen had apologized for slavery, colonialism and neocolonialism and urged the crown to offer reparations for the millions of lives taken in her/their names, then perhaps I would do the human thing and feel bad. As a Kenyan, I feel nothing. This theater is absurd."

Mukoma Ngugi was referring to the Mau Mau revolt for land and freedom in which thousands of Kenyans were massacred, and 1.5 million were held in brutal concentration camps.

This was in 1952-1960; Queen Elizabeth II came to the throne in 1952. It happened very much in her lifetime, and during her reign!

This article was produced in partnership by Newsclick and Globetrotter.

Inhuman Electric Shock


The price of electricity has risen astronomically in Europe: four times over the previous year and 10 times over the last two years. The European Union (EU) has claimed that this rise is due to the increase in the price of gas in the international market and to Russia not supplying enough gas. This raises a critical question: Why should, for example, German electricity prices rise four times when natural gas contributes only around one-seventh of Germany's electricity production? Why does the UK, which generates 40 percent of its electricity from renewables and nuclear plants, and produces half the natural gas it consumes, also see a steep rise in the price of electricity? All this talk of blaming Russia for the recent increase in gas prices masks the reality that the electricity generators are making astronomical windfall profits. The poorer consumers, already pushed to the wall by the pandemic, face a cruel dilemma: with electricity bills likely to make up 20-30 percent of their household budgets during winter, should they buy food or keep their houses warm?

This steep rise in electricity prices is the other side of the story of the so-called market reforms in the electricity sector over the last 30 years. The price of electricity is pegged to the costliest supply to the grid in the daily and hourly auctions. Currently, this is natural gas, which is why electricity prices are rising sharply even though gas is not the main source of electricity supplied to the grid. This is market fundamentalism, or what the neoclassical economists call marginal utility theory. It was part of the electricity sector reforms that Augusto Pinochet introduced in Chile during his military dictatorship there from 1973 to 1990. The guru of these Pinochet reforms was Milton Friedman, assisted by his Chicago Boys. The principle that the electricity price should be based on its "marginal price" even became a part of Pinochet's 1980 constitution in Chile. The Chilean reforms led to the privatization of the country's electricity sector, which was the objective of initiating them in the first place.
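The auction mechanism described above, often called merit-order or "pay-as-clear" pricing, can be sketched in a few lines: plants bid, are dispatched cheapest first, and every dispatched plant is paid the price of the costliest plant needed to meet demand. The bids and capacities below are invented for illustration.

```python
# Minimal sketch of merit-order ("pay-as-clear") electricity pricing.
# All numbers are hypothetical, for illustration only.

def clearing_price(bids, demand_mw):
    """bids: list of (source, capacity_MW, cost_per_MWh), any order.
    Dispatch cheapest first; return the marginal cost of the last
    plant needed, which every dispatched generator is paid."""
    served = 0.0
    for source, capacity, cost in sorted(bids, key=lambda b: b[2]):
        served += capacity
        if served >= demand_mw:
            return cost
    raise ValueError("demand exceeds total capacity")

bids = [
    ("wind",    3000, 5),    # near-zero marginal cost
    ("nuclear", 2000, 30),
    ("coal",    2000, 60),
    ("gas",     3000, 200),  # expensive imported LNG
]

# Even though gas supplies only a sliver of the 7,500 MW demand, it
# sets the price that every generator receives:
print(clearing_price(bids, 7500))  # -> 200
```

Wind and nuclear, with costs of 5-30 per MWh, are paid 200: this gap is the windfall profit for non-gas generators that the column describes.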

It was the Chilean model that Margaret Thatcher copied in the UK, and that the EU then copied in turn. The UK dismantled its Central Electricity Generating Board (CEGB), which had run its entire electricity infrastructure: generation, transmission, and bulk distribution. This move also helped the UK shift away from domestic coal for its thermal power plants, breaking the powerful coal miners' union. These were also the Enron market 'reforms' in California, which led to the summer meltdown of its grid in 2000-2001.

The European Union banked heavily on natural gas as its preferred fuel to bring down its greenhouse gas emissions; it also ramped up renewables—solar and wind power—and phased out lignite and coal. The EU has imposed a series of sanctions on Russia, made public its plans to impose further sanctions on the country, and seized about 300 billion euros of Russia’s reserves lying in EU banks. The EU has also said that it will cut down on Russia’s oil and gas supplies. Not surprisingly, Russia has sharply reduced its gas supplies to the EU. If the West thought it could weaponize its financial power, why did it think that Russia would not retaliate by doing the same with its gas supplies to the EU?

With Russian supplies of natural gas to Western Europe falling, the price of LNG has risen sharply in the international market. Worse, there is simply not enough LNG available in the market to replace the gas that Russia supplied to the EU through its pipelines.

With the price of gas rising by four to six times in the last few months, the price of electricity has also risen sharply. But as only a fraction of electricity is generated from gas, all other sources of energy—wind, solar, nuclear, hydro, and even dirty coal-fired plants—are making a killing. It is only now that the EU and the UK are discussing how to address the burden of high electricity prices on consumers and the windfall profits made by the electricity generators during this period.

It is not only the EU and UK consumers who are badly hit. It is also the European and British industries. Stainless steel, fertilizer, glass making, aluminum, cement, and engineering industries are all sensitive to input costs. As a result, all these industries are at risk of shutting down in the EU and the UK.

Former Greek Finance Minister Yanis Varoufakis in his article “Time to Blow Up Electricity Markets” writes, “The European Union’s power sector is a good example of what market fundamentalism has done to electricity networks the world over… It’s time to wind down the simulated electricity markets.” The rest of the world would do well not to follow the EU’s example.

Why then is Indian Prime Minister Narendra Modi’s central government rushing into this abyss? Did it not learn from last year’s experience when, after a coal shortage, the prices in the electricity spot market went up to Rs 20 ($0.25) per unit before public outcry saw it capped at Rs 12 ($0.15)? So why push again for these bankrupt policies of market fundamentalism in the guise of electricity reforms? Who will benefit from these market reforms? Certainly, neither the consumers nor the Indian state governments, which bear the major burden of subsidizing electricity prices for their consumers.

This article was produced in partnership by Newsclick and Globetrotter.

Mendel’s Genetic Revolution and the Legacy of Scientific Racism


In July, the world celebrated 200 years since the birth of Gregor Mendel, who is widely accepted as the “father of modern genetics” for his discovery of the laws of inheritance. His experiments with peas, published in 1866 under the title “Experiments in Plant Hybridization,” identified dominant and recessive traits, showing how recessive traits would reappear in future generations and in what proportion. His work remained largely unacknowledged until three other biologists independently replicated his results in 1900.
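Mendel’s proportions can be reproduced with a minimal sketch of his monohybrid cross: two heterozygous parents (Aa x Aa) are crossed, and the offspring phenotypes are counted. The allele names and trait labels below are illustrative stand-ins (e.g. “A” for the dominant round-pea allele, “a” for the recessive wrinkled one):

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate all equally likely offspring genotypes (a Punnett square)."""
    return ["".join(sorted(a1 + a2)) for a1, a2 in product(parent1, parent2)]

f2 = cross("Aa", "Aa")  # ['AA', 'Aa', 'Aa', 'aa']
phenotypes = Counter("dominant" if "A" in g else "recessive" for g in f2)
print(phenotypes)  # dominant:recessive = 3:1
```

The recessive trait vanishes in the first hybrid generation but reappears in one quarter of the second, exactly the 3:1 ratio Mendel counted in his pea plants.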

While Mendel’s work is central to modern genetics, and his use of experimental methods and observation is a model for science, it also gave rise to the dark side with which genetics has been inextricably linked: eugenics and racism. But eugenics was much more than race “science.” It was also used to argue for the superiority of elites and dominant races, and in countries like India, it served as a “scientific” justification for the caste system as well.

People who believe that eugenics was a temporary aberration in science and that it died with Nazi Germany would be shocked to find that even the major institutions and journals that carried the word eugenics in their names simply continued to operate under new titles. The Annals of Eugenics became the Annals of Human Genetics; the Eugenics Review changed its name to the Journal of Biosocial Science; Eugenics Quarterly changed to Biodemography and Social Biology; and the Eugenics Society was renamed the Galton Institute. Several university departments that were earlier called departments of eugenics became departments of human genetics or of social biology.

All of them have apparently shed their eugenics past, but the recurrence of the race and IQ debate, sociobiology, the white replacement theory and the rise of white nationalism are all markers that theories of eugenics are very much alive. In India, the race theory takes the form of the belief that Aryans are “superior” and fair skin is seen as a marker of Aryan ancestry.

While Adolf Hitler’s gas chambers and Nazi Germany’s genocide of Jews and Roma communities have made it difficult to talk about the racial superiority of certain races, scientific racism persists within science. It is a part of the justification that the elite seek, justifying their superior position based on their genes, and not on the fact that they inherited or stole this wealth. It is a way to airbrush the history of the loot, slavery and genocide that accompanied the colonization of the world by a handful of countries in Western Europe.

Why is it that when we talk about genetics and history, the only story that is repeated is that of biologist Trofim Lysenko and how the Soviet Communist Party placed ideology above science? Why is it that eugenics is mentioned in popular literature only with respect to Nazi Germany, and not how Germany’s eugenic laws were inspired directly by the U.S.? Or how eugenics in Germany and the U.S. were deeply intertwined? Or how Mendel’s legacy of genetics became a tool in the hands of racist states, which included the U.S. and Great Britain? Why is it that genetics is used repeatedly to support theories of the superiority of the white race?

Mendel showed that certain traits were inherited in predictable patterns, implying discrete carriers of inheritance with markers that could be observed, such as the color of a flower or the height of a plant. Biology at the time had no idea how many genes we had, which traits could be inherited, or how genetically mixed the human population is. Mendel himself had no concept of genes as the carriers of inheritance; that knowledge came much later.

Applying these principles from genetics to society was a huge leap unsupported by any empirical scientific evidence. All attempts to show the superiority of certain races started by assuming a priori that certain races were superior and then selecting whatever evidence would support this thesis. Much of the IQ debate and sociobiology came from this approach to science. In his review of The Bell Curve, Bob Herbert wrote that the authors, Charles Murray and Richard Herrnstein, had written a piece of “racial pornography,” “…to drape the cloak of respectability over the obscene and long-discredited views of the world’s most rabid racists.”

A little bit of the history of science is important here. Eugenics was very much mainstream in the early 20th century and had the support of major parties and political figures in the UK and the U.S. Not surprisingly, former British Prime Minister Winston Churchill was a noted supporter of race science, although eugenics had some supporters among progressives as well.

The founder of eugenics in Great Britain was Francis Galton, who was a cousin of Charles Darwin. Galton pioneered statistical methods like regression and normal distribution, as did his close collaborators and successors in the Eugenics Society, Karl Pearson and R.A. Fisher. On the connection of race and science, Aubrey Clayton, in an essay in Nautilus, writes, “What we now understand as statistics comes largely from the work of Galton, Pearson, and Fisher, whose names appear in bread-and-butter terms like ‘Pearson correlation coefficient’ and ‘Fisher information.’ In particular, the beleaguered concept of ‘statistical significance,’ for decades the measure of whether empirical research is publication-worthy, can be traced directly to the trio.”

It was Galton who, based supposedly on scientific evidence, argued for the superiority of the British over Africans and other natives, and that superior races should replace inferior races by way of selective breeding. Pearson gave his justification for genocide: “History shows me one way, and one way only, in which a high state of civilization has been produced, namely the struggle of race with race, and the survival of the physically and mentally fitter race.”

The eugenics program had two sides: one was that the state should encourage selective breeding to improve the stock of the population. The other was that the state should take active steps to “weed out” undesirable populations. The sterilization of “undesirables” was as much a part of the eugenics societies as encouraging people toward selective breeding.

In the U.S., eugenics was centered on Cold Spring Harbor’s Eugenics Record Office. While Cold Spring Harbor Laboratory and its research publications still hold an important place in contemporary life sciences, its original significance came from the Eugenics Record Office, which operated as the intellectual center of eugenics and race science. It was supported by philanthropic money from the Rockefeller family, the Carnegie Institution and many others. Charles Davenport, a Harvard biologist, and his associate Harry Laughlin became the key figures in passing a set of state laws in the U.S. that led to forced sterilization of the “unfit” population. They also actively contributed to the Immigration Act of 1924, which set quotas for races. The Nordic races had priority, while East Europeans (Slavic races), East Asians, Arabs, Africans and Jews were virtually barred from entering the country.

Sterilization laws in the U.S. at the time were controlled by the states. U.S. Supreme Court Justice Oliver Wendell Holmes, the doyen of liberal jurisprudence in the U.S., gave his infamous judgment upholding Virginia’s compulsory sterilization law in Buck v. Bell: “Three generations of imbeciles are enough.” Carrie Buck and her daughter were not imbeciles; they paid for their “sins” of being poor and perceived as threats to society (a society that failed them in turn). Again, the Eugenics Record Office and Laughlin played an important role in providing “scientific evidence” for the sterilization of the “unfit.”

While Nazi Germany’s race laws are widely condemned as being the basis for Hitler’s gas chambers, Hitler himself stated that his inspiration for Germany’s race laws was the U.S. laws on sterilization and immigration. The close links between the U.S. eugenicists and Nazi Germany are widely known and recorded. Edwin Black’s book War Against the Weak: Eugenics and America’s Campaign to Create a Master Race described how “Adolf Hitler’s race hatred was underpinned by the work of American eugenicists,” according to an article in the Guardian in 2004. The University of Heidelberg, meanwhile, gave Laughlin an honorary degree for his work in the “science of racial cleansing.”

With the fall of Nazi Germany, eugenics became discredited. Institutions, departments and journals with eugenics in their names were renamed, but they continued to do the same work. Human genetics and social biology became the new names for eugenics. The Bell Curve, published in the 1990s, justified racism, and a recent bestseller by Nicholas Wade, a former science correspondent of the New York Times, also trots out theories that have long been scientifically discarded. Fifty years ago, Richard Lewontin showed that only about 6 to 7 percent of human genetic variation exists between so-called racial groups. At that time, genetics was still at a nascent stage. Since then, the data have only strengthened Lewontin’s findings.

Why is it that while criticizing the Soviet Union’s scientific research and the sins of Lysenko 80 years ago, we forget about race science and its use of genetics?

The answer is simple: Attacking the scientific principles and theories developed in the Soviet Union as an example of ideology trumping science is easy. It makes Lysenko stand in for the whole of Soviet science, as if ideology there always trumped pure science. But why is eugenics, with its destructive past and its continuing presence in Europe and the U.S., not recognized as an ideology—one that has persisted for more than 100 years and that continues to thrive under the modern garb of the IQ debate or sociobiology?

The reason is that it allows racism a place within science: changing the name from eugenics to sociobiology makes it appear as a respectable science. The power of ideology is not in the ideas but in the structure of our society, where the rich and the powerful need justification for their position. That is why race science as an ideology is a natural corollary of capitalism and groups like the G7, the club of the rich countries who want to create a “rule-based international order.” Race science as sociobiology is a more genteel justification than eugenics for the rule of capital at home and ex-colonial and settler-colonial states abroad. The fight for science in genetics has to be fought both within and outside science as the two are closely connected.

This article was produced in partnership by Newsclick and Globetrotter.