The following excerpts are adapted from the author’s newly released book, Against the World: Anti-Globalism and Mass Politics between the World Wars, published by W. W. Norton & Company.
“In a world of falling prices, no stock has dropped more catastrophically than International Cooperation.” — DOROTHY THOMPSON, 1931
The era of globalism was over.
Even committed internationalists “have lost faith and join in the chorus of those who never sympathized with our ideals, and say internationalism has failed,” despaired Mary Sheepshanks, a British feminist and internationalist. Although she was confident that the spirit of internationalism would return once “the fumes cleared from men’s brains,” it had been replaced for the moment by “race hatred and national jealousy, leading to tariffs, militarism, armaments, crushing taxation, restricted intercourse, mutual butchery, and the ruin of all progress.”
The year was 1916. Hundreds of thousands of European boys and men were already dead, and nearly everyone was penning obituaries for internationalism. The fumes did not clear quickly. More than twenty-five years later, the Austrian Jewish writer Stefan Zweig would publish his memoir, The World of Yesterday. It was a nostalgic eulogy for a lost era of globalism. Zweig, a self-described “citizen of the world,” recalled, “Before 1914, the earth had belonged to all. People went where they wished and stayed as long as they pleased. There were no permits, no visas, and it always gives me pleasure to astonish the young by telling them that before 1914 I travelled from Europe to India and to America without passport and without ever having seen one.” After the war, everything changed. “The world was on the defensive against strangers . . . The humiliations which once had been devised with criminals alone in mind now were imposed upon the traveller, before and during every journey. There had to be photographs from right and left, in profile and full face, one’s hair had to be cropped sufficiently to make the ears visible; fingerprints were taken . . . they asked for the addresses of relatives, for moral and financial guarantees, questionnaires, and forms in triplicate and quadruplicate needed to be filled out, and if only one of this sheaf of papers was missing one was lost.” He linked these bureaucratic humiliations to a loss of human dignity and the lost dream of a united world. “If I reckon up the many forms I have filled out during these years . . . the many examinations and interrogations at frontiers I have been through, then I feel keenly how much human dignity has been lost in this century which, in our youth, we had credulously dreamed as one of freedom, as of the federation of the world.”
In Britain, economist John Maynard Keynes penned his famous obituary for globalization shortly after the war ended. “What an extraordinary episode in the economic progress of man that age was which came to an end in August, 1914!” he wrote. In the golden age before the war, “The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep.” It was an age in which “the projects of militarism and imperialism, of racial and cultural rivalries, of monopolies, restrictions, and exclusion, which were to play the serpent to this paradise, were little more than the amusements of his daily newspaper.” These looming threats “appeared to exercise almost no influence at all on the ordinary course of his social and economic life, the internationalization of which was nearly complete in practice.”
Stefan Zweig and John Maynard Keynes remain among the most renowned analysts of the changes brought by the First World War. They both understood these changes in terms of the end of a golden era of globalization, during which people, goods, and capital breezed across international frontiers. But their very nostalgia for a lost world of globalism offers an important clue as to the causes of its downfall. Both men were myopic about the extent to which the freedoms they associated with globalization were the privileges of a narrow elite (“It may be I was too greatly pampered,” Zweig speculated . . . ). The earth had not belonged to everyone before 1914. It had, however, belonged to people like Keynes and Zweig.
Zweig and Keynes traveled the world unmolested by bureaucrats before the First World War largely because they were wealthy, highly educated, white European men. They traveled freely for business and pleasure, with no concern for their physical safety. Nor did they worry about the meddlesome interference of husbands, fathers, or state authorities.
In steerage, the World of Yesterday looked quite different. Migrants headed toward the United States in the late nineteenth century were already subjected to the poking and prodding of doctors charged with excluding sick, disabled, and mentally ill migrants, along with those deemed “likely to become a public charge” (including most single women). Nonwhite migrants were categorically excluded. Millions of people in the world lived in deep poverty in regions that were denied political sovereignty and exploited economically for the benefit of Europeans and North Americans. While it was true that international trade benefited all parties in the aggregate, it exacerbated inequality between rich countries and poor countries. Likewise, within industrialized countries, globalization did not benefit everyone equally: there were clear winners and losers.
Keynes frankly acknowledged all this. The bounty of globalization was not shared equally. But inequality, he claimed, had been seen as a necessary corollary to progress in the nineteenth century. “The greater part of the population, it is true, worked hard and lived at a low standard of comfort, yet were, to all appearances, reasonably contented with this lot.” This was because they believed in the prospect of social mobility. “Escape was possible,” he insisted, “for any man of capacity or character at all exceeding the average.”
The war shattered those illusions. The magnitude of wartime sacrifices bred popular demands for more immediate justice. Across Europe and the world, workers, women, and colonial subjects took to the streets, demanding sovereignty and greater equality. In Russia the discontent combusted into revolution, which seemed poised to spread westward. The wheels of global integration ground to a halt. This spelled disaster for Europe and the world, Keynes warned. “An inefficient, unemployed, disorganized Europe faces us, torn by internal strife and international hate, fighting, starving, pillaging, and lying.”
His warning was prescient. The era of anti-globalism lasted another two decades, punctuated by the greatest global economic crisis in world history, the Great Depression. Nor would the strife be overcome with a new treaty or a peaceful handshake. Rather, as American journalist Dorothy Thompson would observe from Berlin in 1931, “Looking at Europe, from the British Isles to the Balkans, one is forced to the admission that after twelve years of the League of Nations, the International Court . . . multilateral treaties, Kellogg Pacts, the International Bank and disarmament conferences, the whole world is retreating from the international position and is taking its dolls and going home.”
Why and how did so many people turn against the world after 1918? And what were the consequences of this anti-global turn? This book attempts to answer these questions. In the process, it reframes the history of interwar Europe not only as a battle between fascism and communism, democracy and dictatorship, but also as a contest over the future of globalization and globalism. The era between the two world wars was defined by attempts to resolve mounting tensions between globalization on the one hand and equality, state sovereignty, and mass politics on the other.
Moving through time and across space, I aim to give voice to the diversity of individuals who participated in this debate, to how it played out in local everyday contexts and at the level of national and international politics. The protagonists include several famous and infamous people—dictators, internationalists, industrialists, and economists— but also many individuals on the margins of history, including migrant women, garment workers, shopkeepers, unemployed veterans, radical gardeners, and disillusioned homesteaders.
There is no doubt about the decline of global mobility and trade in this period. On the one hand, the First World War was a “global” war. It mobilized human and material resources around the world and increased international financial entanglement through a massive web of international debt (especially debts to the United States). On the other hand, the war produced unprecedented supply shocks. The cost of shipping tripled, and inflation soared. Meanwhile, states introduced new tariffs, exchange controls, and other protectionist measures, and sought to cut off supplies to their enemies. Economic historians estimate that global exports declined by 25 percent due to the outbreak of the First World War, recovering to prewar levels only in 1924. There was a brief period of growth in the late 1920s, but all these gains were lost during the Great Depression. By 1933, world trade had declined 30 percent from 1929 levels and was 5 percent lower than it had been in 1913. Trade did not reach pre-1913 rates of growth again until the 1970s.
Transatlantic migration, which reached a peak of 2.1 million in 1913, came to a sputtering halt during the First World War and recovered only briefly after the war ended. Global migration rates remained high in the 1920s, especially within Asia, but the Great Depression radically curbed mobility everywhere in the world. This was partly due to a reduction in demand for migrant labor, but it was also caused by the closing of state borders and new restrictions on migration and mobility. Global communication also slowed down. News that raced via telegraph from Europe to North America and Australia in a single day in 1913 took weeks to arrive in 1920. And the gold standard, the motor of global financial integration, broke down during the First World War and was abandoned by the roadside during the 1930s, first by Great Britain (1931), then the United States (1933), and finally by France and other European powers.
These numbers, and the broader economic histories of globalization and deglobalization between the two world wars, are critically important. But my focus is rather on the grassroots origins and human consequences of the popular revolt against globalism, both for self-professed globalists such as Keynes and Zweig and for individuals who saw globalism as a threat to their aspirations for greater equality and stability. It was this popular confrontation with globalization that ultimately caused its disruption and transformation. Popular anti-globalism arose with accelerating globalization itself in the late nineteenth century, but in the 1920s and 1930s, the demands of anti-global activists were increasingly taken up by political parties and states. Their efforts to render individuals, families, and states more self-sufficient had mixed results, but lasting consequences.
The following excerpts are adapted from the author’s new book, Syria Betrayed: Atrocities, War, and the Failure of International Diplomacy, published by Columbia University Press.
“Everybody had their agenda and the interests of the Syrian people came second, third, or not at all.” — LAKHDAR BRAHIMI, UN special envoy for Syria, August 31, 2015
In early 2011 the world was stunned as the Arab Spring tore through Tunisia, then Egypt, and then Libya, Bahrain, Oman, Yemen, and Jordan. Syria stood at the precipice. As diplomats at the United Nations argued about what to do in Libya and the deteriorating situation in Côte d’Ivoire, few understood that Syria was descending into a hell of civil war that would consume more than half a million lives, displace more than half the country’s population, host the brutally genocidal Islamic State, and draw in the militaries of Iran, Hezbollah, Russia, the United States, Turkey, and others. As Syria’s tragedy unfolded, not one foreign government consistently prioritized the protection of Syrians from atrocity crimes. Not only did they do little to alleviate suffering, much of what they did made matters worse. They betrayed Syria’s civilians by breaking the trust between peoples, states, and global institutions exemplified by the responsibility to protect.
It is difficult to convey the extent of the brutality inflicted on Syria’s tormented civilians since the uprising began in 2011, since raw numbers have a numbing effect. Syrians have been shot in the streets as they protested. Tens of thousands were hauled into prisons and tortured until dead. Tens of thousands more live on in those conditions. Barrel bombs packed with high explosives, nails, and other makeshift shrapnel have been hurled indiscriminately by the dozen into civilian neighborhoods. Men, women, and children have been gassed to death with sarin and chlorine. Civilians have been shot, knifed, beheaded, and even crucified. They have been denied food, water, and medicine to the point of malnutrition. Children have had their homes brought down on top of them and have been raped, shot, tortured, and forcibly recruited into armed groups. Women and girls have been kidnapped, trafficked, and sold as sex slaves. Schools have been systematically targeted and destroyed. Hospitals and medical centers suffered the same fate. The government and its allies were not responsible for all Syria’s atrocities, but they were responsible for the overwhelming majority. Syrian civilians found themselves trapped between ISIS extremism and its deranged ideology enforced by beheading, immolation, and slavery and the indiscriminate barrel bombs, artillery fire, rockets, missiles, and militia of the government and its allies. Yet even at the peak of ISIS’s power in Syria, jihadists killed Syrian civilians at a lower rate than the government. Different datasets record the number of civilians killed by the government and its allies in the decade between 2011 and 2021 as being between 175,000 and 207,000. In comparison, those same datasets record that ISIS was responsible for the deaths of between 5,000 and 6,500 Syrian civilians. The number of civilians killed by other opposition groups ranges between 6,000 and 11,000. 
Put another way, the Syrian government and its allies are likely responsible for between 86 and 94 percent of all civilian deaths directly caused by the war. These stark discrepancies show that while opposition groups certainly perpetrated atrocities, they did not do so on anything like the scale perpetrated by the government and its allies. There is no place for moral equivalency in the story of Syria’s war.
More than sixty years earlier, the newly established United Nations General Assembly adopted a convention to prohibit genocide and establish a legal duty to prevent it. The following year the four Geneva Conventions established what we today call International Humanitarian Law. Additional protocols agreed to in 1977 stipulated that “the civilian population as such, as well as individual civilians, shall not be the object of attack. Acts or threats of violence the primary purpose of which is to spread terror among the civilian population are prohibited” (article 13, protocol II). The protocols required that any use of force be strictly confined to military goals and established the legal principle of discrimination—the rule that soldiers are obliged to discriminate between soldiers and civilians and should refrain from violence if they cannot tell the difference. Violations of these laws have become known as “war crimes” and “crimes against humanity.” New laws restricted the use of “Certain Conventional Weapons” (1980, 1995, 1996, 2008). The Chemical Weapons Convention in 1997 prohibited possession, manufacture, and use of chemical weapons, and the Organization for the Prohibition of Chemical Weapons was established to oversee it. In the same year, the Ottawa Treaty banned the manufacture, stockpiling, and use of antipersonnel land mines. In 2008 cluster munitions were also prohibited, by a treaty that garnered the support of more than a hundred states. The scope of legal obligations doesn’t end with the prohibition of genocide, war crimes, and crimes against humanity, however. States have legal obligations to prevent these crimes, protect their victims, and promote compliance with the law. These laws reshaped expectations about how war ought to be conducted and civilians protected from its worst ravages. They established legal limits to what a government can lawfully do to its people.
They codified the notion that sovereignty entails legal responsibilities as well as rights.
But these laws always stood in tension with two harsh political realities: First, that in war power tends to matter more than justice, since when the fighting starts actors rarely yield to law and justice alone. Indeed, it is precisely because they disagree about what justice is and what it entails that they fight. Second, that for all the talk of the rights of individuals and groups to protection from atrocity crimes, governments have tended to privilege sovereignty—especially their own—over the protection of basic human rights. There is a good reason for that, for sovereignty and its attendant right to noninterference protects postcolonial and small states from the coercive interference of the powerful and helps maintain a basic condition of orderly conduct among states. The awkward juxtaposition of the humanitarian aspirations expressed in international humanitarian law and a sovereignty-based international order raised difficult practical and ethical questions about what to do when states themselves committed atrocities against sections of their own population. The result was an acute gap between what the law said about how states should behave and how they actually behaved. Genocide, war crimes, and crimes against humanity persisted, often untroubled by outside interference. This became a matter of global concern after the Cold War and high-profile failures to stem genocide in Rwanda and Srebrenica; mass killing and ethnic cleansing in Angola, Bosnia, Burundi, Croatia, East Timor, Kosovo, Liberia, Sierra Leone, Zaire/Democratic Republic of Congo; and state repression in Iraq. Time and again, international society proved unwilling or unable to uphold its own laws in the face of such disasters. The principle of the “responsibility to protect”—or R2P as it has become known—was devised as a way of navigating these dilemmas. 
Unanimously endorsed by the UN General Assembly in 2005, the principle meant that governments recognized they have a responsibility to protect their populations from genocide, war crimes, ethnic cleansing, and crimes against humanity. They agreed to encourage and help one another fulfill their responsibility. They also pledged to use diplomatic, humanitarian, and other peaceful means to protect populations and decided that when a state is manifestly failing to protect its population from atrocities, the international community has a responsibility to take “timely and decisive action” to do so, using all necessary means through the United Nations Security Council. This commitment was made unanimously by the largest ever gathering of Heads of State and Government at the United Nations in 2005. It was reaffirmed by the General Assembly in 2009 and 2021. At the time of this writing, R2P had featured in ninety-two UN Security Council resolutions and statements and fifty-eight resolutions of the UN Human Rights Council. All this counted for little in Syria.
This book explains how and why the world failed to fulfill its responsibility to protect Syrians. Ultimately, it is a story of priorities, of how other things came to be seen as being more important than protecting Syrians from their government. So-called realists might say that this is inevitable; that we live in a brutal and illiberal world where power matters more than justice and where even trying to stop atrocities in other countries invariably makes things worse. But this takes too much for granted. It ignores evidence that determined action can mitigate and end atrocities. And, like all structural theories, it absolves individuals of responsibility for their choices. As I will show, political leaders were presented time and again with choices, and almost every time they chose not to make alleviation of Syria’s suffering their priority. These choices had direct, sometimes immediate, consequences for the lives of Syrians, usually for the worse. Things could have been different. Steps could have been taken to save lives, perhaps even lots of lives. I will show how decision making was guided by shibboleths: false assumptions that were exposed one by one. Chief among them was the conviction that Syria’s president, Bashar al-Assad, could be persuaded to reform or agree to share power through a political settlement. Foreign actors clung to that belief despite its evident faults even as their peace processes zombified. There were other shibboleths too, about the impossibility of using force to good effect, about the opposition’s inherent extremism, and about Russian good faith.
There are innumerable ways of telling this tragic story, but however one tells it, the central point remains the same: that despite moral imperatives, legal obligations, and our knowledge of what happens when the world turns a blind eye to atrocities, governments and international organizations chose not to prioritize the protection of Syrians because they thought other things were more important. First, Syria’s civilians were betrayed by their own government. To Assad, killing civilians was always a price worth paying for regime survival. Then, they were betrayed by the government’s foreign allies who blocked any meaningful multilateral approach to the crisis. Almost from the start, Assad’s tottering government depended for its survival on foreign allies, principally Iran, Russia, and Hezbollah, cheered on from the sidelines by China and to a lesser extent, at the beginning at least, India, Brazil, and South Africa. Then those who claimed to be the friends of Syria’s people, their most immediate neighbors, betrayed them. For all their posturing, Syria’s Arab neighbors also had other priorities and were often more concerned with their own survival and legitimacy and their regional competition for hegemony, status, and influence, than they were with the plight of Syrians. They competed against one another as much as with Damascus and fostered the fragmentation and radicalism that doomed Syria’s opposition. Turkey stayed the course longer than the others but mainly because it had a Kurdish problem and a refugee crisis to resolve. And then, those states most vociferous in their support for R2P and the principles of protection betrayed Syria’s civilians. The West stridently condemned the violence, demanded reform, and agonized over what to do. Admittedly, the actions of others presented concerned Westerners with few appealing options. But protecting Syria’s civilians was never their main priority either. 
For the United States at different times, priorities included military withdrawal from the Middle East, combatting Islamist terrorism, rapprochement with Iran, and protecting itself and its allies from the perceived threat posed by refugees fleeing for their lives. For Europeans, distracted by economic crisis and disunity, fear of terrorism and refugees always loomed larger than humanitarian concerns. Priorities shifted, but the protection of Syrian civilians was rarely even close to being at the top of the list. Even the United Nations—the institution entrusted to implement R2P—succumbed. As earnest efforts to negotiate peace crumbled, the organization propped up a zombie peace process that helped Assad while its humanitarian agencies funneled millions of dollars to the government and hundreds of millions of dollars’ worth of aid to government-controlled areas, despite that same government prohibiting the flow of aid to opposition areas it was besieging, bombarding, and starving. Thus did the United Nations aid and abet a government strategy based on atrocities.
The following excerpts are adapted from the author’s new book, Spare, published by Penguin Random House.
Days later I was in Botswana, with Chels. We went to stay with Teej and Mike. Adi was there too. The first convergence of those four special people in my life. It felt like bringing Chels home to meet Mum and Dad and big bro. Major step, we all knew.
Luckily, Teej and Mike and Adi loved her. And she saw how special they were too.
One afternoon, as we were all getting ready to go for a walk, Teej started nagging me.
Bring a hat!
And sunscreen! Lots of sunscreen! Spike, you’re going to fry with that pale skin!
All right, all right, Mom.
It just flew out of my mouth. I heard it, and stopped. Teej heard it and stopped. But I didn’t correct myself. Teej looked shocked, but also moved. I was moved as well. Thereafter, I called her Mom all the time. It felt good. For both of us. Though I made a point, always, to call her Mom, rather than Mum.
There was only one Mum.
A happy visit, overall. And yet there was a constant subtext of stress. It was evident in how much I was drinking.
At one point Chels and I took a boat, drifted up and down the river, and the main thing I remember is Southern Comfort and Sambuca. (Sambuca Gold by day, Sambuca Black by night.) I remember waking in the morning with my face stuck to a pillow, my head not feeling like it was fastened to my neck. I was having fun, sure, but also dealing in my own way with unsorted anger, and guilt about not being at war—not leading my lads. And I wasn’t dealing well. Chels and Adi, Teej and Mike said nothing. Maybe they saw nothing. I was probably doing a pretty good job of covering it all up. From the outside my drinking probably looked like partying. And that was what I told myself it was. But deep down, on some level, I knew.
Something had to change. I knew I couldn’t go on like this.
So the moment I got back to Britain I asked for a meeting with my commanding officer, Colonel Ed Smyth-Osbourne.
I admired Colonel Ed. And I was fascinated by him. He wasn’t put together like other men. Come to think of it, he wasn’t put together like any other human I’d encountered. The basic ingredients were different. Scrap iron, steel wool, lion’s blood. He looked different too. His face was long, like a horse’s, but not equine smooth; he had a distinctive tuft of hair on each cheek. His eyes were large, calm, capable of wisdom and stoicism. My eyes, by contrast, were still bloodshot from my Okavango debauch, and darting all around as I delivered my pitch.
Colonel, I need to find a way of getting back onto operations, or else I’m going to have to quit the Army.
I’m not certain Colonel Ed believed my threat. I’m not certain I did. Still, politically, diplomatically, strategically, he couldn’t afford to discount it. A prince in the ranks was a big public-relations asset, a powerful recruiting tool. He couldn’t ignore the fact that, if I bolted, his superiors might blame him, and their superiors too, and up the chain it might go.
On the other hand, much of what I saw from him that day was genuine humanity. The guy got it. As a soldier, he felt for me. He shuddered at the thought of being kept from a scrap. He really did want to help.
Harry, there might be a way…
Iraq was permanently off the table, he said. Alas. No two ways about that, I’m afraid. But maybe, he added, Afghanistan was an option.
I squinted. Afghanistan?
He muttered something about it being “the safer option.”
What on earth was he banging on about? Afghanistan was worlds more dangerous than Iraq. At that moment Britain had seven thousand soldiers in Afghanistan and each day found them engaged in some of the fiercest combat since the Second World War.
But who was I to argue? If Colonel Ed thought Afghanistan safer, and if he was willing to send me there, great.
What job would I do in Afghanistan, Colonel?
FAC. Forward air controller.
Highly sought-after job, he explained. FACs were tasked with orchestrating all air power, giving cover to lads on the ground, calling in raids—not to mention rescues, medevacs, the list went on. It wasn’t a new job, certainly, but it was newly vital in this new sort of warfare.
Why’s that, sir?
Because the bloody Taliban is everywhere! And nowhere!
You simply couldn’t find them, he explained. Terrain was too rugged, too remote. Mountains and deserts honeycombed with tunnels and caves—it was like hunting goats. Or ghosts. You had to get the bird’s-eye view.
Since the Taliban had no air force, not one plane, that was easy. We British, plus the Yanks, owned the air. But FACs helped us press that advantage. Say a squadron out on patrol needed to know about nearby threats. The FAC checked with drones, checked with fighter pilots, checked with helicopters, checked his high-tech laptop, created a 360-degree picture of the battlefield.
Say that same squadron suddenly came under fire. The FAC consulted a menu—Apache, Tornado, Mirage, F-15, F-16, A-10—and ordered up the aircraft best suited to the situation, or the best one available, then guided that aircraft onto the enemy. Using cutting-edge hardware, FACs didn’t simply rain fire on the enemy’s heads, they placed it there, like a crown.
Then he told me that all FACs got a chance to go up in a Hawk and experience being in the air.
By the time Colonel Ed stopped talking I was salivating. FAC it is, sir. When do I leave?
Not so fast.
FAC was a plum job. Everyone wanted it. So that would take some doing. Also, it was a complex job. All that technology and responsibility required loads of training.
First things first, he said. I’d have to go through a challenging certification process.
At RAF Leeming.
In…the Yorkshire Dales?
Copyright © 2023 by Prince Harry, The Duke of Sussex
The following excerpts are adapted from the author’s new book, India Is Broken: A People Betrayed, Independence to Today, published by Stanford University Press, Stanford, California.
By the rules of the Indian National Congress (the Congress Party), Vallabhbhai Patel should have been the party’s president at the time of independence. If that had been so, he might well have been India’s first prime minister. However, in August 1947, Jawaharlal Nehru, not Patel, became prime minister.
Patel and Nehru differed greatly in their economic and social philosophies and in their approaches to the use of government authority and power. Patel, however, lived to see only the first three years of post-independence India. As deputy prime minister and home (interior) minister, he left a lasting legacy. Even during those few years, he and Nehru fought bitterly on the priorities for India’s political and economic future. If Patel had become India’s first prime minister or if he had lived longer as Nehru’s deputy, post-independence India would have taken a very different shape.
Two Leaders—Two Worlds
Patel was born to a peasant family in October 1875 and was raised in a modest two-story home. As a young man, he observed that fame and fortune came easily to barristers educated in England. As he later explained, “I studied very earnestly” and “resolved firmly to save sufficient money for a visit to England.” Patel became a British-trained lawyer and, upon returning to India, established a very successful criminal law practice.
Patel made his initial mark in politics in the first half of 1928, when he led peasants in Bardoli, an administrative area in the current state of Gujarat, in their fight against the British government’s onerous demands for land revenue. Despite its peaceful nature, the contest with the powerful British Raj became, in the popular imagination, the “battle of Bardoli.” Patel won that battle against British might, a victory for which Bardoli’s people conferred on him the title “Sardar,” chief or general. Vallabhbhai Patel has ever since been known as Sardar Patel.
Nehru was born in November 1889 to one of India’s most prominent families. His father, Motilal Nehru, was a wealthy lawyer and senior Congress Party leader. Anand Bhavan, the stately Nehru family home in Allahabad, now houses a historic museum and a planetarium. Jawaharlal studied at Harrow, the elite British public school, before attending the University of Cambridge. He qualified as a barrister in England, although he barely ever entered a courtroom. In August 1942, after Gandhi launched the Quit India movement, the British threw all Indian leaders in jail. Interned at the Ahmednagar Fort, Nehru grew a rose garden and played badminton with other prisoners. In a five-month period between April and September 1944, Nehru wrote his magnificent and timeless history The Discovery of India.
Patel was as much a man of action as Nehru was a historian and philosopher. As Gandhi pithily observed, “Jawahar is a thinker, Sardar a doer.”
Gandhi Chooses Nehru
In late 1945 and early 1946, India’s British rulers held elections for the central and provincial assemblies in preparation for the transfer of power. The Congress Party won large majorities in these elections, aided in part by campaign funds Patel helped raise. In a gushing profile, Time magazine wrote that Patel had no “pretensions to saintliness.” The magazine described him as, “in American terms, the Political Boss. Wealthy industrialists thrust huge campaign funds into his hands.”
In late April 1946, the Congress Party was ready to select its next president. Since India’s freedom was imminent, the choice of the party’s president was critical. The Congress Party president would lead the party, and hence India, into independence. Under the established process, twelve of the fifteen Provincial or “Pradesh” Congress Committees nominated Patel; three abstained. As the veteran Congress Party leader Jivatram Bhagwandas (Acharya) Kripalani would later write, the party favored Patel because he was a “great executive, organizer, and leader.” Provincial leaders also felt beholden to Patel for the campaign funds he had raised. The Pradesh Congress Committees were not necessarily endorsing Patel as India’s first prime minister. They understood that Nehru was popular with the Indian public. But they recognized Patel’s leadership qualities and his contributions to the Congress Party. So they placed Patel in a position of prominence from which he could well have emerged as India’s first prime minister.
Gandhi, however, stood above the rules, and he made the decision on who would be the party’s president. Just as he had in 1929 and 1937, when Patel and Nehru competed for the presidency of the Congress Party, Gandhi chose Nehru, knowing on this last occasion that no Pradesh Congress Committee had nominated him. Gandhi saw Nehru as “a Harrow boy, a Cambridge graduate,” who would represent India in international affairs more effectively than Patel. Nehru also had a stronger connection than Patel did with India’s Muslim community. Above all, Nehru was fifty-six years old and like a son to the seventy-six-year-old Gandhi. Patel, whom Gandhi thought of as a younger brother, was seventy-one and in poor health.
The British viceroy, Lord Wavell, had set up an Executive Council as the midway step to India’s independence. As the Congress Party’s president, Nehru became vice president of the viceroy’s Executive Council and, hence, India’s de facto prime minister until the country became independent. Once so established, Nehru had the incumbent’s advantage, in addition to the huge popularity he enjoyed with the Indian public, in becoming independent India’s first prime minister.
Gandhi believed that Nehru and Patel would be like “oxen yoked to the governmental cart. One will need the other and both will pull together.” According to Patel’s daughter, Maniben, Gandhi expected that Patel would prevent Nehru from “making mischief.”
The Oxen Pull Apart
Prime Minister Nehru and Deputy Prime Minister and Home Minister Sardar Patel began the post-independence years entangled in a stormy relationship. They fought about the most consequential matters that defined India back then and continue to do so today.
With Pakistan partitioned as a Muslim nation, a question on people’s minds was what the role and place of Muslims in India would be. Within that broader context, an immediate issue arose as the horrors of religious hatred continued after partition in both India and Pakistan. In the Indian areas marked by Hindu-Muslim tensions, the government’s machinery had collapsed or become “fiercely partisan.” A rumor spread that Patel, as home minister, was protecting and aiding Hindus but not Muslims. Nehru seemed to buy into the rumor, even though it had no basis. The historian Rajmohan Gandhi, grandson of the Mahatma and Patel’s biographer, writes that Patel “was unquestionably roused more by a report of 50 Hindu and Sikh deaths than by another of 50 Muslim deaths. But his hand was just.”
Patel, in turn, was impatient with Nehru’s soft approach toward Pakistani leaders, who were making only half-hearted efforts to contain the violence against Hindus and Sikhs on their side of the border. Patel insisted that the news of this violence was triggering a “mass psychology” of resentment and anger among India’s Hindus and Sikhs. Nehru and Patel never resolved their differences on how best to deal with India’s Hindu-Muslim issue.
They also sparred over Kashmir. On October 22, 1947, a contingent of about five thousand armed tribesmen from Pakistan drove into Kashmir. The maharaja of Kashmir, Hari Singh, was a Hindu, but the Kashmir Valley had a predominantly Muslim population. The maharaja had avoided choosing between Pakistan and India, but on October 24, he desperately appealed to the Indian government for help. On the morning of October 26, Hari Singh signed the instrument of accession to India. That evening, an Indian infantry battalion landed in Kashmir and halted the tribesmen. Pakistani authorities gave the name “Azad Kashmir” (Free Kashmir) to the land west of where the Indian Army stopped the tribesmen. Indians called that area “Pakistan-occupied Kashmir.”
Patel, as minister of states, directed the Kashmir operations. But in early December 1947, he found to his surprise that Nehru, as prime minister, had taken control of India’s Kashmir policy. Patel complained that he had been blindsided, and the two exchanged acrimonious letters.
With Nehru and Patel evidently at loggerheads, Gandhi in late December delivered an ultimatum to Patel: “Either you should run things or Jawaharlal should.” Patel wearily replied, “I do not have the strength. He is younger. Let him run the show. I will help him as much as I can from the outside.” Gandhi, who had kept Patel and Nehru together for so long, agreed that it was time for Patel to step aside but said that he wanted to think the matter over. Fate, however, intervened. On January 30, 1948, a Hindu nationalist named Nathuram Godse shot and killed Gandhi.
After Gandhi’s death, in their moment of shared grief and to quash the swirling rumors of their imminent split, Nehru and Patel came together. In a radio address, Nehru said, “We have had our differences. But India at least should know that these differences have been overshadowed by fundamental agreements about the most important aspects of our public life.” On March 3, Nehru wrote to Patel that the crisis required them to work together as “friends and colleagues.” He ended graciously: “this letter carries with it my friendship and affection.” Patel replied with equal grace: “I am deeply touched, indeed, overwhelmed. We have been lifelong friends and comrades in a common cause.” All talk of Patel’s leaving was forgotten. The twists of history continued, however. On March 8, 1948, while eating lunch at home with his daughter Maniben, Patel had a massive heart attack.
Patel Integrates the States
Patel returned to work quickly after his heart attack and poured his energies into a monumental task that he had begun but not finished. That task was to integrate the princely states into a unified India.
When the British left India, the Indian government in New Delhi did not have authority over the entire land area known today as India. Scattered all over the country were more than five hundred princely states ruled by hereditary princes. Altogether, the princes ruled over one-third of India’s land area and one-fourth of its population. They had survived as princes because, after the 1857 mutiny of Indian soldiers in the British army, British authorities stopped annexing new territories. They feared that more annexation would trigger another mutiny. Instead, the British Crown established the Doctrine of Paramountcy, which granted the British authorities control over the princely states’ foreign policy, defense, and communications, leaving, at least in principle, administration of the states to the princes. At independence, the British transferred to the new Indian parliament full control only over “British India,” the part annexed before 1857; the British also transferred their paramountcy powers over the princely states. In independent India, therefore, the princely states could determine their political relations with the rest of India and set their own commercial policies. India risked becoming a politically and economically balkanized nation.
©2023 by Ashoka Mody. All rights reserved.
The following excerpts are adapted from the author’s latest book, And There Was Light: Abraham Lincoln and the American Struggle, published by Penguin Random House.
“Fellow countrymen,” Lincoln said, “at this second appearing to take the oath of the presidential office, there is less occasion for an extended address than there was at the first. Then a statement, somewhat in detail, of a course to be pursued, seemed fitting and proper.” His task today was less about how the nation must move forward than it was about why he believed the war had been fought, and what it meant. He had begun his presidency with a brief on secession and Union—a brief that had included support for a constitutional amendment that would have banned the federal government from abolishing slavery where it existed at the time. He was opening his second term with a searching statement about human nature, the relationship between the temporal and the divine, and the possibilities of redemption and of renewal.
Lincoln acknowledged that mortal powers were limited. “Both parties deprecated war; but one of them would make war rather than let the nation survive; and the other would accept war rather than let it perish,” he said. “And the war came.” The gulf between North and South was so profound, so unbridgeable, that only the clash of arms could decide the contest between freedom and bondage. In a speech that stipulated the ambiguity of the world, Lincoln was unambiguous about why the war had come: slavery. “One eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the Southern part of it,” the president said. “These slaves constituted a peculiar and powerful interest. All knew that this interest was, somehow, the cause of the war.” There was no escaping this central truth.
Lincoln turned to the perils of self-righteousness and self-certitude, North and South. “Both read the same Bible, and pray to the same God,” he said, “and each invokes His aid against the other.” Then the president rendered a moral verdict: “It may seem strange that any men should dare to ask a just God’s assistance in wringing their bread from the sweat of other men’s faces; but let us judge not that we be not judged.” In speaking of the strangeness of profiting from the labor of others—a subtle but unmistakable indictment of slave owners—the president drew on the third chapter of the Book of Genesis: “In the sweat of thy face,” the Lord commanded, “shalt thou eat bread.” Adam and Eve are being expelled from the Garden of Eden; the whole structure of the world as we know it was being formed in this moment. To work for one’s own wealth, rather than taking wealth from others, was the will of God.
In the same breath in which he framed slavery as a violation of God’s commandment, Lincoln invoked the words of Jesus: Judge not. This injunction is found in the Gospel of Matthew, in a plea for forbearance, forgiveness, and proportion: “Judge not, that ye be not judged.” The president believed he was doing the right thing—yet he knew that those who opposed him believed the same. “The prayers of both could not be answered; that of neither has been answered fully,” Lincoln said of North and South. “The Almighty has His own purposes.”
Lincoln had come to believe that the Civil War might well be a divine punishment—a millstone—for a national sin. The president hoped the strife would soon be over, and the battle won. “Yet,” Lincoln said, “if God wills that it continue, until all the wealth piled by the bond-man’s two hundred and fifty years of unrequited toil shall be sunk, and until every drop of blood drawn with the lash, shall be paid by another drawn with the sword, as was said three thousand years ago, so still it must be said, ‘the judgments of the Lord, are true and righteous altogether.’ ” To Frederick Douglass, “these solemn words…struck me at the time, and have seemed to me ever since to contain more vital substance than I have ever seen compressed in a space so narrow.”
Lincoln’s point was a startling one from an American president: God was exacting blood vengeance for the sin of human enslavement in a specific place and a specific time—in the United States of America in the mid-nineteenth century. This was not routine political rhetoric. In the Second Inaugural Address, Lincoln was affirming a vision of history as understood in the Bible: that there was a beginning, and there will be an end. In the meantime, the only means available to a nation “under God” to prosper was to seek to follow the commandments of that God.
The alternative? Chaos and the reign of appetite without restriction and without peace. Lincoln once said that “the author of our being, whether called God or Nature (it mattered little which), would deal very mercifully with poor erring humanity in the other, and, he hoped, better world.” Until then, “poor erring humanity” was charged with making its words and work acceptable in the sight of a God who had enjoined humankind to love one another as they would be loved. That is where Lincoln left the matter in his peroration on Saturday, March 4. “With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in; to bind up the nation’s wounds, to care for him who shall have borne the battle, and for his widow, and his orphan—to do all which may achieve and cherish a just, and a lasting peace among ourselves, and with all nations.”
His speech done, Lincoln turned to Chief Justice Salmon P. Chase for the oath of office. The sun came through the clouds. “It made my heart jump!” the president recalled of the breaking light. Lincoln, Noah Brooks wrote, “was just superstitious enough to consider it a happy omen.” A Black man who worked at the Washington Navy Yard, Michael Shiner, recorded the moment in his diary: “The wind ceas[ed] blowing the rain ceased raining and the Sun came out and it was as clear as it could be.”
The chief justice noted the passage of the Bible the president kissed—Isaiah 5:27–28, which reads: “None shall be weary nor stumble among them; none shall slumber nor sleep; neither shall the girdle of their loins be loosed, nor the latchet of their shoes be broken: Whose arrows are sharp, and all their bows bent, their horses’ hoofs shall be counted like flint, and their wheels like a whirlwind.”
There would be no rest. The wheels turned. The work went on.
Later that afternoon, the president would ask Frederick Douglass what he’d thought of the speech.
“Mr. Lincoln,” Douglass replied, “that was a sacred effort.”
Copyright © 2022 by Merewether LLC
In Benjamin “Bibi” Netanyahu’s sweeping, moving autobiography, one of the most formidable and insightful leaders of our time tells the story of his family, his path to leadership, and his unceasing commitment to defending Israel and securing its future. The following excerpts are adapted from the author’s latest book, “Bibi: My Story,” published by Simon & Schuster.
Author’s Note: Some details of military and Mossad operations described in the book are excised due to Israeli national security requirements. For the same reason, other such operations, as well as details of certain diplomatic missions, are excluded in their entirety.
What do I remember from my earliest years?
Our house on the corner of Ein Gedi Street in the garden neighborhood of Talpiot in South Jerusalem. It was a one-story home with tall ceilings, shaded by cypress trees. These were the years of spartan austerity that followed the end of Israel’s War of Independence in 1948, a year before I was born. Determined to ensure that our family would have enough to eat, my mother raised chicks in our backyard. They were soon devoured by weasels. She found other ways to pamper us. In this she was helped by her friend Tessie from New York, who sent us food packets. What a wonder it was for me, then a toddler, and my brother Yoni to peer into those packages and discover glistening chocolate bars embedded in nylon stockings, along with other bounties sent to us from that magical land across the sea, America.
Soon, when I was three years old, my brother Iddo arrived. I vividly recall him confined in his crib, wailing in protest as his older brothers played freely around him. Perhaps some constraints on Yoni and me would have been in order: in one of my forays I explored an electrical socket with my mouth and the current tore my upper lip, leaving a permanent scar. Often asked about it, I never claimed it was a battle scar. Those would come later.
Jerusalem in those days resembled more a sleepy town than the sprawling, vibrant metropolis it is today. The quiet Talpiot neighborhood where we lived was home to a few prominent intellectuals, writers and scholars, of which my father, Benzion Netanyahu, was one. As early as I can remember I knew my father worked on something called “the sicklopedia.”
A historian by profession, Father was the editor of the Encyclopedia Hebraica, which he modeled on the Encyclopaedia Britannica.
We led a comfortable life by Israeli standards because he was handsomely paid for producing a new volume each year. By 1959 the encyclopedia had been purchased by 60,000 families out of Israel’s roughly 450,000 households, an impressive 14 percent, meriting our reputation as the People of the Book. My father broadened the orientation of the encyclopedia from a narrow Jewish one to one of general knowledge with emphasis on Jewish subjects. Families would wait for the next volume to come out, perusing the entries for their own erudition. The secret to the encyclopedia’s great success, my father said, was clarity. Eighth graders and doctoral students, he said, should be able to read and understand with equal ease complex entries made simple by his rigorous editing. And they did.
Father had a decidedly empirical approach to the search for truth and an intimate familiarity with Jewish history. He once asked his science editor, Professor Yeshayahu Leibowitz, to review an entry on the origin of the universe submitted to the encyclopedia by a British scholar. Leibowitz, later an icon of the Israeli left, was my father’s friend. An eccentric who visited our home frequently, he combined devout religiosity with scientific expertise.
Sometime after my father requested the entry, Leibowitz submitted his edited version of the British scholar’s essay on the various theories of the universe’s creation. My father read it with great interest.
“Leibowitz,” he said, “you crossed out the theory that the universe was created by an omnipotent force. To me that makes as much sense as the other theories. You are, after all, a religious man. Don’t you believe in this possibility?”
“My dear Netanyahu,” Leibowitz said, “from a religious point of view of course I believe it. But scientifically? It doesn’t hold.”
Like his prolific mathematician brother, Professor Elisha Netanyahu, who was among the founding members of the math department at the Technion (Israel’s MIT), Father retained an unquenchable intellectual curiosity until the end of his life. In his nineties, he gave me two books he had just read, the first describing the development of the atom bomb and the second a biography of Richard Feynman, the Nobel Prize–winning physicist.
In many ways he was an intellectual descendant of our distant relative, the Vilna Gaon, the great Jewish sage who two hundred years earlier instructed yeshiva students to add mathematics and physics to the study of the Scriptures.
As a historian, Father sought the unvarnished truth and went where the facts took him. He would study historical developments with great depth, balancing conflicting theories and data, and only then make up his mind. But once he did, he was fearless in defending his views.
My father’s mentor, Professor Joseph Klausner, lived on a hill around the corner from our house in Talpiot. Klausner was a world-renowned historian of Second Temple Jewish history. He had written two definitive works on the origins of Christianity, Jesus of Nazareth and From Jesus to Paul. He was also a great expert on modern Hebrew literature. A linguist, he had invented the modern Hebrew words for “shirt,” “pencil,” and many other terms. The rebirth of the Jewish state required the revival and modernization of ancient Hebrew, a task undertaken by several ingenious scholars, including Klausner.
As small children, Yoni and I of course knew none of this as each Sabbath we made our way to Klausner’s house, on whose door lintel he had inscribed the words “Judaism and Humanism,” the title of one of his books.
Crossing a field, we would pick flowers along the way, which we would give the professor in a fixed ritual. Klausner would greet us at the door, a kindly bespectacled man in his late seventies with a white goatee. A widower with no children, he lived alone and always welcomed us warmly.
“Welcome children,” Klausner would say.
“Shalom Professor Klausner,” Yoni would respond for both of us.
Klausner would then pose the obligatory question: “Tell me, Jonathan, did you come to see me or did you come for the chocolates?”
“Oh no, Professor Klausner,” Yoni would unerringly respond, “I came to see you.”
Klausner would then usher us to the living room, where he would pull out a box of chocolates from a heavy Central European cabinet. We would pick our choices.
Time after time, this procedure guaranteed success. Then a mishap occurred. One Saturday after Yoni assured the professor of the purpose of our visit, Klausner suddenly turned to me and asked, “And what about you, Benjamin? What did you come for?”
Three years old, I had never been confronted by such a question. Totally disoriented, I covered my eyes with my forearm to shield my bewilderment. For lack of a better answer I kept silent, stuck my other hand into my pocket, and thrust a bunch of crumpled flowers at my interrogator.
Klausner smiled. We got the chocolates.
This was not our only encounter with the great minds of the day. Next to our house was a green wooden shack that served as the neighborhood synagogue. As I peered from the outside through the slats, I saw Yoni join the other worshippers who included Klausner, the writer Shai Agnon, who would later receive the Nobel Prize in literature, and others.
“Why are you here alone, Jonathan?” they would ask Yoni.
“I am Aduk,” Yoni answered, using an arcane Hebrew word for ultra-Orthodox.
“And where is your father?” they pressed.
“He is not Aduk.”
That was definitely true. Yet even though we were a secular family, throughout most of our childhood my parents made Kiddush, kept Shabbat dinner, and celebrated all the major Jewish holidays.
The affection that Yoni received from adults was mirrored by the respect he received from the children in the neighborhood. In the face of the unique, children often respond with either extraordinary cruelty or extraordinary respect. In Yoni’s case it was the latter.
I remember him as a small boy surrounded by children almost twice his age. Quiet and serious, he was totally lacking in bravado. He never posed. Yet older children strangely looked up to him in a manner that would follow him throughout his life, until his tragic death.
The following excerpts are adapted from the author’s latest book, Boris Johnson: The Rise and Fall of a Troublemaker at Number 10, published by Simon & Schuster, Inc.
‘Success is the child of Audacity.’ ~ Benjamin Disraeli, prime minister in 1868 and from 1874 to 1880, in The Rise of Iskander, published in 1833
On the morning of Tuesday 7 April 2020, I was commissioned by the Daily Mail to write Boris Johnson’s obituary. At 7 p.m. the previous evening the prime minister had been admitted to the intensive care unit at St Thomas’ Hospital, and nobody knew whether he would pull through. Death had laid its icy hand on him, and opinion polls showed he received greater public approval and sympathy than at any time before or since. For a few days Johnson was no longer the hated Brexiteer, unscrupulous populist and brazen liar, but a fellow human being, equal with any other victim of the pandemic, mortal like the rest of us.
Your eye may have slid smoothly over the last phrase, but you, dear reader, will die soon enough, as will the author of this book. The glories of our blood and state are shadows, not substantial things. So says the poet, and I have tried while writing about Johnson, as insatiable a glory-seeker as our times can show, to bear in mind that he is also a man.
But an extraordinarily difficult man to write about. When I asked my children, then aged twenty-five, twenty-one and nineteen, if I could dedicate this book to them, provided I put in a line about their having slight reservations about Johnson, one of them replied: ‘Only if you say we think he’s a vile, disgusting human being.’ Boris Johnson inspires in many people a profound and implacable aversion; in many others the warmest affection and support. I do not aspire to change anyone’s mind about him: that would be a vain endeavour. But I do hope, perhaps just as presumptuously, to write a book which partisans on both sides will reckon is fair, and can read with amusement.
A great, maybe insoluble problem at once arises. As soon as I start to explain why Johnson has not, at certain times in his career, been a total failure, I open myself to the charge of seeking to ignore or extenuate his faults. But any sympathy that I extend to him (and I do not think he can be understood without a degree of sympathy) is liable to be dismissed by his admirers as pitifully inadequate.
There was no time to worry about all that while writing his obituary for the Daily Mail, which at a time of national shock and mourning would expect, I assumed, an account which at least ended on a relatively favourable note. This, roughly speaking, is what I sent them:
Boris Johnson loved the Chumbawamba song, ‘I get knocked down, but I get up again. You’re never going to keep me down.’ He was often knocked down, but until his life was cut short by Covid-19 always got back up again. Johnson was far less cautious than the usual run of career politician, took risks which onlookers regarded as mad, but came back from blows which would have crushed a less resilient figure.
On entering the Commons in 2001 as MP for Henley, he decided, in defiance of all prudent advice, to remain editor of The Spectator. Senior politicians and pundits warned him that riding two horses was bound to end in tears. He defied their predictions, and at first all went well. He became more and more famous, and at the start of September 2004, Vanity Fair billed him as ‘the Tory MP who could one day be Britain’s prime minister’.
Michael Wolff, who wrote that magazine’s profile, likened him to two famous actors who had gone into politics: ‘He is, it occurs to me, as he woos and charms and radiates good humour, Ronald Reagan. And Arnold Schwarzenegger… He is, I find, inspirational.’ No other Conservative MP could have been compared to Reagan, one of the most successful (though at first derided) post-war American presidents, or to Schwarzenegger, then serving as governor of California. Johnson had an astounding ability to connect with the wider public. He had star quality, and the Conservatives began to think he might be the leader who could end Labour’s decade of success under Tony Blair.
In the summer of 2004 I started work on my first volume about Johnson, published in 2006 and updated in 2007, 2008, 2012 and 2016. As recounted in the introduction to that work, he was at first tremendously keen on the idea of a book all about him (‘Such is my colossal vanity that I have no intention of trying to forbid you’), but then got cold feet (‘Anything that purported to tell the truth really would be intolerable’) and offered me £100,000 to abandon the project, which I, annoyed by his assumption that I could be bought, turned down.
In October 2004, The Spectator published an editorial in which it abused the people of Liverpool and made several atrocious mistakes about the Hillsborough disaster. There was uproar, and Michael Howard, the Conservative Party leader, who was a Liverpool fan, was warned that the next time he went to a game he would be booed. Howard was furious and ordered Johnson to go and apologise to the people of Liverpool, speaking only to the local media. This Johnson did, but the national press were determined to cover the story too, and during his visit to the city a media scrum developed which amused the watching nation, but made Howard look ridiculous.
Worse soon followed. Johnson dismissed press reports of his affair with Petronella Wyatt as ‘an inverted pyramid of piffle’, the press proved he was lying and Howard, who had only a few months previously promoted him to the post of shadow arts spokesman, now sacked him. By the end of 2004, Johnson’s political career lay in ruins. Many of his fellow Tory MPs, jealous of his fame and angered by his neglect of parliamentary duties, had concluded he was hopelessly dishonest and unreliable.
So when Howard lost the 2005 general election to Blair, and resigned the Tory leadership, Johnson was in no fit state to mount a bid for the vacant post, and instead supported David Cameron, who came through and won. Cameron had been junior to him at Eton, junior to him at Oxford, had a less original mind and, until becoming leader, was less famous than Johnson, who had reached the wider public by giving a series of brilliantly amusing performances on Have I Got News For You.
The following excerpts are adapted from the author’s new book, Spies and Lies: How China’s Greatest Covert Operations Fooled the World, published by Hardie Grant Books.
Yu Enguang’s story has never previously been told. Before his death in 2013, he rose to the highest ranks of China’s intelligence community. He was instrumental in creating the organisations, practices and culture that make influence operations by today’s Ministry of State Security so successful. The MSS continues to emulate the boldness Yu showed as he engaged directly with an international power player, turning Soros’s dream of an open society in China into a source of funds, legitimacy and cover for influence operations.
The China International Culture Exchange Center that Yu led was an MSS-run front organisation, custom-built for engaging with foreigners like Soros. Nearly forty years later, it’s still in active operational use.
To foreigners who met him, Yu seemed like a man deeply interested in and acquainted with the capitalist world, not some paranoid Stalinist. He was a witty and memorable character, skilled at interacting with targets and adept at English – something that stands out in all accounts. While posted to America undercover as a Xinhua journalist, he charmed a Washington Post reporter with his commentary on the Cantonese meal they were sharing. He’d been trained well – the ability to introduce Chinese cuisine to foreigners was specifically drilled into Chinese spies during their English-language courses.
Yu made a mark on Soros representative Liang Heng too, who was persuaded to accept MSS control over the China Fund: ‘The impression Yu gave me was quite good. He was about fifty, tall, with strong eyebrows, big eyes and a sophisticated manner and he talked pragmatically … he’d been to many countries, seen and experienced much, and spoke fluent English.’ Soros likewise bonded with him, despite some apprehension about his special background. Both of them had lived in London and Soros liked the British accent of Yu’s English.
Yu was not just any MSS officer. At the time he was a vice minister of the agency and among the Communist Party’s top foreign intelligence officers. Few within the agency could rival the depth of his overseas experience. Most of all, his operations in hostile capitalist nations taught him that loyalty to the Party came before all else. Only a politically secure officer would feel comfortable ‘dropping cover’ by revealing his MSS affiliation to Liang and Soros. This is also reflected in the fact that he was trusted to represent the MSS abroad, where he built partnerships with foreign intelligence agencies such as Afghanistan’s.
But who was he, really? The first two decades of his spy career were spent embedded in the state-owned Xinhua News Agency, giving him rare opportunities to travel the world. In the 1970s he worked in Xinhua’s London bureau for eight years. One Thai woman living in London who met him at a Chinese embassy function noted that ‘he often worked at home late at night writing dispatches’. Clearly, he had more on his plate than journalism.
From London he was reassigned to the United States, which had only recently opened diplomatic relations with the People’s Republic of China (PRC). During the Carter and Reagan years he headed the newly established Xinhua bureau in Washington, DC, overseeing coverage of the White House. ‘While I tirelessly reported on the activities and speeches of Carter, Mondale, Reagan and Bush and other key White House figures, I also observed many phenomena and gathered many materials,’ he reflected years later in a compilation of his US reportage, which doesn’t reveal his MSS affiliation. Hinting at his dual life as a spy and a journalist, he wrote that it was a job where most achievements were ‘fragile goods, and hard to attach to my name’. By 1985, while he was officially deputy director of the Xinhua department responsible for foreign correspondents, he was in fact probably leading an entire bureau of MSS officers.
Yet ‘Yu Enguang’ may not have existed at all. MSS officers use pseudonyms throughout their careers, even as vice ministers. These aliases often read like puns on their true names, with characters dissected and jumbled into new ones, or surnames replaced with homophones.
Yu is no exception. Though one writer on Chinese espionage assumed they were different people, little-known MSS vice minister Yu Fang looks identical to Yu Enguang. In the only published photo of Yu Fang, taken after his retirement, he stands with the same slouch, wears the same belt and dons the same pair of shaded glasses as Yu Enguang. Both reportedly studied at Renmin University and grew up in Liaoning province in China’s northeast. Yu Enguang was just a pseudonym for Yu Fang.
Among his comrades in the MSS, Yu Fang was just as respected as ‘Yu Enguang’ was by the targets he cultivated. At some point in his career he headed the agency’s important central administrative office, and in the early nineties helped secure the passage of China’s first National Security Law, which expanded and codified MSS powers. The authors of several MSS publications, marked for internal distribution only, thank him for advising on and improving their drafts. He also oversaw MSS production and censorship of histories, TV dramas and movies about spies, which were designed to build public awareness and support for the MSS’s mission.
Ironically for a man who helped bring Chinese intelligence history into the public sphere, Yu’s true legacy is an official secret. Official references to his achievements are brief and elliptical. The authoritative People’s Daily eulogised him in 2013, an honour only a handful of intelligence officers receive: ‘In his sixty years of life in the revolution, Comrade Yu Fang was loyal to the Party, scrupulously carried out his duties and selflessly offered himself to the Party’s endeavours, making important contributions to the Party’s state security endeavour.’ The article also noted that he’d been a member of the National People’s Congress, China’s national legislature, but lists of delegates include only his pseudonym.
The MSS seizure of the China Fund was an impressive display of the agency’s confidence in engaging with one of America’s best-connected and wealthiest men. What it learnt could be applied to future operations as the agency grew more aggressive and internationally focused over the following decade. But it was far from a flawless effort: tracing Yu Enguang and CICEC as arms of the MSS exposes a string of covert operations against the United States, continuing right up to the present day.
Soros had at first accepted the management change at his China Fund as a necessary cost of operating in China. Liang Heng claims he told Soros the truth about Yu’s identity in 1988. The MSS and Ministry of Public Security ‘were co-equal and they couldn’t interfere in each other’s affairs’, Soros argued in 2019, but partnering with the MSS offered quite the opposite of protection in the end. He may have thought he could handle the situation, that his ties to Party leaders could override the conservative proclivities of their spies. After all, his political philanthropy was thriving in Hungary and the Soviet Union despite their security agencies having been formed in the same ideological mould as the MSS.
The following excerpts are adapted from the author’s latest book, Marijuana on My Mind: The Science and Mystique of Cannabis, published by Cambridge University Press.
Martin was an intelligent and cocky young man whose father hoped therapy would help him cut down his cannabis use and mature enough to take over the family’s successful road construction business. Martin only wanted me to help him get away from his father. He was convinced that his cannabis use was part of a healthy life, and he constantly tried to prove that he knew more about the plant than I did. In fact, he did know far more than I about the latest varieties of cannabis and new methods of cultivation. I allowed him to be the expert about what was available at his favorite dispensaries, while I was more interested in hearing how it altered his mind. He dismissed my concerns as those of an old man, but he tolerated me because he liked to argue. In the end, he didn’t change his cannabis use, but he did gradually develop a better relationship with his family and returned home to manage the business when his father suffered a heart attack.
I saw an interesting and warm man beneath Martin’s arrogant exterior, and he knew I liked him. He arrived at our final session with a small, neatly wrapped gift and insisted I open it immediately. Inside the box was a dried cannabis bud resting on a royal purple, velvet pillow. It looked like a giant, withered, alien Brussels sprout (Figure 2.1). Martin proudly pronounced, “This is the best bud I have ever found, with a really spiritual high. You should know about it.”
I told Martin I appreciated the thought and knew he was giving me something precious, then closed the box and placed it back on the end table next to his chair. I handed it back to him as we shook hands when he left my office, but then I found it on the floor just outside my door when the next patient arrived. As I picked up the box, I said, “Drug reps are always leaving me little trinkets,” but I knew Martin’s gift was more heartfelt than the usual swag left by drug companies. I eventually encased the bud in a plexiglass cube and set it on my bookshelf beside the Freud action figure another patient had given me.
The History Behind Today’s Cannabis
The dried bud Martin ceremoniously presented to me descended from a unique plant with a history that long predates humanity. Cannabis first emerged nearly 30 million years ago on the high-altitude, arid grasslands of Tibet, where it diverged from hops, its closest relative, best known for flavoring beer. Cannabis became unique among Earth’s vegetation by developing chemical compounds never seen before. Like all flowering plants, cannabis evolved into different varieties, some with revolutionary new chemistry and some without. Marijuana on My Mind focuses on those varieties that developed medicinal and mind-altering properties. The varieties of cannabis lacking this chemistry, called hemp, evolved strong, flexible fibers that humans have used for a wide range of utilitarian purposes for the past 10,000 years.
Hemp arrived in the New World 53 years after Columbus first landed, but it is less clear when the smokable, intoxicating varieties of cannabis were imported. What we call marijuana today was brought to Brazil by the Portuguese and to Jamaica and other Caribbean islands by kidnapped Africans. The British also imported cannabis, primarily to pacify their slaves. It then arrived in the USA along four primary routes. Patent medicines from pharmaceutical companies containing cannabis extracts were popular through the mid- to late 1800s. Many Americans had their first puff of hash at the exotic Turkish Village in the 1893 Chicago World’s Fair. Marijuana also arrived with sailors, Caribbean migrants filtering into New Orleans, and asylum seekers fleeing Mexico’s violent revolution in 1910, primarily through El Paso. All the derogatory racial stereotypes that white Americans held regarding brown and Black people were quickly ascribed to marijuana as well. Its use by despised and feared minorities was alien to white American culture and reinforced prejudice against immigrants. Smokable cannabis and racism were intertwined from the very beginning, until white youth cracked the mold in the 1960s.
Before the Mexican-Spanish word “marihuana” first entered English usage, “cannabis” and variants of “hash” were the only terms used. Bristol Myers Squibb and Eli Lilly listed cannabis and cannabis extracts as ingredients in their medicines during the 1800s. The word “marijuana” was later popularized in racially derogatory stories about Mexican refugees. When Pancho Villa and his bandoliered men briefly invaded New Mexico in 1916, they openly flaunted their pot use as they sang creative verses of “La Cucaracha” that included cockroaches smoking marijuana. But Pancho Villa’s greater crime was his support of land reform, which wrested 800,000 acres of timberland from newspaper magnate William Randolph Hearst during Mexico’s uprising against foreign capitalist control. Hearst, who owned 8 million acres of Mexican land, used his media empire to strike back against the rebels by luridly sensationalizing both the Mexicans and marijuana.
In 1920, the USA joined Norway, Finland, and Russia in banning alcohol. This grand experiment of prohibition failed and was reversed in 1933, both because of the public’s widespread disregard for the law and because of state governments’ thirst for new tax revenue during the Great Depression. The Federal Bureau of Prohibition’s assistant commissioner, Harry Anslinger, had become commissioner of the new Bureau of Narcotics in 1930. He initially had little interest in leading a campaign against what he saw as a mere weed; he insisted that cannabis was not a problem, did not harm people, and that “there is no more absurd fallacy” than the idea that it makes people violent. His mind changed with the deepening economic depression and the end of alcohol prohibition. Fears that Mexican migrants might take scarce jobs intensified racial prejudice, which combined with bureaucratic mission creep once alcohol was relegalized. William Randolph Hearst paved the way for Anslinger’s conversion into an antimarijuana crusader by publishing a steady stream of stories about the evils of marijuana that reached 20 million daily readers during the 1930s. Hearst needed to sell newspapers, and racially charged descriptions of violent and sexual crimes caused by this killer weed sold very well.
© Cambridge University Press 2022
The following excerpts are adapted from the author’s latest book, Winston Churchill – His Times, His Crimes, published by Verso Books, the largest independent, radical publishing house in the English-speaking world, publishing one hundred books a year.
Life: enough of this poetry
We need hard, harsh prose;
Silence the poetry-softened noises;
Strike with the stern hammer of prose today!
No need for the tenderness of verse;
Poetry: I give you leave of absence;
In the realm of hunger, the world is prosaic
The full moon is scorched bread.
Sukanta Bhattacharya, ‘Hey Mahajibon’ (O, Great Life) (1944)
During the interwar period India was in a state of continuous turmoil. The reforms of 1919 – which had promised increased political participation of Indians in government but denied them power – were regarded by most Indians as ill-intentioned and offering very little. In Parliament in 1917, Edwin Montagu, the secretary of state for India, had declared ‘the gradual development of self-governing institutions, with a view to the progressive realization of responsible government in India as an integral part of the British Empire’. The result was a build-up of pressure from below.
The British Empire clearly faced a choice: it could grant India dominion status or it could rule largely through repression. The failure to grant the first necessitated the second.
Among the Pashtuns, Punjabis, Bengalis and Malabaris (in present-day Kerala), mass movements and terrorism on the pre-revolutionary Russian model were on the rise. Peaceful marches were violently broken up by the police. The 1919 massacre in Jallianwala Bagh in Amritsar is the best known, but there were others. The Moplah peasant uprising in Malabar in 1921 was deliberately misinterpreted by Raj ideologues. The Chittagong Armoury Raid in April 1930 was an audacious attempt to seize police and auxiliaries’ weapons and launch an armed uprising in Bengal. The raiders were revolutionaries of various sorts, united by the belief that only an armed struggle inspired by the Easter Rising of 1916 (they called themselves the IRA: Indian Republican Army) could rid them of the British. The plan was to take government and military officials hostage in the European Club where they hung out after work, seize the bank, release political prisoners, destroy the telegraph offices and telephone exchanges and cut off all railway communications.
They partially succeeded, but could not capture the British officers and civil servants. It was Good Friday. The European Club was empty. Despite this, the main leader of the uprising, Surya Sen, assembled his forces outside the police armoury, where he took the salute as IRA members (numbering under a hundred) paraded past him. They hoisted the Indian flag and declared a Provisional Revolutionary Government. The British swiftly took back control and guerrilla warfare ensued. The IRA was outnumbered. A traitor gave away Sen’s hiding place. He was captured, tortured and, together with another comrade, hanged. Other prisoners were packed off to the Andaman Islands.
In Lahore, the capital of the Punjab, a twenty-two-year-old, Bhagat Singh, who hailed from a staunch anti-imperialist family, decided with a handful of supporters to carry out two missions, in 1928 and 1929. The aim of the first was to assassinate the British police officer who had badly beaten up the nationalist leader, Lala Lajpat Rai, at a demonstration in Lahore. But they shot the wrong police officer. The second was to throw a few bombs into the Central Legislative Assembly in Delhi when it was empty. Bhagat Singh declared they did so because they wanted the noise of the blast to wake up India.
In prison he became a communist and wrote that terrorist tactics were not useful, but he refused to plead for mercy. Gandhi half-heartedly spoke on his behalf to Lord Irwin, the liberal Viceroy, but was rebuffed. Bhagat Singh and two comrades, Sukhdev Thapar and Shivaram Rajguru (all members of the tiny Hindustan Socialist Republican Association), were hanged in Lahore Jail in 1931.
There were similar events on a lesser scale elsewhere, and peasant uprisings too, the largest of which, in modern Kerala, shook the landlords and their British protectors. The peasants were mainly poor Muslims. They were defeated and the leaders of the revolt despatched to the Andamans for fifteen years. In 1935, the British realised the seriousness of the situation and passed a second Government of India Act through the House of Commons.
Churchill was vehemently opposed to the new law but was out of office. The Act provided for a controlled provincial autonomy, with the governors in each province holding reserve powers to dismiss ‘irresponsible’ governments. The tiny franchise was somewhat enlarged, and in 1937 the dominant Congress Party virtually swept the board in provincial elections, with the crucial exceptions of the Punjab and Bengal where secular-conservative, landlord-run parties obtained majorities.
Within two years of these elections Britain was at war. The Congress leaders, astounded that they had not been consulted before India was dragged into the war, instructed all their provincial governments to resign in protest and refused to offer support for the war. All this confirmed Churchill’s prejudices. He simply refused to grasp Indian realities.
The volume of protests and resistance from the end of the First World War till the late thirties had been rising with each passing year. Gandhi himself, in his South African phase, was a staunch Empire-loyalist. His view that ‘the British Empire existed for the benefit of the world’ neatly coincided with that of Churchill, and the Indian lawyer was not in the least embarrassed at acting as a recruiting sergeant during the First World War. He moderated these views when he returned to India and reinvented himself as a political deity. He was happy to mobilise the masses, but on a ‘moral level’; he would leave statecraft to the politicians, mainly Nehru and Patel, though when they needed his imprimatur at moments of crisis (Partition and the Indian occupation of Kashmir), he always obliged.
Gandhi’s decision to make the Congress a mass party by appealing to the vast countryside had increased its size and political weight. In an overwhelmingly Hindu country, Gandhi had used religious symbols to mobilise the peasantry. This began to alienate Muslims, and since the Brahmins dominated the Congress leadership, the ‘untouchables’ knew their grievances would never get a hearing. Despite this, Gandhi, Patel and Nehru built a formidable political machine that covered the whole of India. The 1937 elections demonstrated as much, and it’s worth pointing out that in the North-West Frontier Province bordering Afghanistan, the predominantly Muslim Pashtuns had voted for the Congress Party as well.
The decision to take India into the Second World War without consulting its only elected representatives was yet another avoidable error on London’s part. The British underestimated the change in mood among the masses and some of their leaders. Had they consulted Gandhi and Nehru, offering them a fig-leaf to support the war, things might have panned out differently. The Congress leaders felt they had been treated shabbily and, after internal discussions that lasted a few months (revealing a strong anti-war faction led by the Bengali leader, Subhas Chandra Bose), they opted to quit office.
The British Viceroy immediately began to woo the Muslim League, and vice versa. The League’s leader, Muhammad Ali Jinnah, gave full-throated backing to the war, as did the conservative pro-British elected governments in Punjab and Bengal.
When, on 22 December 1939, the Congress Party announced its decision to resign and did so a week later, Jinnah declared that henceforth 22 December should be celebrated as a ‘day of deliverance’ from Congress rule. Ambedkar, the ‘Untouchables’ leader, provided strong backing, saying he ‘felt ashamed to have allowed [Jinnah] to steal a march over me and rob me of the language and the sentiment which I, more than Mr Jinnah, was entitled to use.’ Surprisingly, Gandhi also sent his congratulations to Jinnah for ‘lifting the Muslim League out of the communal rut and giving it a national character’. Little did he know where this would lead.
Emboldened by the emergence of an anti-Congress minority, the Viceroy, Lord Linlithgow, expressed some optimism:
In spite of the political crisis, India has not wavered in denunciation of the enemy in Europe, and has not failed to render all help needed in the prosecution of the war. The men required as recruits for the Army are forthcoming: assistance in money from the Princes and others continues to be offered: a great extension of India’s effort in the field of supply is proceeding apace.
With this in mind, Linlithgow was confident he could survive the storm. When the Congress ministers resigned en masse, the Viceroy ordered the arrest of the party’s leaders and activists. They were released in December 1941 as the British attempted to reach some accommodation. Gandhi was carefully studying the development of the war in Europe as well as Japanese moves closer to the region, and wondering whether the British might be able to hold out. He was not yet sure. The local impact of Operation Barbarossa was the release of imprisoned Communist Party leaders and militants, who now came out openly in support of the war. Gandhi continued to wait. It was the humiliation inflicted on the British in Singapore in February 1942 that led to a change of course. The Congress leaders began to think about calling for a Quit India movement and, in this fashion, declaring their own (partial if not complete) independence from the British. Gandhi had engineered Bose’s isolation within the Congress, but he was equally critical of Nehru’s anti-Japanese militancy. Nehru had suggested that Congress should organise armed militias to fight against the Japanese were they to take India. Gandhi reprimanded him strongly: Nehru should not forget that Japan was at war with Britain, not India.
In contrast to Gandhi’s handwringing and delays, the Bengali Congress leader, Subhas Chandra Bose, always deeply hostile to the notion of offering any support to the British war, went on the offensive. Of the entire Congress high command, he was the most radical nationalist. He began to work out a master plan that owed more to the organisers of the Chittagong Armoury Raid than to Gandhi. Bose did not believe that peaceful methods could prevail. They were fine at certain times, but the situation was now critical. Britain had insulted India by taking its young men away once again to fight in inter-imperialist wars. Bose wanted to create an Indian National Army and began to explore all possibilities.
In 1942 Churchill agreed that Sir Stafford Cripps, the left-wing former ambassador to Moscow, be sent to India to meet with Nehru, Gandhi and other leaders and plead with them to help Britain. If they agreed, he could offer a verbal pledge of independence after the war. However, before Cripps could depart, bad news came from South-East Asia: Singapore had fallen. Churchill blamed the men in the field. The British Army had not fought back effectively: ‘We had so many men in Singapore – so many men – they should have done better.’ As stressed above, it was a huge blow.
Cripps arrived in India, but few were willing to listen to his message. Jinnah’s Muslim League and the Communist Party were backing the war, but so speedy was the Japanese advance that Gandhi genuinely believed they might soon be negotiating Indian independence with Hirohito and Tojo rather than Churchill and Attlee. When Cripps insisted he was offering Congress a ‘blank cheque’ they could cash after the war, Gandhi famously riposted: ‘What is the point of a blank cheque from a failing bank?’
After Cripps returned empty-handed, Churchill pinned his hopes for a stable Indian army largely on Jinnah and Sikandar Hyat Khan, the leader of the Unionist Party and elected Premier of the Punjab, a province crucial to the war effort in terms of manpower and for being the granary of India. When, after Cripps’s return, Churchill said ‘I hate Indians. They are a beastly people with a beastly religion’, he was expressing a long-held view, but in this instance was referring to the Hindus who had badly let him down.
All rights reserved © Tariq Ali 2022