“Better that right counsels be known to enemies than that the evil secrets of tyrants should be concealed from the citizens.”
Time turns a page to mark a year since the re-emergence of the Taliban in Afghanistan through the Biden administration’s botched military withdrawal. The threat the Taliban now poses to Asia and the world is not a new creation but a neo-mandate of its former leadership. This is not to say that its leadership is weak or incapacitated: the same leadership is ultimately responsible for driving the Americans out while operating largely from Afghan cave systems. It is simply to say that there is a visible shift in Taliban strategy towards international acceptance and recognition.
During the pre-9/11 days of Taliban control, Afghanistan hosted a profusion of training camps run by al-Qaeda and other terror groups. During the Soviet occupation of Afghanistan in the 1980s, thousands of fighters from across the Muslim world flocked to the country to fight the Soviets. The young fighters who joined the Afghan Mujahedeen included Osama Bin Laden from Saudi Arabia and Abu Musab al-Zarqawi from Jordan, who would go on to found al-Qaeda and the precursor of the Islamic State respectively.
As passenger airliners flew into the World Trade Center towers on a September morning in New York, Bin Laden became one of the most consequential non-state actors in modern history. As the towers fell to dust, America fell to its knees, triggering the global War on Terror – an ongoing conflict that has, directly and indirectly, claimed millions of lives and shattered countless communities across the world.
Twenty years down the line, the US withdrawal from Afghanistan is a major political win for the Taliban, cementing their return to power in Central Asia. This return to power is not merely a Taliban comeback but a reinvigoration of the al-Qaeda alliance in the region.
With the Taliban’s phoenix-like rise to power, dozens of terror and non-terror groups across the world sent congratulations and praise – including Sri Lanka’s Tamil National Alliance. Naturally, groups like al-Qaeda and Islamic State cells are emboldened in their international politico-religious agendas, as Afghanistan has once again become a haven for threat groups. The highly unstable and ever-changing political situation in Afghanistan illustrates how tribes and government factions have often switched sides and backed terror groups to ensure their own survival.
The Taliban emerged from the Afghan Mujahedeen milieu in the early 1990s, presenting itself as a bulwark for traditional Pashtun culture in the chaos that followed the Soviet withdrawal. This initial stance cemented the Taliban’s popularity among the Pashtun people for decades. However, the group’s historic ties, familial relations and shared outlooks with other militant groups gradually transformed it into a jihadist organisation. Salafi-Wahhabi ideologies emanating from the Gulf penetrated Taliban ranks, overshadowing its Pashtun roots with fundamentalist and violent Islamist perceptions.
Although twelve months have passed since the rise of the Taliban, the world has not seen the violent consequences of Biden’s failure – yet. A momentary glimpse of the boiling pot came when it was revealed that Ayman al-Zawahiri had taken refuge in Taliban-controlled Kabul. Al-Zawahiri was al-Qaeda’s most consequential leader after Bin Laden was shot in Abbottabad, Pakistan eleven years ago. The very fact that al-Zawahiri was given refuge in a villa belonging to Sirajuddin Haqqani, a deputy leader of the Taliban government, underlines the threat that the Taliban poses to the world at large.
The Taliban in itself may not necessarily be a threat to global security, as its neo-mandate appears to focus on national governance and international recognition. The group’s return to power, however, creates a black hole in Afghanistan that functions as a haven where other terror groups can train, regroup and consolidate. Groups like Lashkar-e-Taiba threaten regional security, especially in India, while groups like al-Qaeda threaten global security. Both operated training camps during the Taliban’s previous phase of power and are likely to run camps under the new Taliban.
The Taliban and al-Qaeda were linked to the killing of Maldivian journalist Ahmed Rilwan Abdulla in 2014 and have sown seeds of discord in that country since the 1990s. Terrorist networks in South Asia do not stop at borders; they transcend them easily. This is especially true of global movements like al-Qaeda and the Islamic State brand of terrorism.
The Taliban/al-Qaeda alliance and the Islamic State, however, are rivals. Although Salafi Wahhabism has infiltrated the ranks of the Taliban, the top leaderships of the Islamic State and al-Qaeda have deep-rooted, long-standing disputes. Operating locally as Islamic State Khorasan Province (IS-KP), the Islamic State has mounted increasingly frequent attacks on the Taliban in a tug-of-war for power, dominance and authority in the region and among the population. The two groups frequently wage propaganda campaigns against each other that divide and sow discontent.
Two of the deadliest Islamist terror attacks in South Asian history are tied to the Islamic State. The 2019 Easter attacks killed more than 250 people in Sri Lanka and constituted the deadliest IS-linked attack outside Iraq and Syria, while the 2016 Dhaka terror attack killed 22 people. This is a clear indication of the propensity of local bad actors to adopt the Islamic State brand to gain political advantage and global recognition for their attacks. Earlier this year, Indian authorities arrested two terrorists belonging to the al-Qaeda-affiliated Ansar Ghazwat-ul-Hind in the northern state of Uttar Pradesh. The two men, according to the arresting officers, were planning deadly attacks in the state capital, Lucknow. In the same month, three people were arrested in connection with setting up terror networks in Kolkata. The overall risk of the Taliban and its affiliates inspiring regional conflict is significant and growing.
Many high-ranking Islamic State officials cite South Asia as an important region for their activities. Even though they have enjoyed success of sorts in the form of individual terror attacks, they have not yet gained a strong foothold there. Since the decimation of the IS caliphate in the Middle East, IS has not been able to appoint a charismatic leader, build a strong chain of command or sustain coordinated operations in South Asia. However, after the US killing of al-Qaeda’s al-Zawahiri in Taliban-controlled territory in July 2022, the possibility looms of a temporary truce among al-Qaeda, the Taliban and IS. If this fusion transpires, the threat to global security will rise significantly.
As the US shifts its foreign policy focus from the Middle East to the Indo-Pacific, intense conflict and deep-rooted crises could materialise within South Asia. With the Taliban firmly ensconced in Afghanistan and freed from the pressure the United States previously applied, this possibility grows stronger. Training facilities, recruitment efforts and offensive staging capabilities could all be protected under the terror ecosystem being redeveloped in Central Asia.
This is of course coupled with the ignominious failure of the Biden-Harris administration in abandoning billions of dollars’ worth of state-of-the-art equipment – something that now gives the Taliban and its allied terror factions greater capacity to launch mid- and high-level operations across South Asia. This equipment has made the Taliban a force to be reckoned with.
Sri Lanka, at present, is a figurative sitting duck amidst a massive geopolitical power play between the US, Russia, China and India, while the threat of terrorism looms from the black hole in Central Asia. A unified Islamic State and Taliban/al-Qaeda alliance would spell doom for South Asia and other regions of the world. Establishing intelligence-sharing mechanisms among regional and international agencies would significantly reduce the threat emanating from Afghanistan. Likewise, strict monitoring of online spaces, especially social media and chat rooms, is paramount to a strong defence against an ideologically charged terrorism threat. Sri Lanka must brace herself for impact.
The post-war Sri Lankan state’s inability to fulfil the demands of global finance capital has devastated the lives of millions of Sri Lankans. This is a country that has seen, for more than four decades, the socio-economic impact of a new period of capitalist transition emphasising the private sector, markets and openness to global capitalism. The other dimension with a wide-ranging impact on society has been three decades of a military strategy to consolidate the territory of the centralised Sinhala nationalist state. This began in 1979 with the enactment of the Prevention of Terrorism Act and the dispatch of troops to the North, and was achieved in 2009. A full understanding of the social impact of this dimension needs much more research. If we add the impact of COVID and the economic crisis, we get the full picture of the socio-economic issues that people within the Sri Lankan state are facing. Of course, the impact of these issues is mediated through the social structure. Therefore the socio-economically marginalised population faces a worse situation, and its condition is bound to deteriorate further.
The media is full of analyses and answers to these problems. But as one of my favourite critical theorists, Robert Cox, who worked in international political economy, observed, all analysis is done for someone and for some purpose. There is no politically neutral analysis. The bulk of the discussion is geared towards restoring capitalist growth. Often this is accompanied by a desire for what is called political stability. But advocating political stability without defining how it is to be achieved, in a country that has seen thousands of deaths through state repression, is not only dangerous but downright reactionary. Rather than political stability, we need to talk about political legitimacy. A regime with a greater degree of political legitimacy can lead the development of a new social contract, which is essential for Sri Lanka to face the current situation. This note is aimed at the social and political forces that have begun to challenge existing orthodoxies in the current context, in protests that cut across ethnic groups. It aims to combine social justice and pluralism and to point towards new areas where progressive politics can be strengthened.
To begin, we need to consider the impact of more than four decades of this more liberal period of capitalist transition. The bulk of the economy is concentrated in the Western Province, which was better endowed to benefit from the new directions in the economy. Central Bank data show that in 2019 this area accounted for around 39 per cent of total national output. According to the Household Income and Expenditure Survey of 2019, which managed to cover the entire territory of the state because it had been unified through military means, 11.9 per cent of households in Sri Lanka live below the official poverty line. There is a great degree of variation in this indicator between districts. While Colombo district had 1.8 per cent of households below the poverty line, in Mullaitivu district in the Northern Province the figure was as high as 39.5 per cent. This area has also been affected by three decades of armed violence. In reading these data it is necessary to remember that the poverty line, as defined by the state, measures the basic minimum a household needs for survival, although it is propagated as a great achievement in development. What these figures show is the proportion of the population that could not secure even this basic minimum.
With the deepening of capitalist relations of production, there have been significant changes in the agrarian sector. The share of agriculture in the economy has declined significantly: in 1977, 30.7 per cent of national output came from agriculture; by 2019 it had fallen to 7.0 per cent. There has been a gradual deterioration in the viability of smallholder paddy farming. The 2019 Household Income and Expenditure Survey shows that only 8.6 per cent of income in the rural sector came from agriculture. Further capitalist reforms backed by international actors will try to promote market relations in state land. This will make it even more difficult for the smallholder peasantry to earn a living from their land.
The other side of this rural transformation is the growth of a population depending on wages. The growing working class is found in multiple socio-economic formations – organised, informal, sub-contracted, etc. A significant section of this labour force is women. Some sections of the working class sell their labour in other countries. While the working class has grown, institutions that protect its rights and working conditions do not operate in many sectors, and what existed in the past has been gradually dismantled. The effectiveness of these institutions depends a great deal on the presence of trade unions, but the working class is not organised in all sectors. On the other side, business interests will continue to try to dismantle the remaining institutions that protect the rights of the working class.
In a situation where wages become a major source of income, education and skills development become a critical avenue for social mobility. Since the state monopoly on education was broken in the new period of capitalist transition, the role of the private sector in education has expanded. This has become a new avenue through which richer classes can secure an education for their children. In addition, the state sector is not an equal system. Therefore, both private education and state education give the richer sections of the population more opportunities to provide a quality education for their children. The cumulative effect of these changes has been the growth of a significant level of inequality. Data for 2019 show that while the richest 20 per cent of the population acquired 51.4 per cent of national income, the poorest 20 per cent had only 4.6 per cent.
The answer to these socio-economic issues from those whose main agenda is restoring capitalist growth is the same old idea of protecting vulnerable groups that we heard 40 years ago. The foundation of this idea is the notion of growth and trickle-down. Sometimes these policies are called targeted safety nets. The argument is that such policies safeguard the poor from the impact of capitalist reforms; this is supposed to be the main role the state should play vis-à-vis the poor, while in the long run economic growth takes place and the benefits trickle down. The analysis that underpins these ideas always focuses on households in isolation from the structures of socio-political power that keep this population in this condition in the first place. It therefore takes us away from the need to tackle the causes of marginalisation.
During the new period of capitalist transition that emphasised markets, the private sector and openness to global capitalism, not all sections of the Sri Lankan population accepted these ideas of safety nets propagated by the political elite and their international backers. That is why there have been struggles in various sectors – the urban working class, the plantation working class, sections of the smallholder peasantry, fisheries, etc. – to improve living conditions by challenging the structures of power that maintained the existing social relations of production. The entry of women’s groups into these struggles added a new dimension. Of course, there were setbacks, such as the state repression of the July 1980 strike. What is needed at present is to take a close look at this experience, learn its lessons and look for possibilities of reviving these struggles.
Finally, it is necessary to pay attention to the austerity measures that are sure to follow an agreement with the IMF. In approaching this question, it is important to remember that what we have is a post-war state. The result of more than four decades of the new period of capitalist transition and three decades of armed conflict has been the growth of the armed forces and a proliferation of state institutions at several levels. Today the institutional structure of the centralised Sinhala nationalist state has institutions at presidential, parliamentary, provincial, district, sub-district and local authority levels. Almost all these levels include elected members and a bureaucracy. In addition, institutions of the central state have undergone numerous divisions, partly driven by the need to maintain coalition regimes and large cabinets: the strategy of the political elite has been to divide state institutions and distribute them among coalition partners. The state is bound to prioritise resources for these institutions when implementing austerity. The objective of progressive sections should be to focus on how policy changes will affect the marginalised, and to counter possible negative impacts.
If we are to go by past experience in Sri Lanka, the politics of economic reforms to ensure capitalist growth need not be peaceful. We need to remember that the state has draconian laws and a well-developed security apparatus to meet any challenge to these reforms from society. As in the past, the political elite is more likely to use this repressive apparatus to achieve its own political objectives than to aim at political legitimacy through a new social contract.
As pointed out at the beginning of this note, policy discussions on social justice should not ignore the political demands raised by ethnic minorities throughout the post-colonial period. While the political elite inaugurated the new period of capitalist transition after the 1977 elections, the Tamil minority demanded a separate state in the same election. One response of the centralised Sinhala nationalist state that presided over this transition was to enact the Prevention of Terrorism Act (PTA), establish a discourse of terrorism and send troops to the North-East. This military effort lasted 30 years, and in 2009 the territory of the centralised state was consolidated. But none of the major issues between minorities and the centralised Sinhala nationalist state has been resolved. In fact, in some areas the situation has worsened; for example, the Muslim population has become a target of extremist violence.
At present, some of the key elements that ensure the security of the post-war state are maintaining the strength of the armed forces, continuing their presence – especially in the Northern Province, at a level higher than when Tamils demanded a separate state – and keeping the PTA on the statute books. In this context, various activities under the title of reconciliation become an element in stabilising the post-war state.
The focus of reconciliation is society, rather than the nature of the state and state-society relations. Although there is an element of prejudice and animosity between identity groups in Sri Lanka’s conflict, these exist in the context of a centralised Sinhala nationalist state. Ignoring this within the discourse of reconciliation means ignoring the need for fundamental state reform – of its identity, public policies and structure – to suit a multi-ethnic society. What we need is not a nation-state with a unified identity, but a state that has space for multiple identities. Its structure and public policies have to fit this vision.
Given the socio-economic impact of the economic crisis, which is affecting all ethnic groups at present, there is space for a strategy in which diverse ethnic groups come together to struggle for common socio-economic rights. This can give a new meaning to reconciliation and combine social justice with a vision of a plural society. There are scattered examples of this happening already. Every year we see demonstrations by the mothers of the disappeared from the North (victims of the military strategy to consolidate the territory of the Sinhala nationalist state) and the South (victims of the state repression of 1989/90). There have been some links between them, and these can certainly expand. During my work in the North-East after the war ended, I saw people from a Tamil village and an adjoining Sinhala village come together to lobby state institutions to restore their land documents. Another example is Muslim and Sinhala villages coming together to lobby over the supply of water in irrigation canals. There can be many such examples to build upon. These are small examples, but in the current context there is space to build on this strategy.
To end this note, I would like to emphasise the need to get away from analyses that place class and ethnicity in isolated compartments when dealing with the problems of the marginalised in Sri Lanka. In this regard, my most instructive personal experience has been working with the plantation working class/Hill Country Tamils. There was no way one could separate ethnicity and class at the analytical or political level when working with this population. The struggle for political and socio-economic rights and for the right to a distinct ethnic identity had to go together. This is an experience we can learn from.
Views are personal
A nation is a community of people formed into one state in a particular territory on the basis of shared features such as language, history, ethnicity and culture, or a combination of several of these. All citizens of a nation should be able to live together peacefully and feel a sense of belonging to their country irrespective of individual or group differences of religion, race, region, culture, caste, etc.
Once national integration is achieved, individuals are likely to work together to build a system that enhances the nation’s and the people’s prosperity. The nation-building process in Europe commenced a few centuries ago and evolved gradually, in several phases, into viable governance units.
Colonisation by the West
After becoming strong nations, some European countries invaded, migrated to, and colonised other parts of the world. In turn, many people migrated to Western countries from those colonies looking for greener pastures, and the migrants became a cheap source of labour in Europe. But local communities looked down on them as inferior non-Europeans and marginalised them. Western scholars developed concepts such as co-existence, pluralism and multiculturalism to persuade local communities to accept the migrants. The governments of those countries also provided legal support to reinforce these concepts and maintain the cheap labour force.
Colonialists competitively established governance units (colonies) in the Third World according to their ability to capture territories and parts of territories, irrespective of historical factors, languages, ethnicities, etc. Sometimes, one territory of a traditional leader may have been divided among several colonialists. Also, on some occasions, several governing units of traditional leaders may have been brought under one governance unit (colony). Such states were like baskets of potatoes. Colonial administration was the integrating factor, like the basket that kept all potatoes (traditional governance units) together. Once the basket is removed (independence), potatoes start rolling away in different directions (separatism).
In the 20th century, while responding to independence movements, the colonial masters did not attempt to carve out or amalgamate governance units according to the citizens’ wishes in terms of culture, history, religion, language, etc. Therefore, in many instances, the newly independent nations were creations of the colonialists, not of their citizens according to historical and cultural factors. Consequently, many newly independent nations struggle to separate or amalgamate in order to align with their customary identities. Under this scenario, nation-building in Asia, Africa and parts of South America became highly complex, leading to enduring conflicts between and within countries.
The Case of Sri Lanka
After the fall of the Polonnaruwa kingdom, Lanka did not have a stable central government to face the South Indian invasions. The seat of the kingdom shifted to more secure areas such as Dambadeniya, Kurunegala, Gampola, Kotte and Kandy to avoid such attacks. During these political instabilities, the Sinhala population also moved to the southern and hilly regions, abandoning the centre of the ancient Sinhala Buddhist civilisation.
During the Kotte period, regional kings and rulers became independent from time to time; Jaffna was under an independent ruler on several occasions. At the time of the Portuguese arrival in the 16th century, there was no strong central government in Lanka, and regional rulers were fighting each other to augment their territories. The Mahavamsa tradition, however, has attempted to show that the kings of the kingdoms mentioned above were “All Island Kings” and the others regional rulers, reiterating the concept that Lanka is one nation, one country and one state. Against this backdrop, the Portuguese were able to capture the entire low country from the regional kings and rulers. However, they could not take the Hill Country (the Kandyan Kingdom) because of its strategically defensible location.
With the consolidation of power by the Portuguese, Lanka was effectively divided into two states: the coastal area of the Island as a Portuguese colony and the central hilly area as the Kandyan Kingdom. This two-state position lasted for about three centuries, until the British captured the Kandyan Kingdom and formed a single administration for the entire Island in 1815. In addition to the ordinary laws for the Island, the British introduced specific personal laws on subjects such as land ownership and marriage (marriage laws for the Hill Country and Muslims, land laws for Jaffna, etc.).
History of Sinhala-Tamil Conflict
There is no doubt about the existence of the Tamil community on the Island since antiquity. Throughout history, Lanka struggled to counter Tamil and other Dravidian invaders from South India. After the Chola invasion in the 11th century, the Tamil population in Lanka could well have increased. But there is little evidence of ethnic or religious conflict, hate or discrimination against Tamils by the Sinhalese.
After the collapse of the two ancient kingdoms, those territories became a sizeable contiguous forest, pushing the Sinhalese to the South and the Hill Country and most Tamils to the Jaffna peninsula. This forest functioned as a buffer zone separating the Sinhala and Tamil communities. As subsistence agricultural societies, the two communities had no competition or conflict over natural resources such as water or land.
There is not much evidence that the wars of the Sinhala kings were against Sri Lankan Tamils. They were more likely against South Indian invaders such as King Elara, the Cholas and the Pandyans. Otherwise, there could not be a substantially large Tamil community in Sri Lanka today. It means Tamils were not hated or considered aliens by Sinhala society.
Also, there was no resentment towards or threat to the existence of Hinduism from Buddhists; Buddhists used to respect Hindu gods without hesitation. Muslim immigrants, mixing with Sinhalese and Tamils, filled the vacuum in national and international trade needed by both communities. Hence there was no discrimination against Muslims from either Tamils or Sinhalese. There was no challenge to the co-existence of the three major communities, or to ethnic harmony in Sri Lanka, until the Portuguese conquest of the coastal areas.
Except for the Kandyan Kingdom, the rest of the Island was ruled by the Portuguese and the Dutch as one state. The British conquered the Kandyan Kingdom in 1815 and brought the entire Island under their rule as a colony named Ceylon in English, Lanka in Sinhala and Illankai in Tamil. In 1948 Ceylon became an independent country with Dominion status. In 1972 it became a republic under the single name of Sri Lanka in all languages.
The Soulbury constitution and the two subsequent Republican Constitutions of 1972 and 1978 did not provide a robust and broad framework to promote national integration and build Sri Lanka into a cohesive nation. Instead, they sowed the seeds of ethnic disharmony and national disintegration. Consequently, even after seven decades of independence, Sri Lanka still struggles to become a sustainable nation.
Since 1978 the constitution has been amended, or amendments have been submitted, on 21 occasions. Most of these amendments served not the broad public interest but the narrow interests of individuals, families and groups. The most critical issues, such as national integration, good governance and genuine fundamental rights, have been swept under the carpet. Now the government is contemplating the 22nd Amendment.
But the proposed amendment also seems to address the vested interests of powerful groups instead of removing major evils and incorporating the elements required to heal the wound. This article discusses many aspects that legally and constitutionally need correction, although some of these may be politically difficult. I hope many of these proposals can be implemented by the present deformed government, because the time has come to set personal and group agendas aside and do what is most needed to become a sustainable nation. Sri Lanka can invest many resources expecting national prosperity, but all of it may prove futile, as in the past, if there is no national integration. Reasons and justifications for these proposals are not given here, as the intention is to keep the article brief.
Recommendations and Suggestions
All private laws providing special privileges to, or discriminating against, any ethnic or religious group must be abolished and brought under the country's common law. However, citizens must be free to follow their cultural and religious practices and customs within their communities without affecting the freedom of others.
Priests of any religion should be banned from contesting provincial and general elections, and religious leaders should be prevented from engaging in party politics.
Political parties promoting any religion, ethnic group, caste, or profession in a way that leads to social fragmentation should not be registered as political parties. All political parties must have only national objectives in their policies. Existing political parties with such narrow interests and policies should be required to change them in line with the national interest.
All discriminatory activities, such as hate speech, publications, and the use of any media to insult a religion, language, ethnicity, or caste, should be prohibited by law. An Anti-Discrimination law must be introduced.
The use of ethnicity in official documents such as birth certificates and identity cards, in place of nationality, should stop. Nationality in any document should be indicated as Sri Lankan, not as Sinhala, Tamil, Muslim, etc. If necessary, the parents' religion and language may be shown on the birth certificate, because a newborn child has no religion or language.
Customarily, in the Sinhala language, "JATHIYA" means Tamil, Sinhala, Muslim, or Burgher; it does not imply the nation or nationality (Sri Lankan). This deep-rooted connotation cannot now be changed. Therefore, without disturbing Sinhala sentiments, a new Sinhala word must be invented for the English words 'nation' and 'nationality', instead of using the word JATHIYA for both the ethnicity and the nation. If necessary, the same should be done for the Tamil language as well.
Relevant clauses of the Prevention of Terrorism Act, the Public Property Act, etc., should be amended or repealed to prevent the detention of the accused for extended periods in order to satisfy narrow political aspirations or cover up the inefficiency of law enforcement authorities. The maximum period of detention should be specified in law, and if the police fail to frame charges within that period, the accused must be released on bail.
Constitutionally, a system of inter-dependency between the three levels of governance (central government, provincial councils, and local authorities) should be introduced to ensure an indivisible Sri Lanka and to prevent conflicts between the representatives of these three levels. That may prevent personal interests from overriding local, provincial, and national interests, as is highly visible under the present system. For instance, instead of conducting a separate election for the provincial councils, chairpersons and mayors of local government institutions in the province could become ex-officio members of the provincial council. The governor could appoint the member who commands the support of the majority in the council as Chief Minister. Under this arrangement, local authorities become part and parcel of the provincial council. To make devolution more reliable and robust, provincial chief ministers should be made ex-officio members of the cabinet, representing provincial interests. Such a system may pave the way for national integration and do away with the arguments over Federalism vs. the Unitary system. All three tiers would be interwoven, leaving no room for separation.
While the central government designs national programs, implementing all divisible programs and projects should be the responsibility and right of the provincial councils and local authorities. The central government should implement only the indivisible programs. To reflect these requirements, the necessary amendments to the constitution and election laws must be effected.
The establishment of new religious places such as temples, mosques, churches, and kovils, the planting of Bo trees, and the erection of statues in public places must be regulated under the laws on the environment, physical planning, archaeology, etc., according to approved parameters, with due consideration of the need for such new places.
In addition to existing ones, laws must be introduced to prevent the introduction of new religions, which could lead to further social fragmentation.
Conversion from one religion to another through unethical inducements such as cash or material bribes or the offer of privileges should be banned. However, conversion based on understanding, knowledge, and education must be allowed.
A national program is necessary to impart knowledge of the basic principles of all religions among priests, enabling them to reach an interfaith understanding.
In addition to their own faith, all students must acquire general knowledge of the other accepted religions in the country, enabling them to avoid religious conflicts and gain interfaith understanding. The syllabus of religious studies must be prepared to accommodate this requirement.
When the history of Sri Lanka is analysed carefully, it is possible to assume that Buddhist archaeological sites in the North and East could be the ancestral Buddhist assets of Tamil-speaking people who were converted to Hinduism or Islam through Indian invasions and co-habitation in those areas. Therefore, Sinhala Buddhists in the South should not claim these as solely Sinhala Buddhist heritage. Tamil-speaking Hindus and Muslims must be encouraged to preserve and protect them as their own ancestral heritage.
The government should stop establishing separate ministries for each religion and culture. One Ministry of Religious and Cultural Affairs is sufficient. Also, the government's assistance to religions should be rationalised to prevent discriminatory treatment of some religions.
All religious schools must be registered under the Ministry of Religious Affairs and regulated (curriculum development, examinations, certificates, teacher qualifications, physical facilities, etc.). Religious schools should not be an alternative to the general education system of the country.
The naming of government schools after religions, ethnicities, or languages (Sinhala College, Muslim College, Tamil College, Hindu College, Buddhist College, Catholic College, etc.) should not be allowed in future. Schools must be named according to the school classification of the Department of Education. If a school is named after a religion or a language, it must teach only that religion or language, regulated by the Ministry of Religious Affairs.
Today, English is taught as a subject from grade 3 in Swabasha (vernacular) schools, but teaching English as a subject (the link language) must commence from grade 1. An adequate number of English and second-language (Tamil or Sinhala) teachers in all schools in the country is a must.
Learning English, Sinhala, and Tamil as subjects must be compulsory for all students up to GCE (O/L). Today, in government schools, a second language is taught only for reading and writing. High priority must also be given to training in speaking, which is very important for inter-lingual interaction and employability in any part of the country.
The medium of instruction should change to English from grade 5 or 6, step by step over 10-12 years, while the two vernaculars are taught as subjects in secondary education and universities.
While moving to English as the medium of instruction, an attempt must be made to form classes of multi-ethnic, multi-religious students in schools.
School textbooks and curricula must be developed and revised to prevent the proliferation of anti-religious and racist views and to promote co-existence and the concepts of Sri Lanka and Sri Lankans.
Along with the above changes in the education system, the possibility of making proficiency in a second language an entry qualification for the public service should be considered, in several steps.
Accept English, Sinhala, and Tamil as national and official languages with equal status, and allow national and subnational government institutions to select two languages (English and either Sinhala or Tamil) as official languages, with due consideration to social and cost factors.
However, National level agencies must be equipped to communicate in all three languages.
The appointment of an adequate number of translators, especially Sinhala-Tamil translators, to government offices is a paramount need under the prevailing circumstances. All universities in the country can introduce language streams to produce tri-lingual graduates (Sinhala, Tamil, and English) to fulfil the need for qualified people. The government must guarantee the employment of tri-lingual graduates as translators and teachers.
The mass media has a critical role in building awareness and in inspiring and influencing the people, opinion leaders, and political leaders to achieve the goal of national integration. Owners and managers of media must realise their responsibility to the nation and society to provide clean, transparent, and reliable information. All media in the country must build an ethical agreement and understanding to refrain from spreading news, rumours, views, and concepts counterproductive to the objective of national integration.
The media must be used to promote the concepts of Sri Lanka and Sri Lankan, not the identities of ethnicities.
Spreading news and views detrimental to ethnic and religious co-existence must be banned under the country's law. The media should not use phrases such as 'a Sinhala man has been assaulted by a Muslim man' or 'a Tamil youth has been taken into custody by the US police', highlighting individuals' ethnicity.
As we are accustomed to using the word 'JATHIYA' for ethnicity throughout history, the media should not use the same word for the nation and nationality. A new Sinhala word may be invented and used for the nation and nationality.
All ethnic groups must be given opportunities proportionate to their share of the population in any new major irrigated settlement scheme.
In the Northern and Eastern Districts, lands occupied by the security forces that are strategically crucial for national security must be properly acquired under the Land Acquisition Act, with compensation paid to owners in line with the National Involuntary Resettlement Policy. All other lands must be released back to their original owners without delay.
People who have lost their land due to the civil war or natural disasters should be accommodated in new settlement schemes with irrigation and other infrastructure facilities to cultivate and live on. In such cases, all people who have lost land must be accommodated, disregarding population percentages.
Legal and constitutional provisions must be introduced to delegate limited authority on land matters to the provincial councils while keeping primary control with the centre.
Instead of commemorating the war victory, the objectives and arrangements of "Ranaviru Day" must be changed, enabling the whole nation to participate in commemorating the loss of the lives of their loved ones and celebrating the defeat of terrorism. It could be a day of National Integration, not Ranaviru Day.
The Indian Tamils should not be confined to the estates as bonded labour. Their access to the mainstream economy should be improved, and they should be helped to move into other areas for employment and living through higher education and diverse skills.
A program is needed to encourage, motivate, and support people of the oppressed castes in the North in upward socio-economic mobility and migration to other areas, enabling them to gain social recognition.
Serious consideration should be given to restoring the country's name to its pre-1972 status: in Sinhala it can continue as Sri Lanka, in English as Ceylon, and in Tamil as Illankai. Though the Tamils are more accustomed to the name Eelam, over the last three decades it has acquired a bad connotation among the Sinhalese.
Tamil speakers must be allowed to use Tamil words for the National Anthem without changing the meaning or the music.
Instead of excessive interference and threats, the UNHCR and interested external parties should respect the country's sovereignty, allow local systems to operate within the local cultural and ethical atmosphere, and support filling the gaps. They should encourage the Tamil diaspora and its leadership to find a solution within the country's social, economic, political, and historical background by working with the government and the Sinhala majority, instead of widening the ethnic gap and cultivating hatred. The government in power must also understand that the Tamil community cannot be cheated by camouflaged proposals and promises; it must be genuine and highly committed. There must be a separate ministry or a bureau under the president with sufficient powers and strength to drive and monitor all actions for national integration.
These proposals are brought forward to benefit all stakeholders interested in unity within a religiously, ethnically, and culturally diverse Sri Lanka. I propose that the mindset of all Sri Lankans should change, so that they think, speak, and behave as Sri Lankans while maintaining their religious, ethnic, and cultural identities without affecting the freedom of others. The right of citizens to live in any part of the Island and engage in a livelihood of their choice must be respected and accepted. Sinhala, Tamil, Muslim, Burgher, and Malay are not nations; they are all ethnic groups. "Sri Lanka" is the country, and "Sri Lankan" is the nation.
After independence, Sri Lanka sacrificed 74 years unproductively to countering and managing internal conflicts, years that could otherwise have been devoted to building a prosperous nation. Nowhere in the world has national integration been achieved by forcefully and entirely imposing the culture and interests of the majority on the minorities. In many countries, the majority has compromised many values and interests to allow the minorities to feel equal with others. Attempting to force minorities to comply with Sinhala Buddhist cultural values will make it difficult for them to live with the majority, and will in turn justify the demand for a separate country.
As such, compromise is much better for everybody, including the Sinhala majority, in order to avoid unwarranted external influences and stand as a sovereign nation. Leaders of the minorities must also realise that their problems cannot be resolved through international pressure alone, ignoring the government and the majority. Compromise will pave the way for a sustainable solution.
We start, though one never should, with an alternative universe: imagine if Gandhi didn’t really want freedom for India. Go further afield: imagine if Churchill didn’t want to win the war. Or if that avatar of modern evil, Adolf Hitler, was lukewarm about racial supremacy.
Now imagine if all of the above was conventional wisdom in real life. It doesn’t work, because it’s just not true — we have endless data that tells us otherwise, as well as the express words and deeds of these men, bent on doing the exact opposite.
Yet that same sense of obviousness isn’t always extended to Muhammad Ali Jinnah. Mountains of evidence are ignored, dozens of speeches forgotten, alternate universes imagined out of thin air.
But some universes merit deflating more than others — like the one where Jinnah never wanted Pakistan or Partition; where he never wanted, right to the end, a separate country at all.
Per this version, Jinnah was a poker player, with Pakistan no more than a bargaining chip — all he really wanted was a better deal for everyone in an undivided India. As explained by historian Ayesha Jalal, “Jinnah was from a province where Muslims were in a minority. He wanted to use the power of the areas where the Muslims were a majority, to create a shield of protection for where they were in a minority.”
Jalal’s theory is that the Pakistan demand was a bluff; a mere play for India as a loose federation. “The Lahore resolution should therefore be seen as a bargaining counter,” she writes, “which had the merit of being acceptable (on the face of it) to the majority-province Muslims, and of being totally unacceptable to the Congress and in the last resort to the British also. This in turn provided the best insurance that the League would not be given what it now apparently was asking for, but which Jinnah in fact did not really want.”
In sum, Jinnah didn’t want what he was saying he wanted, and wasn’t trying to do what he ended up doing. He was fighting for the rights of Muslims across India, and sought to leverage the strength of Muslim-majority areas into a kinder place for the overall minority. A string of perfect accidents later, Pakistan was born.
This idea caters to many constituencies — it strokes the conceit that Pakistan was an amputation of Mother India; it appeals to locals who believe separation a bad business from the beginning; and it denies Jinnah any responsibility for Partition.
Yet all the facts point elsewhere. It’s of course true that Jinnah desired a united India as a young man, before changing his mind. But the idea that he was secretly holding out for a better, tamer deal right up until separation is neither rich enough nor coherent enough to explain the creation of a country.
Nor is it novel: far from the daring new revisionism it’s made out to be, the theory that Pakistan was a botched trump card is very much an old British take, first muttered by the viceroy and his lieutenants after the Lahore Resolution. Thus, just a week after Jinnah made his bid for a separate state on March 23, 1940, British officials riled themselves up into an echo chamber of their own.
In his telegram to the viceroy, the chief secretary of the United Provinces huffed that the Pakistan plan was “merely put forward as a counter-demand to that of the Congress as a bargaining move.” The governor of Bombay piled on, “The best that any Muslim has said about it is that Jinnah cannot mean it and is only using it as a bargaining weapon.”
The viceroy, Lord Linlithgow, wrote back to London on 6 April with the same conclusion as his peers, “I am myself disposed to regard Jinnah’s partition scheme as very largely in the nature of bargaining … my impression is that … there is a good deal of feeling that it is bargaining in character.”
That Linlithgow’s arguments are the same as Jalal’s is unsurprising, given how much her work relies on colonial archives. “[T]he bargaining counter theory was a favourite one with British officials,” writes Indian historian Mushirul Hasan. “In fact, one is constrained to suggest that Ayesha Jalal has probably borrowed the idea from, among other sources, the British documents at the India Office Library.”
But the context should have been considered — that these were the private papers of men that had put down Jinnah as a dangerous gambler, and who felt the sun would never set on the Raj. Described by Nehru as “heavy of body and slow of mind, solid as a rock and with almost a rock’s lack of awareness,” Linlithgow thought India wouldn’t win its freedom for another fifty years, and that thanks to the wonders of air-conditioning, it was now possible for millions of Britons to settle in places like Dehra Dun.
That same sense of permanence coloured his views. “The Hindus have made the mistake of taking Jinnah seriously about Pakistan,” wrote Linlithgow, “and as a result they have given substance to a shadow.” This was rehashed by pro-Congress papers like the Hindustan Times, caricaturing Jinnah as a lone warrior against the world.
Yet beyond the biases of British bureaucrats, long proven incorrect, the Pakistan-as-poker-chip theory fails to find much material in its support. If Jalal’s work represents any orthodoxy, it is that of empire — the Raj trying to shrug off Pakistan as ‘bargaining in character.’
After all, when was that bargaining ever in play? As Indian critics would later shake their heads, Jinnah never once changed his mind, never once backpedalled, never once bargained away the basic point in all future negotiations — that unless the parties agreed to Pakistan in principle, there could be no further talks.
Even contemporaries like the Dalit genius B.R. Ambedkar dismissed such theories as “wishful thinking” as early as 1940. “The Mussalmans are devoted to Pakistan and are determined to have nothing else,” he wrote. The Raj wasn’t so discerning; it refused to accept Lahore for what it was — the final curtain on a united India.
How highly Jinnah himself thought of this bargaining chip theory is apparent in speech after speech. “The Hindus must give up their dream of a Hindu Raj and agree to divide India into a Hindu homeland and Muslim homeland,” he said on September 21, 1940. “Today we are prepared to take only one-fourth of India and leave three-fourths to them. Pakistan was our goal today, for which the Muslims of India will live for and, if necessary, die for. It is not a counter for bargaining.”
Some months later, he warned his party again, “It would be a great mistake to be carried away by Congress propaganda that the Pakistan demand was put forward as a counter for bargaining.”
If that wasn’t clear enough, he told the Muslim Students Federation in Lahore on March 2, 1941, almost a year after the resolution, “The only solution for the Muslims of India … is that India should be partitioned so that both communities can develop freely and fully according to their own genius … the vital contest in which we are engaged is not only for material gain, but also for the very existence of the soul of the Muslim nation. Hence I have said often that it is a matter of life and death to the Mussalmans and is not a counter for bargaining.”
When another year went by, Jinnah was asked at a press conference on September 13 whether there was still room to compromise. “If you start by asking for sixteen annas, there is room for bargaining,” he replied, “… Hindu India has got three-fourths of India in its pocket and it is Hindu India which is bargaining to see if it can get the remaining one-fourth for itself and diddle us out of it.”
Year after year, Jinnah was making the same point all the way to independence — that division was necessary, and to say he was bargaining for anything else was plain wrong.
His landmark presidential address to the Muslim League in Delhi, on April 24, 1943, put such arguments to rest for good. To the question Jalal would also raise, about the Muslims that would be stuck in India even after division, he was as clear as day:
Do not forget the minority provinces. It is they who have spread the light when there was darkness in the majority provinces. It is they who were the spearheads that the Congress wanted to crush with their overwhelming majority in the Muslim minority provinces. It is they who had suffered for you in the majority provinces, for your sake, for your benefit and for your advantage. But never mind, it is all in the role of a minority to suffer. We of the minority have suffered and are ready to face any consequences if we can liberate the 75 millions of our brethren in the north-western and eastern zones.
As Jinnah saw it, Pakistan meant the greatest good for the greatest number. To liberate the majority, the minority provinces would have to suffer. Yet his opinion, stark as it was, remained unchanged — to achieve Pakistan, they were ready to face the consequences.
Though the minority Muslims were “the pioneers and first soldiers of Pakistan”, Jinnah would say elsewhere, “we are determined that, whatever happens to us, we are not going to allow our brethren to be vassalised by the Hindu majority.”
As for the second plank of Jalal’s argument — that the Quaid’s real goal was a loose federation, with protections for the overall minority — Jinnah dismissed the idea in the same speech:
We are opposed to any scheme, nor can we agree to any proposal, which has for its basis any conception or idea of a central government – federal or confederal – for it is bound to lead in the long run to the emasculation of the entire Muslim nation, economically, socially, educationally, culturally, and politically and to the establishment of the Hindu majority raj in this subcontinent.
Therefore, remove from your mind any idea of some form of such loose federation. There is no such thing as a loose federation. Where there is a central government and provincial governments they will go on tightening, tightening and tightening until you are pulverised with regard to your powers as units.
Jinnah knew this to be true years before Indian-occupied Kashmir was turned into an open prison — the idea of a loose federation would be of no help to the Muslims, who would soon be overpowered by the logic of a brute majority.
The Ismail letters
Having seen how glaring the record is over the years, we now turn to the material favouring the other side — that is to say, the idea that Pakistan was a cunning card trick.
In her book The Sole Spokesman, Jalal’s key piece of evidence is a letter from 1941 — Jinnah’s reply to a Nawab Ismail of Patna (not to be confused with the more prominent Muslim leader, Nawab Ismail Khan of Meerut). Ismail had been pestering Jinnah for weeks to send him some notes on the Pakistan demand, so that he could prep for his meeting with H.V. Hodson, the viceroy’s adviser.
Hodson was an old-school imperialist, in the middle of writing a rather dense report on the communal problem. He’d once dubbed Jinnah “a conceited person, afraid that events may lose him the power that he craves,” and that the Muslim League was “first and foremost communal, and can never be anything else.”
The report, when it did come out, was as damp about the Pakistan demand. “The most interesting point was that every Muslim Leaguer, with but one exception, interpreted Pakistan as consistent with a confederation of India,” wrote Hodson.
Hence his conclusion: “My impression was that among the Muslim Leaguers in the provinces visited there was no genuine enthusiasm for Pakistan.” Instead, the Raj needed a new vocabulary, “one which recognises that the problem is one of sharing power rather [than of] qualifying the terms on which power is exercised by a majority.”
Here were again the same bad takes, recycled from one Raj paper-pusher to another. But all that would come later — in the run-up to the report, Nawab Ismail was in a tizzy. He was excited to see Hodson (as was usual when the landed gentry met their imperial overlords), and again asked the Quaid for instructions.
Jinnah responded to Ismail that the Lahore resolution was their guide, and refused to say more. To read Ayesha Jalal, however, that reply may have been the smoking gun:
As [Jinnah] told Nawab Ismail in November 1941, he could not openly and forcibly come out with these truths “because it is likely to be misunderstood especially at present”. In a line which reveals more than a thousand pages of research and propaganda, Jinnah admitted: “I think Mr Hodson finally understands as to what our demand is.”
To sum up, since Hodson was busy pooh-poohing the idea of Pakistan, and insisting that the real problem had to do with sharing power, Jinnah’s nod to the man — that Hodson finally understood their demand — was proof that the Quaid felt the same way.
But that’s quite the reach, especially if one were to read his actual letter to Ismail. With winter setting in, Jinnah had fallen ill and taken to his bed. His reply of November 25, 1941, was cold and curt, and is set out here in full:
Dear Nawab Sahib,
I have already written to you and explained to you the situation that we stand by the Lahore Resolution and it is quite clear to every man, who understands the constitutional problems of India, and also to every intelligent man if he applies his mind and tries to understand it.
I cannot say anything more because it is liable to be misunderstood and misrepresented, specially at present.
I think Mr. Hodson fully understands as to what our demand is.
With kind regards,
M. A. JINNAH
For starters, Jinnah had said that Hodson “fully understands” the demand, a degree less dramatic than the mistype of “finally understands” — more a passing comment than some grand admission.
At any rate, the need for such detective work disappears when one goes through his papers from that time. In late 1941, Jinnah was copying out parts of his Lahore speech to magazines: “The only course open to us all is to allow the major nations separate homelands by dividing India into autonomous national States,” he had said. (He had also sent a League paper to Linlithgow a while before, calling for the “division of India and the creation of independent Muslim states.”)
In yet another recent letter, to a Jam who had similar apprehensions as Ismail, Jinnah had again pointed to the Lahore resolution. “You will observe from this that it does not contemplate any form of central government or legislature,” Jinnah wrote on March 21, 1941, “and has for its basic principle that Muslim zones in the northwest and the east, while being vested with full responsible government, will continue in direct relationship with the British parliament as the Indian states, and the scheme will provide for the assumption, finally, by the respective regions of all powers…”
To anyone interested, Jinnah’s outbox was making as clear a case for total separation as possible. The same was true of his public messaging — just months after Jinnah’s letter to Ismail, his working committee’s resolution of April 11, 1942, spelled it out again, “So far as the Muslim League is concerned, it has finally decided that the only solution of India’s constitutional problem is the partition of India into independent zones.”
There’s also the fact that Jinnah hadn’t even read Hodson’s report — what to say of agreeing with his views — because it didn’t exist at the time; the Quaid had written to Ismail in November, when Hodson was still conducting his interviews. Even otherwise, Hodson’s eventual report may have sniggered at Pakistan as a project, but it never once doubted the Quaid’s motives.
There’s also an interesting postscript — the fact that Hodson himself would come around to the contrary. As a middling Raj official, he’d played it safe, reporting to his superiors what they were used to hearing. In retirement, however, he could afford to be more relaxed. In his own account of Partition titled The Great Divide, written over a quarter-century later, he saw the Lahore resolution for what it always was: “contiguous Muslim-majority regions were … to become fully sovereign States.” A confederal India, per Hodson, “was neither the two-nation theory nor the true idea of Pakistan.”
And that is where the matter of Henry Vincent Hodson, and Ismail’s letters to the Quaid, must rest.
Cabinet Mission finale
There’s a final climax to the poker chip theory, and it happened in the summer of 1946. Even as the Raj was getting ready to bolt for the door, it made one last effort to keep India together. Charged with the impossible were three of the empire’s old boys, each with names more elaborate than the last — A.V. Alexander, Stafford Cripps, and Frederick Pethick-Lawrence. Dubbed the Cabinet Mission, after the Attlee cabinet that despatched them, the trio carried London’s hope that India could still remain whole.
So it was that the mission made their way south, if with issues of their own. (“Summer in New Delhi is not the best time and place for negotiations,” sniffed the posh Cripps.) They arrived in India, ironically enough, on March 23, and brought the natives to the table.
Long story short, the mission’s grand plan was a three-tiered wedding cake of a country — provinces at the bottom, federations in the middle, and then a union with limited powers on top. The idea was that India would stay intact, while the middle layer — smaller sub-countries of Hindus and Muslims — would enjoy total autonomy.
These would be the three federations, or groups, as the mission called them, roughly consisting of the areas that would later become Pakistan, India, and Bangladesh. There was also an escape hatch — if the fear of Hindu domination hadn’t died down by then, the Muslim groups were free to secede after 10 years. Much to the shock of the mission, Jinnah accepted the plan.
But Nehru didn’t. “When India is free, India will do just what it likes,” he said at a Congress meeting on July 7. “We are not bound by a single thing.” He doubled down at a press conference the next day: “It is our problem,” he said of the minorities, “… We accept no outside interference in it, certainly not the British government’s interference in it.”
Nehru’s bombshell had the intended effect — Jinnah broke out of the plan at once, while Congress leader Abul Kalam Azad called it “one of those unfortunate events which changed the course of history … [Nehru] is at times apt to be carried away by his feelings.”
The mission failed, and would be mourned by Partition’s critics forevermore: “Arguably,” wrote one Indian scholar, “Jinnah, with his liberal, free market, pro-Western policies, would have been a far more successful Prime Minister for us than Nehru, with his socialist controls and his pro-Soviet brand of non-alignment.”
But counterfactuals are a risky business, and are best not indulged too much. We turn instead to the most popular take on the Cabinet Mission: that it was the last hurrah for a united India, as well as the final proof of the poker chip theory — since Jinnah had assented to the union, we’re told, this was clearly what he’d always wanted, not Pakistan. It was Nehru and the Congress that were to blame for Partition. Had they not behaved so badly and wiggled out of the deal, India could have come out of all this in one piece.
That, too, isn’t borne out by the facts. It’s true that Nehru did everything he could to make a mess of the plan (though Gandhi’s role as a spoiler is underrated). But to hold Nehru’s surliness, over a single summer in 1946, as one of the reasons for Partition, is to miss the forest for the trees.
First, the plan itself was some badly-drawn bureaucratese. Per one observer, “[Jinnah] sought nothing less than 50 per cent parity for 33 per cent of the populace, and a confederation of states with a weak centre. Such a heterogeneous super-state would have been unworkable in South Asia, and has never been operational in history.”
Second, the mission gave the League just about everything it wanted — parity with a far larger population (what Sardar Patel called “national suicide” for the Hindus), a button to secede, and the prospect of an undivided Punjab and Bengal to tear along with it; Jinnah had been dead set against their division, and would rue the day when they were cut in two.
Third, the poker chip theory finds no support here either — the Muslim League had always dubbed the plan a stepping stone to total separation; the text of its acceptance is grounded in “the establishment of a complete, sovereign Pakistan.” Seated before the mission’s bemused members in April, Jinnah told them that his starting point was “a Hindustan and Pakistan, each one of them a completely sovereign State.”
He was just as blunt to his party members; at a session of the Muslim League council on June 5, he likened the mission plan to a ship, “we can work on the two decks, provincial and group, and blow up the topmast.” Having suffered the Congress regime of the 1930s, Jinnah knew just how important it was not to leave the ship deck open to his rivals.
Fourth, his plans to “blow up the topmast” went beyond mere lip-service. When the League finally did join India’s interim government that same year, Jinnah’s finance minister, Liaquat Ali Khan, moved to tax the industrial barons backing Congress, and frustrate the centre in any way he could. The implications of League control, especially over Punjab and Bengal, began dawning on Nehru with even greater force.
Fifth, the Muslim movement had already proven with its votes — in the election of 1946 — that it was Pakistan or bust, with the leadership actively working toward secession. Throw in Nehru, with his need for a strong centre, and the odds that a viable coalition could have emerged had already been blown to smithereens. Hence, ultimately, the mission’s return to London.
To lament that it all could have worked out, had Attlee’s boys succeeded, is to be looking at it backwards. The Cabinet Mission came to Delhi with a single goal — keeping India intact. By that metric, failure was only a matter of time; Nehru just sped up what Jinnah would have ensured.
A promised land?
Seventy-five years in, how we’ve dealt with the founding father is rather shoddy. This is quite aside from the facts of Pakistan itself – where generals win war after war against elected governments, where politicians treat the country as an internship for their unemployable children, where judges bolt the doors to the assembly in judgments just a few paragraphs long, and where civil servants are as redundant as their fax machines.
Amid so much waste, the least that could be done was to remain faithful to the founder’s memory. Instead, one school of thought puts him down as a bluff expert who got swept up in the tide; a premise that contradicts almost everything he said or did in the last decade of his life.
But there’s also the other school, Pak Studies lite, that’s equally rooted in unreality: that Pakistan was a successor state to the glorious Islamic lands of the past, populated by foreign races near and far: from the frozen steppes to the desert sands. And Jinnah was just one of many deliverers, as were Qasim, Ghori, and Babur.
That theory may be even less fair to the Quaid — Pakistan was the dream of Muslim modernists, against the backdrop of colonial rule and a world war. It was powered by messy mass politics, and brought to life by Jinnah. As noted by Penderel Moon, a British civil servant long critical of the Raj:
There is, I believe, no historical parallel for a single individual effecting such a political revolution; and his achievement is a striking refutation of the theory that in the making of history the individual is of little or no significance. It was Mr Jinnah who created Pakistan and undoubtedly made history.
And it is in that history where the story of Pakistan truly begins — the Arab invasion of Sindh no more foretold the republic than the Ghaznavids that defeated the Arabs, than the Ghorids that set up the sultanate, than the Mughals that defeated the sultans. As far as today’s Pakistan is concerned, those were a thousand battles that settled nothing.
Nor was Islam imposed on India battle by battle. Had the old Muslim kings wished to spread the faith, their seats of power, from Delhi to Mysore, would have had far larger Muslim populations. That their fellow believers were instead concentrated along India’s flanks — today’s Pakistan and Bangladesh — points to a more gradual process; one that hews toward the persuasive powers of Sufi saints, as well as hopes of a better life that came with converting.
Most important, though these rulers are painted as just and generous now, they rarely touched the lives of the natives that had actually embraced Islam. Over two-thirds of the imperial services under the Mughals were manned by foreign-born Muslims. The ruling class “did not merge with the local converts, rarely recruited them to higher posts, refused to marry into them, and generally looked down on them,” wrote the Pakistani historian K.K. Aziz. “… At the end of five hundred years of continuous Muslim rule, only a minimal number of local Muslims had managed to climb high on the ladder of preferment.”
Put another way, the old kings lived in an age that had yet to conceive even the outer limits of an idea like Pakistan. It thus wasn’t the sultans, or their Mughal successors, that presaged it. That honour belongs to the growing number of Muslims on the ground — the ones their kings were indifferent to, and had little hand in converting anyway. By the time Pakistan was formed, those converts had mushroomed into the largest body of Muslims in the entire world, and had greater claim on the country’s creation than any medieval sultan ever had.
Jinnah’s own view of the Muslim experience was as holistic. “The Mussalmans came to India as conquerors, traders, preachers, and teachers,” he said while marking Eid in 1942. “… Today, the hundred millions of Mussalmans of India represent the largest compact body of Muslim population in any single part of the world.”
He persisted with this bottom-up view of the Pakistan idea, centring it not on the first conqueror, but the first convert. Dawn reports from a speech he gave at Aligarh Muslim University in 1944:
Tracing the history of the beginning of Islam in India, [Mr Jinnah] proved that Pakistan started the moment the first non-Muslim was converted to Islam in India, long before the Muslims established their rule. As soon as a Hindu embraced Islam, he was outcast not only religiously but also socially, culturally, and economically … Throughout the ages, Hindus had remained Hindus and Muslims had remained Muslims, and they had not merged their entities — that was the basis for Pakistan. In a gathering of high European and American officials, he was asked as to who was the author of Pakistan. Mr Jinnah’s reply was “Every Mussalman.”
Two years later, Jinnah told the Cabinet Mission trio “that he readily admitted that 70 per cent of Muslims were converts from Hindus. A large body were converted before any Muslim conqueror arrived.”
As for the old kings, Jinnah had little patience for their lessons. When Westminster’s Leo Amery revealed he was studying the reign of Akbar, Jinnah mocked him at length, “The British government in India, too, is constituted like Akbar’s government. Akbar had Hindu ministers and Muslim ministers. Akbar knew he had to rule over both. He was eminently concerned with his own autocratic rule, and that was no rule at all.”
He would also disdain advice offered by foes as different as Lord Mountbatten and C. Rajagopalachari — that he emulate Akbar, Aurangzeb, and other Muslim icons. “These great men might have differed from one another in many respects,” said Rajagopalachari, “but they agreed in looking upon this precious land and this great nation as one and essentially indivisible.”
“Yes,” Jinnah replied, “naturally, they did so as conquerors and parental rulers. Is this the kind of government Mr Rajagopalachari does still envisage? And did the Hindus of those days willingly accept the rule of the ‘great men’? I may or may not be suffering from a diseased mentality, but the statement of Mr Rajagopalachari … indicates that in him there is no mind left at all.”
Each time he was confronted with the kings of the past, Jinnah centred the Muslim in the street. In a rapidly changing India, yesterday’s rulers were no longer the yardstick; in some cases, like Akbar’s, they were self-serving autocrats.
Besides, it was the absence of Muslim kings, rather than their long-ago presence, that was in play. At the Muslim League’s inaugural session in Dhaka, its first chairman, Viqar-ul-Mulk, had said, “Woe betide the time when we become the subjects of our neighbours, and answer them for the sins, real and imaginary, of Aurangzeb, who lived and died two centuries ago, and other Mussalman conquerors and rulers who went before him.”
That Mulk’s words marked the birth of the party that created Pakistan was no coincidence — it was rooted in anxiety for the future, of an India where the locus of power had shifted dramatically from a Muslim elite to a Hindu majority. While most native Muslims fell in neither category, they could now be expected to answer to the Hindus for the sins of the past (one in which they’d been dressed up as temple-sacking jackals by British authors).
Hence, also, Jinnah’s own evolution. Though easy to forget now, India’s pre-eminent young nationalist in the 1910s wasn’t Gandhi; it was Jinnah, the apostle of unity. By injecting religion into the freedom movement, Gandhi stole the stage and rooted it in religious tropes — satya, ahimsa, dharma, and the majoritarian flood that came with it. Jinnah’s calls for sanity were shouted down by a populist beast he could no longer recognise. After much heartbreak, public and private, he realised there was no way out but a separate state.
And who better than the reformed Hodson to explain that voyage of discovery: “One thing is certain. It was not for any venal motive that he changed. He could be bought by no one, and for no price … He was a steadfast idealist as well as a man of scrupulous honour. The fact to be explained is that in middle life he supplanted one ideal by another, and having embraced it, clung to it with a fanatic’s grip to the end of his days.”
This also ties in with one of the most popular myths about the Quaid, if from the other end of the spectrum — that India was an ageless union where Hindus and Muslims made merry together, until the British divided them, incited them, and abandoned them. The imperial hand, in cahoots with Muslim collaborators like Jinnah, tore the union asunder.
The same line would be rehashed by liberal nostalgics, even onscreen. “My passport is Pakistani, my roots are in India,” says a teary-eyed grandmother in the superhero series Ms Marvel. “And in between all of this, there is a border … marked with blood and pain. People are claiming their identity based on an idea some old Englishmen had when they were fleeing the country.”
That myth, too, falls away fast. It’s true that the British enjoyed demonising the Muslim rulers they had torn down and replaced. And it’s also true that colonial power in India rested on the art of divide and rule. (“Strict supervision, and play them off against the other,” goes one of Rudyard Kipling’s short stories. “That is the secret of our government.”)
But tensions between Hindus and Muslims weren’t the Raj’s main means of control by the nineteenth century — if only because that risked exploding far larger religious blocs. The Raj preferred safer subdivisions, ever more fragmented along ethnic, linguistic, and political lines. Per leftist historian Perry Anderson:
For the British, the ideal arrangement was rather to be found in Punjab, the apple of the imperial eye: interconfessional unity around a strong regional identity, loyal to the Raj, against which neither Congress nor Muslim League made any headway in the interwar years. During the Second World War, when Congress came out against participation in the conflict, the League was favoured. But once the war was over, Britain sought to preserve the unity of the subcontinent as its historic creation, and when it could not, tilted towards Congress far more decisively than it ever had to the League. Popular conceptions in India blaming the creation of Pakistan on a British plot are legends.
When it came down to it, most British mandarins did everything they could to keep India in one piece — the ultimate drivers of the split were “indigenous, not imperial.”
To then argue that Pakistan was the result of butchering Mother India is equally faulty: as per the Quaid, no such realm even existed before the British showed up. “India of modern conception, with its so-called geographical unity, is entirely the creation of the British,” Jinnah would say, “… whose ultimate sanction is the sword.”
Indian nationalists rebut this with an ancient pedigree; of a Hindustan as old as the hills. But one can cut Jinnah some slack, seeing as India’s varied Hindu populations had never formed a subcontinental state of such dimensions. Jinnah put it more plainly, “When did the Hindus rule India last and what part of it? It is a historical fact that for nearly one thousand years, the Hindus have not ruled any part of India worth mentioning.”
The same can be said for coexistence — as wrong as the trope of the foreign Muslim oppressor was the rival fantasy that all was peace and love between the faiths at the local level. While hard to admit, the Hindu-Muslim problem was in evidence well before the colonisers wrote tall tales about it, and well before Jinnah arrived at the same, unmistakable conclusion.
That the subcontinent was becoming the theatre of two major, incompatible faiths finds mention over the course of a millennium. For the celebrated Persian polymath al-Biruni, writing in 1030, the real-life interaction between the two was like fire and ice. “First,” al-Biruni said of the Hindus, “they differ from us in everything which other nations have in common.”
… Secondly, they totally differ from us in religion, as we believe in nothing in which they believe, and vice versa … They … forbid having any connection with them, be it by intermarriage or any other kind of relationship, or by sitting, eating, and drinking with them, because thereby, they think, they would be polluted.
… In the third place, in all manners and usages they differ from us to such a degree as to frighten their children with us, with our dress, and our ways and customs, and as to declare us to be devil’s breed, and our doings as the very opposite of all that is good and proper.
Well before the predations of the East India Company, before Hindu sectarians like the Arya Samaj, before Muslim reactionaries boiled over with resentment, the idea of a single Indian nation — unreal at any time before 1947 — was being rebutted by the lived experience of two communities with two very different confessions.
That such deep legacies of conflict could be wished away as British intrigues or, alternatively, as Jinnah’s bargaining counters, in a secular paradise that had never once existed, soothed those denying the logic of Partition — that is, until they became willing parties to the same outcome.
In any event, the poker chip school spends most of its time on Delhi’s parlour games, and none at all on the groundswell in the distance — the ideologues of UP, the progressives of Dhaka, the vanguard of Sindh, the rebel scouts of Gilgit, and millions more. In short, it ignores the men and women who gave their heart and soul over to the idea of Pakistan.
Most importantly, it does a disservice to that movement’s core. Muhammad Ali Jinnah’s last years were so consumed by his pursuit of a new nation-state, he destroyed his lungs and lost his life. Seventy-five years later, it is unjust to continue attributing this country to a sleight of hand, rather than his supreme will.
As India completes 75 years of independence from British rule, marking the anniversary on 15th August, a careful, holistic review of where the country stood in 1947 and where it stands in 2022 will convince a discerning observer that India’s achievements and progress have been substantial, significant and praiseworthy.
In 1947, when India compelled the British to grant freedom, it was an underdeveloped country with a low literacy level, a high degree of economic disparity and a large percentage of its people living below the poverty line.
These conditions have changed considerably in the last seventy-five years, with significant industrial development, growth in agricultural production and productivity, marked improvement in literacy and public health, and solid advances in technology, particularly in information technology and digital media. The improved figures and data are well known and in the public domain.
The question is: given India’s landscape, varied climatic and soil conditions, irrigation potential, mineral deposits, long coastal belt and several other advantages, should one conclude that India could have done better than it has?
The best way of answering this query would be to compare India’s growth with a few other countries facing similar conditions in 1947.
Japan & Germany:
When India attained independence in 1947, Germany and Japan lay battered and virtually paralysed after their defeat in the second world war.
Both countries have made remarkable progress in the last seventy-five years and today rank among the most developed in the world, with high levels of prosperity.
However, both Germany and Japan entered this period with a far stronger technology base than India’s — the same base that had enabled them to wage the second world war and to display technological and military capability of a high order.
While credit should be given to the governments and people of Germany and Japan for their remarkable post-war progress, India’s technological and industrial base in 1947 was at a much lower level; India had to start virtually from scratch.
Therefore, comparing the growth of Germany and Japan to that of India during the last seventy-five years may not be appropriate.
China:
In 1947, India and China were nearly on par in their technological, industrial and agricultural base. In the last seventy-five years, China has grown phenomenally and now claims superpower status in the world.
China accounts for only about 15% of the global economy by size but now contributes 25 to 30% of global growth. Unless the European Union is counted as a single economy, China is the second largest economy in the world. China’s share of world output rose from 6.3% in 1996 to 17.8% in 2020, and China contributed around 70% of the growth in developing economies’ share of world GDP over the last two decades.
Today, the size of the Indian economy is much smaller than that of China. What is the reason for this sharp difference in the growth profile of India and China?
One can say that China is a totalitarian country, and that its government has therefore been able to implement any project it deems fit without resistance from any quarter. But totalitarian rule alone cannot explain China’s success, since several other totalitarian countries have not progressed to any comparable level.
The reason for China’s growth is strong government, combined with a deliberate policy of cooperating liberally with developed countries on industrialisation and technology acquisition. Many multinational companies now operate in China with large industrial capacities, contributing substantially to China’s technological growth and economy. Chinese companies have gained greatly from joint ventures with these multinationals.
The credit must be given to the Chinese government and the people of China for this phenomenal growth.
India could have done better in the last seventy-five years had the following issues been tackled adequately.
India’s population in 1947 was around 347 million; it is about 1,400 million at present. The mouths to be fed have multiplied several times over, and India’s economic growth, though impressive, has not kept pace with population growth. Next year, India will emerge as the most populous country in the world. China too is a populous country, but the Chinese government admirably controlled its population growth through the one-child policy, which India has not been able to do for several reasons.
Unlike China, India is a democratic country where freedom of speech and personal freedom remain at a very high level. As a result, several projects announced by the government have been criticised and resisted by a section of activists and several political parties, making India perhaps the noisiest democracy in the world. Several well-meaning schemes could not be implemented, and good projects have been forced to close down, due to protests by so-called activists and some political parties. The latest example is the Sterlite Copper plant in Tamil Nadu: with that plant closed, India has become a net importer of copper, whereas it was a big exporter while the plant was operating. Another example is the important and technologically significant Neutrino project, which has been stopped by political groups. Many other examples can readily be pointed out.
Another major issue is the rapid growth of dynastic politics in India, with family groups holding a vice-like grip over several political parties across the country. Except for the BJP and the communist parties, every other major party in India today is a dynastic party under family control. With family groups ruling several states and vested interests entrenching themselves, administrative standards have deteriorated, and in several states political corruption has reached an unacceptable level. Committed people of proven competence cannot win elections on merit. Such conditions have become a drag on the overall growth of the country.
What scenario for the coming years?
During the last eight years, Prime Minister Modi has elevated the quality of governance and introduced several imaginative schemes, keeping in view both the requirements of people at the lower economic level and the compulsive need to forge ahead in technology and productivity. Even in the present post-COVID period, when several countries, including developed ones, face serious inflation and recession, the Indian economy is doing much better — a fact recently confirmed by a report from the International Monetary Fund (IMF).
Though several opposition parties and some activists have opposed and criticised Modi’s governance in severe terms, the overall view among a cross-section of the country’s people appears to be that Prime Minister Modi has done a reasonably good job and that this trend should continue.
When I first went to Hiroshima in 1967, the shadow on the steps was still there. It was an almost perfect impression of a human being at ease: legs splayed, back bent, one hand by her side as she sat waiting for a bank to open.
At a quarter past eight on the morning of August 6, 1945, she and her silhouette were burned into the granite. I stared at the shadow for an hour or more, then I walked down to the river where the survivors still lived in shanties.
I met a man called Yukio, whose chest was etched with the pattern of the shirt he was wearing when the atomic bomb was dropped.
He described a huge flash over the city, “a bluish light, something like an electrical short”, after which wind blew like a tornado and black rain fell. “I was thrown on the ground and noticed only the stalks of my flowers were left. Everything was still and quiet, and when I got up, there were people naked, not saying anything. Some of them had no skin or hair. I was certain I was dead.”
Nine years later, I returned to look for him and he was dead from leukemia.
“No Radioactivity in Hiroshima Ruin” said a New York Times headline on September 13, 1945, a classic of planted disinformation. “General Farrell,” reported William H. Lawrence, “denied categorically that [the atomic bomb] produced a dangerous, lingering radioactivity.”
Only one reporter, Wilfred Burchett, an Australian, had braved the perilous journey to Hiroshima in the immediate aftermath of the atomic bombing, in defiance of the Allied occupation authorities, which controlled the “press pack”.
“I write this as a warning to the world,” reported Burchett in the London Daily Express of September 5, 1945. Sitting in the rubble with his Baby Hermes typewriter, he described hospital wards filled with people with no visible injuries who were dying from what he called “an atomic plague”.
For this, his press accreditation was withdrawn, he was pilloried and smeared. His witness to the truth was never forgiven.
The atomic bombing of Hiroshima and Nagasaki was an act of premeditated mass murder that unleashed a weapon of intrinsic criminality. It was justified by lies that form the bedrock of America’s war propaganda in the 21st century, casting a new enemy, and target – China.
During the 75 years since Hiroshima, the most enduring lie is that the atomic bomb was dropped to end the war in the Pacific and to save lives.
“Even without the atomic bombing attacks,” concluded the United States Strategic Bombing Survey of 1946, “air supremacy over Japan could have exerted sufficient pressure to bring about unconditional surrender and obviate the need for invasion. Based on a detailed investigation of all the facts, and supported by the testimony of the surviving Japanese leaders involved, it is the Survey’s opinion that … Japan would have surrendered even if the atomic bombs had not been dropped, even if Russia had not entered the war [against Japan] and even if no invasion had been planned or contemplated.”
The National Archives in Washington contains documented Japanese peace overtures as early as 1943. None was pursued. A cable sent on May 5, 1945 by the German ambassador in Tokyo and intercepted by the U.S. made clear the Japanese were desperate to sue for peace, including “capitulation even if the terms were hard”. Nothing was done.
The U.S. Secretary of War, Henry Stimson, told President Truman he was “fearful” that the U.S. Air Force would have Japan so “bombed out” that the new weapon would not be able “to show its strength”. Stimson later admitted that “no effort was made, and none was seriously considered, to achieve surrender merely in order not to have to use the [atomic] bomb”.
Stimson’s foreign policy colleagues — looking ahead to the post-war era they were then shaping “in our image”, as Cold War planner George Kennan famously put it — made clear they were eager “to browbeat the Russians with the [atomic] bomb held rather ostentatiously on our hip”. General Leslie Groves, director of the Manhattan Project that made the atomic bomb, testified: “There was never any illusion on my part that Russia was our enemy, and that the project was conducted on that basis.”
The day after Hiroshima was obliterated, President Harry Truman voiced his satisfaction with the “overwhelming success” of “the experiment”.
The “experiment” continued long after the war was over. Between 1946 and 1958, the United States exploded 67 nuclear bombs in the Marshall Islands in the Pacific: the equivalent of more than one Hiroshima every day for 12 years.
The human and environmental consequences were catastrophic. During the filming of my documentary, The Coming War on China, I chartered a small aircraft and flew to Bikini Atoll in the Marshalls. It was here that the United States exploded the world’s first Hydrogen Bomb. It remains poisoned earth. My shoes registered “unsafe” on my Geiger counter. Palm trees stood in unworldly formations. There were no birds.
I trekked through the jungle to the concrete bunker where, at 6.45 on the morning of March 1, 1954, the button was pushed. The sun, which had risen, rose again and vaporised an entire island in the lagoon, leaving a vast black hole, which from the air is a menacing spectacle: a deathly void in a place of beauty.
The radioactive fall-out spread quickly and “unexpectedly”. The official history claims “the wind changed suddenly”. It was the first of many lies, as declassified documents and the victims’ testimony reveal.
Gene Curbow, a meteorologist assigned to monitor the test site, said, “They knew where the radioactive fall-out was going to go. Even on the day of the shot, they still had an opportunity to evacuate people, but [people] were not evacuated; I was not evacuated… The United States needed some guinea pigs to study what the effects of radiation would do.”
[Photo caption: Marshall Islander Nerje Joseph with a photograph of herself as a child, taken soon after the H-Bomb explosion of March 1, 1954]
Like Hiroshima, the secret of the Marshall Islands was a calculated experiment on the lives of large numbers of people. This was Project 4.1, which began as a scientific study of mice and became an experiment on “human beings exposed to the radiation of a nuclear weapon”.
The Marshall Islanders I met in 2015 — like the survivors of Hiroshima I interviewed in the 1960s and 70s — suffered from a range of cancers, commonly thyroid cancer; thousands had already died. Miscarriages and stillbirths were common; those babies who lived were often deformed horribly.
Unlike Bikini, nearby Rongelap atoll had not been evacuated during the H-Bomb test. Directly downwind of Bikini, Rongelap’s skies darkened and it rained what first appeared to be snowflakes. Food and water were contaminated; and the population fell victim to cancers. That is still true today.
I met Nerje Joseph, who showed me a photograph of herself as a child on Rongelap. She had terrible facial burns and much of her hair was missing. “We were bathing at the well on the day the bomb exploded,” she said. “White dust started falling from the sky. I reached to catch the powder. We used it as soap to wash our hair. A few days later, my hair started falling out.”
Lemoyo Abon said, “Some of us were in agony. Others had diarrhoea. We were terrified. We thought it must be the end of the world.”
U.S. official archive film I included in my film refers to the islanders as “amenable savages”. In the wake of the explosion, a U.S. Atomic Energy Commission official is seen boasting that Rongelap “is by far the most contaminated place on earth”, adding, “it will be interesting to get a measure of human uptake when people live in a contaminated environment.”
American scientists, including medical doctors, built distinguished careers studying the “human uptake”. There they are in flickering film, in their white coats, attentive with their clipboards. When an islander died in his teens, his family received a sympathy card from the scientist who studied him.
I have reported from five nuclear “ground zeros” throughout the world — in Japan, the Marshall Islands, Nevada, Polynesia and Maralinga in Australia. Even more than my experience as a war correspondent, this has taught me about the ruthlessness and immorality of great power: that is, imperial power, whose cynicism is the true enemy of humanity.
This struck me forcibly when I filmed at Taranaki Ground Zero at Maralinga in the Australian desert. In a dish-like crater was an obelisk on which was inscribed: “A British atomic weapon was test exploded here on 9 October 1957”. On the rim of the crater was this sign: WARNING: RADIATION HAZARD
Radiation levels for a few hundred metres around this point may be above those considered safe for permanent occupation.
For as far as the eye could see, and beyond, the ground was irradiated. Raw plutonium lay about, scattered like talcum powder: plutonium is so dangerous to humans that a third of a milligram gives a 50 percent chance of cancer.
The only people who might have seen the sign were Indigenous Australians, for whom there was no warning. According to an official account, if they were lucky “they were shooed off like rabbits”.
The Enduring Menace
Today, an unprecedented campaign of propaganda is shooing us all off like rabbits. We are not meant to question the daily torrent of anti-Chinese rhetoric, which is rapidly overtaking the torrent of anti-Russia rhetoric. Anything Chinese is bad, anathema, a threat: Wuhan … Huawei. How confusing it is when “our” most reviled leader says so.
The current phase of this campaign began not with Trump but with Barack Obama, who in 2011 flew to Australia to declare the greatest build-up of U.S. naval forces in the Asia-Pacific region since World War Two. Suddenly, China was a “threat”. This was nonsense, of course. What was threatened was America’s unchallenged psychopathic view of itself as the richest, the most successful, the most “indispensable” nation.
What was never in dispute was its prowess as a bully — with more than 30 members of the United Nations suffering American sanctions of some kind and a trail of blood running through defenceless countries bombed, their governments overthrown, their elections interfered with, their resources plundered.
Obama’s declaration became known as the “pivot to Asia”. One of its principal advocates was his Secretary of State, Hillary Clinton, who, as WikiLeaks revealed, wanted to rename the Pacific Ocean “the American Sea”.
Whereas Clinton never concealed her warmongering, Obama was a maestro of marketing. “I state clearly and with conviction,” said the new president in 2009, “that America’s commitment is to seek the peace and security of a world without nuclear weapons.”
Obama increased spending on nuclear warheads faster than any president since the end of the Cold War. A “usable” nuclear weapon was developed. Known as the B61 Model 12, it means, according to General James Cartwright, former vice-chair of the Joint Chiefs of Staff, that “going smaller [makes its use] more thinkable”.
The target is China. Today, more than 400 American military bases almost encircle China with missiles, bombers, warships and nuclear weapons. From Australia north through the Pacific to South-East Asia, Japan and Korea and across Eurasia to Afghanistan and India, the bases form, as one U.S. strategist told me, “the perfect noose”.
A study by the RAND Corporation – which, since Vietnam, has planned America’s wars – is entitled War with China: Thinking Through the Unthinkable. Commissioned by the U.S. Army, the authors evoke the infamous catch cry of its chief Cold War strategist, Herman Kahn – “thinking the unthinkable”. Kahn’s book, On Thermonuclear War, elaborated a plan for a “winnable” nuclear war.
Kahn’s apocalyptic view is shared by Trump’s Secretary of State Mike Pompeo, an evangelical fanatic who believes in the “rapture of the End”. He is perhaps the most dangerous man alive. “I was CIA director,” he boasted, “We lied, we cheated, we stole. It was like we had entire training courses.” Pompeo’s obsession is China.
The endgame of Pompeo’s extremism is rarely if ever discussed in the Anglo-American media, where the myths and fabrications about China are standard fare, as were the lies about Iraq. A virulent racism is the sub-text of this propaganda. Classified “yellow” even though they were white, the Chinese are the only ethnic group to have been banned by an “exclusion act” from entering the United States, because they were Chinese. Popular culture declared them sinister, untrustworthy, “sneaky”, depraved, diseased, immoral.
An Australian magazine, The Bulletin, was devoted to promoting fear of the “yellow peril” as if all of Asia was about to fall down on the whites-only colony by the force of gravity.
As the historian Martin Powers writes, acknowledging China’s modernism, its secular morality and its “contributions to liberal thought threatened European face, so it became necessary to suppress China’s role in the Enlightenment debate… For centuries, China’s threat to the myth of Western superiority has made it an easy target for race-baiting.”
In the Sydney Morning Herald, tireless China-basher Peter Hartcher described those who spread Chinese influence in Australia as “rats, flies, mosquitoes and sparrows”. Hartcher, who favourably quotes the American demagogue Steve Bannon, likes to interpret the “dreams” of the current Chinese elite, to which he is apparently privy. These are inspired by yearnings for the “Mandate of Heaven” of 2,000 years ago. Ad nauseam.
To combat this “mandate”, the Australian government of Scott Morrison has committed one of the most secure countries on earth, whose major trading partner is China, to hundreds of billions of dollars’ worth of American missiles that can be fired at China.
The trickledown is already evident. In a country historically scarred by violent racism towards Asians, Australians of Chinese descent have formed a vigilante group to protect delivery riders. Phone videos show a delivery rider punched in the face and a Chinese couple racially abused in a supermarket. Between April and June, there were almost 400 racist attacks on Asian-Australians.
“We are not your enemy,” a high-ranking strategist in China told me, “but if you [in the West] decide we are, we must prepare without delay.” China’s arsenal is small compared with America’s, but it is growing fast, especially the development of maritime missiles designed to destroy fleets of ships.
“For the first time,” wrote Gregory Kulacki of the Union of Concerned Scientists, “China is discussing putting its nuclear missiles on high alert so that they can be launched quickly on warning of an attack… This would be a significant and dangerous change in Chinese policy…”
In Washington, I met Amitai Etzioni, distinguished professor of international affairs at George Washington University, who wrote that a “blinding attack on China” was planned, “with strikes that could be mistakenly perceived [by the Chinese] as pre-emptive attempts to take out its nuclear weapons, thus cornering them into a terrible use-it-or-lose-it dilemma [that would] lead to nuclear war.”
In 2019, the U.S. staged its biggest single military exercise since the Cold War, much of it in high secrecy. An armada of ships and long-range bombers rehearsed an “Air-Sea Battle Concept for China” – ASB – blocking sea lanes in the Straits of Malacca and cutting off China’s access to oil, gas and other raw materials from the Middle East and Africa.
It is fear of such a blockade that has seen China develop its Belt and Road Initiative along the old Silk Road to Europe and urgently build strategic airstrips on disputed reefs and islets in the Spratly Islands.
In Shanghai, I met Lijia Zhang, a Beijing journalist and novelist, typical of a new class of outspoken mavericks. Her best-selling book has the ironic title Socialism Is Great! Having grown up in the chaotic, brutal Cultural Revolution, she has travelled and lived in the U.S. and Europe. “Many Americans imagine,” she said, “that Chinese people live a miserable, repressed life with no freedom whatsoever. The [idea of] the yellow peril has never left them… They have no idea there are some 500 million people being lifted out of poverty, and some would say it’s 600 million.”
Modern China’s epic achievements, its defeat of mass poverty, and the pride and contentment of its people (measured forensically by American pollsters such as Pew) are wilfully unknown or misunderstood in the West. This alone is a commentary on the lamentable state of Western journalism and the abandonment of honest reporting.
China’s repressive dark side and what we like to call its “authoritarianism” are the facade we are allowed to see almost exclusively. It is as if we are fed unending tales of the evil super-villain Dr. Fu Manchu. And it is time we asked why: before it is too late to stop the next Hiroshima.
The Canadian psychologist and alt-right media fixture Jordan Peterson recently stumbled onto an important insight. In a podcast episode titled “Russia vs. Ukraine or Civil War in the West?,” he recognized a link between the war in Europe and the conflict between the liberal mainstream and the new populist right in North America and Europe.
Although Peterson initially condemns Russian President Vladimir Putin’s war of aggression, his stance gradually morphs into a kind of metaphysical defense of Russia. Referencing Dostoevsky’s Diaries, he suggests that Western European hedonist individualism is far inferior to Russian collective spirituality, before duly endorsing the Kremlin’s designation of contemporary Western liberal civilization as “degenerate.” He describes postmodernism as a transformation of Marxism that seeks to destroy the foundations of Christian civilization. Viewed in this light, the war in Ukraine is a contest between traditional Christian values and a new form of communist degeneracy.
This language will be familiar to anyone familiar with Hungarian Prime Minister Viktor Orbán’s regime, or with the January 6, 2021, insurrection at the US Capitol. As CNN’s John Blake put it, that day “marked the first time many Americans realized the US is facing a burgeoning White Christian nationalist movement,” which “uses Christian language to cloak sexism and hostility to Black people and non-White immigrants in its quest to create a White Christian America.” This worldview has now “infiltrated the religious mainstream so thoroughly that virtually any conservative Christian pastor who tries to challenge its ideology risks their career.”
The fact that Peterson has assumed a pro-Russian, anti-communist position is indicative of a broader trend. In the United States, many Republican Party lawmakers have refused to support Ukraine. J.D. Vance, a Donald Trump-backed Republican Senate candidate from Ohio, finds it “insulting and strategically stupid to devote billions of resources to Ukraine while ignoring the problems in our own country.” And Matt Gaetz, a Republican member of the House of Representatives from Florida, is committed to ending US support for Ukraine if his party wins control of the chamber this November.
But does accepting Peterson’s premise that Russia’s war and the alt-right in the US are platoons of the same global movement mean that leftists should simply take the opposite side? Here, the situation gets more complicated. Although Peterson claims to oppose communism, he is attacking a major consequence of global capitalism. As Marx and Engels wrote more than 150 years ago in the first chapter of The Communist Manifesto:
“The bourgeoisie, wherever it has got the upper hand, has put an end to all feudal, patriarchal, idyllic relations. … All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.”
This observation is studiously ignored by leftist cultural theorists who still focus their critique on patriarchal ideology and practice. Yet surely the critique of patriarchy has reached its apotheosis at precisely the historical moment when patriarchy has lost its hegemonic role – that is, when market individualism has swept it away. After all, what becomes of patriarchal family values when a child can sue her parents for neglect and abuse (implying that parenthood is just another temporary and dissolvable contract between utility-maximizing individuals)?
Of course, such “leftists” are sheep in wolves’ clothing, telling themselves that they are radical revolutionaries as they defend the reigning establishment. Today, the melting away of pre-modern social relations and forms has already gone much further than Marx could have imagined. All facets of human identity are now becoming a matter of choice; nature is becoming more and more an object of technological manipulation.
The “civil war” that Peterson sees in the developed West is thus a chimera, a conflict between two versions of the same global capitalist system: unrestrained liberal individualism versus neo-fascist conservativism, which seeks to unite capitalist dynamism with traditional values and hierarchies.
There is a double paradox here. Western political correctness (“wokeness”) has displaced class struggle, producing a liberal elite that claims to protect threatened racial and sexual minorities in order to divert attention from its members’ own economic and political power. At the same time, this lie allows alt-right populists to present themselves as defenders of “real” people against corporate and “deep state” elites, even though they, too, occupy positions at the commanding heights of economic and political power.
Ultimately, both sides are fighting over the spoils of a system in which they are wholly complicit. Neither side really stands up for the exploited or has any interest in working-class solidarity. The implication is not that “left” and “right” are outdated notions – as one often hears – but rather that culture wars have displaced class struggle as the engine of politics.
Where does that leave Europe? The Guardian’s Simon Tisdall paints a bleak but accurate picture:
“Putin’s aim is the immiseration of Europe. By weaponising energy, food, refugees and information, Russia’s leader spreads the economic and political pain, creating wartime conditions for all. A long, cold, calamity-filled European winter of power shortages and turmoil looms. … Freezing pensioners, hungry children, empty supermarket shelves, unaffordable cost of living increases, devalued wages, strikes and street protests point to Sri Lanka-style meltdowns. An exaggeration? Not really.”
To prevent a total collapse into disorder, the state apparatus, in close coordination with other states and relying on local mobilizations of people, will have to regulate the distribution of energy and food, perhaps resorting to administration by the armed forces. Europe thus has a unique chance to leave behind its charmed life of isolated welfare, a bubble in which gas and electricity prices were the biggest worries. As Ukrainian President Volodymyr Zelensky recently told Vogue, “Just try to imagine what I’m talking about happening to your home, to your country. Would you still be thinking about gas prices or electricity prices?”
He’s right. Europe is under attack, and it needs to mobilize, not just militarily but socially and economically as well. We should use the crisis to change our way of life, adopting values that will spare us from an ecological catastrophe in the coming decades. This may be our only chance.
The world is now facing a man-made food catastrophe. It is reaching crisis levels.
Current policies in many parts of the world prioritize climate change and the realization of a green new deal. Meanwhile, such policies will contribute to children dying of severe malnutrition in broken food systems, amid shortages of food and water, stress, anxiety, fear, and dangerous chemical exposure.
Putting more pressure on farmers and the food system is a recipe for catastrophe. The immune systems of many people, especially children, have lost their resilience and weakened too far, leaving high risks of intoxication, infection, non-communicable and infectious disease, death and infertility.
Dutch farmers, many of whom will face a cost-of-living crisis after 2030, have drawn the line. They are supported by a growing number of farmers and citizens worldwide.
It is not farmers who are the heaviest polluters of the environment, but the industries that make the products needed for a technocratic revolution: green energy, data mining, and artificial intelligence. As more of the WEF’s plans are rolled out by politicians, inequalities grow and conflicts rise all over the world.
The strong farmers’ revolt in the Netherlands is a call for an urgent transition to a people-oriented, free and healthy world, with nutritious food cultivated and harvested in harmony with natural processes. Ordinary people worldwide are increasingly cooperating to prevent a mass famine catastrophe brought on by the plans of scientism and technocracy to rule and control the world through unelected scientists and elites.
There is enough food; access to food is the problem
Farmers around the world normally grow enough food – about 2,800 calories per person per day, where 2,100 would be sufficient – to support a population of nine to ten billion people. Yet more than 828 million people still have too little to eat each day. The problem is not always food; it is access. Goal 2 of the UN’s 2015 Sustainable Development Goals – no hunger and malnutrition for anyone by 2030 – will not be reached.
Throughout history, natural and man-made disasters have repeatedly caused prolonged food insecurity, resulting in hunger, malnutrition (undernourishment) and mortality. The Covid-19 pandemic has worsened the situation: since it began, estimates suggest that food insecurity has doubled, if not tripled, in some places around the world.
Moreover, during the pandemic global hunger rose by 150 million and now affects 828 million people, 46 million of whom are at the brink of starvation, facing emergency levels of hunger or worse. In the hardest-hit places, this means famine or famine-like conditions. At least 45 million children are suffering from wasting, the most visible and severe form of malnutrition, and one that is potentially life-threatening.
With global prices of food and fertilizer already at worrying highs, the continuing impacts of the pandemic, the political drive to realize climate-change goals and the Russia-Ukraine war raise serious concerns for food security in both the short and the long term.
The world is facing a further spike in food shortages, putting more families worldwide at risk of severe malnutrition. Communities that survived earlier crises are left more vulnerable to a new shock than before; the effects accumulate, tipping them into famine (acute starvation and a sharp increase in mortality).
Furthermore, economic growth and national development are slowing because of a shrinking workforce, itself the result of a sharp decline in well-being and higher mortality rates.
In the wake of new nitrogen limits that require farmers to radically curb their nitrogen emissions by up to 70 percent in the next eight years, tens of thousands of Dutch farmers have risen in protest against the government.
Farmers will be forced to use less fertilizer and even to reduce their livestock, in some cases by up to 95%. For smaller family-owned farms, these targets will be impossible to meet. Many will be forced to shut down, including farms that have been in the same family for up to eight generations.
Moreover, a significant reduction of Dutch farming will have huge repercussions for the global food supply chain: the Netherlands is the world’s second-largest agricultural exporter after the United States. Still, the Dutch government pursues its climate-change agenda, even though no law yet supports its implementation and the cuts will do little to change the planet’s major air pollution. The models used to arrive at the government’s decision are disputed by recognized scientists.
In no communication have Dutch politicians considered the effect of their decision on one of the most important goals of the UN agreement: ending hunger, food insecurity and malnutrition for all by 2030.
Unfortunately, Sri Lanka – whose political leadership imposed a zero-nitrogen and zero-CO2 emissions policy, under which farmers were not allowed to use fertilizers or pesticides – now faces economic collapse, severe hunger, and difficulty in accessing food. Still, politicians responsible for nitrogen-emission and climate policies in other countries pursue the same green agenda.
Furthermore, experts are warning that heat, flooding, drought, wildfires and other disasters are wreaking economic havoc, with worse to come. Food and water shortages are already in the news.
On top of that, Australian experts warn of the risk of an outbreak of a viral disease in cattle, which could deal an A$80 billion blow to the Australian economy and cause even more supply-chain disruption. Countless businesses and producers would go bankrupt. The emotional toll of euthanizing healthy herds is immense and hardly bearable; it is pushing more farmers to take their own lives.
The Danish government, for its part, was forced to apologize after an investigative report into the culling of more than 15 million mink in November 2020 found that mink breeders and the public had been misled and that authorities had been given clearly illegal instructions. One can hope this will prompt politicians to reconsider such drastic measures against farmers.
Worldwide, farmers’ protests are growing, supported by more and more citizens who are standing up against expensive “green policy” mandates that have already brought massive misery and instability.
At a ministerial conference on food security on June 29, 2022, UN Secretary-General Antonio Guterres warned that worsening food shortages could lead to a global “catastrophe”.
Malnutrition responsible for more ill health than any other cause
The increased risk of food and water shortages the world now faces will bring humanity to the edge. Hunger is a many-headed monster. For decades, conquering world hunger has been a political issue in a way that it could not have been in the past. The use of authoritarian political power has led to disastrous government policies, making it impossible for millions of people to earn a living. Chronic hunger and the recurrence of virulent famines must be seen as morally outrageous and politically unacceptable, wrote Drèze and Sen in Hunger and Public Action (1989).
“For those at the high end of the social ladder, ending hunger in the world would be a disaster. For those who need availability of cheap labor, hunger is the foundation of their wealth, it is an asset,” wrote Dr. George Kent in 2008 in the essay “The Benefits of World Hunger.”
Malnutrition is influenced not only by food and water shortages, but also by exposure to extreme stress, fear, insecurity of safety and food, social factors, chemicals, microplastics, toxins, and over-medicalization. No country in the world can afford to overlook this disaster in all its forms, which mostly affects children and women of reproductive age. Globally, more than 3 billion people cannot afford a healthy diet – contrary to the common belief that this is merely a low-income-country problem.
Even before the Covid-19 pandemic began, about 8% of the population in North America and Europe lacked regular access to nutritious and sufficient food. A third of reproductive-age women are anemic, while 39% of the world’s adults are overweight or obese. Each year around 20 million babies are born underweight. In 2016, 9.6% of women were underweight. Globally in 2017, 22.2% of children under the age of five were stunted, and undernutrition explains around 45% of deaths among children under five.
As stated by Lawrence Haddad, co-chair of the Global Nutrition Report’s independent Expert Group: “We now live in a world where being malnourished is the new normal. It is a world we must all claim as totally unacceptable.” Yet although malnutrition is the leading driver of disease – nutrition-related non-communicable diseases caused nearly 50% of deaths in 2014 – only $50 million in donor funding was given.
Malnutrition in all its forms imposes unacceptably high costs – direct and indirect – on individuals, families and nations. The impact on the global economy of the chronic undernourishment of 800 million people could be as high as $3.5 trillion per year, according to the 2018 Global Nutrition Report. Yet child deaths, premature adult mortality and malnutrition-related infectious and non-communicable diseases are preventable with the right nutrition.
The cost will be even higher at this precarious moment, as insurance companies have recently reported sharp increases in excess mortality and non-communicable diseases among people of working age.
Famines cause transgenerational effects
Famine is a widespread condition in which a large percentage of people in a country or region have little or no access to adequate food supplies. Europe and other developed parts of the world have mostly eliminated famine, though history records widespread famines that killed thousands or even millions of people: the Dutch potato famine of 1846–47, the Dutch Hunger Winter of 1944–45 and the Chinese famine of 1959–61.
The last of these was the most severe famine in both duration and number of people affected – some 600 million, with around 30 million deaths – and led to widespread undernutrition of the Chinese population between 1959 and 1961. Currently, famine is recognized in parts of Sub-Saharan Africa and in Yemen.
Unfortunately, global destabilization, starvation and mass migration are increasing fast with more famines to be expected if we do not act today.
Epidemiological studies by Barker, and later by Hales, showed a relationship between the availability of nutrition during the various stages of pregnancy and the first years of life and disease later in life. Their studies demonstrated that people with metabolic syndrome and cardiovascular disease were often small at birth. A growing body of research documents nutrition-related mechanisms that influence gene expression; even the period before pregnancy may influence the fetus’s later risk of insulin resistance or other complications.
As demonstrated in a study of 3,000 participants in northern China, prenatal exposure to famine significantly increased hyperglycemia in adulthood in two consecutive generations, and the severity of famine during prenatal development is related to the risk of type 2 diabetes. These findings are consistent with animal models showing that prenatal nutritional status drives neuro-endocrine changes that affect metabolism and can be transmitted physiologically across multiple generations, through both the male and the female line. Early-life health shocks can cause epigenetic changes in humans that persist throughout life, affect old-age mortality and have multigenerational effects. Depending on the trimester in which the fetus is exposed to food deprivation – or even to stress alone – the related disease later in life may range from schizophrenia and ADHD to renal failure and hypertension, among others. Other studies of famine exposure in people have produced evidence of changes in the endocrine system and in prenatal gene expression in reproductive systems.
The effects of famine or undernutrition have predominantly been seen in people on low incomes. Yet 1 in 3 people in the world suffered from some form of malnutrition in 2016, and women and children make up 70% of the hungry. There is no doubt that undernutrition has increased further over the past six years. Stunting and wasting have risen among the most vulnerable, and two out of three children are not fed the minimum diverse diet they need to grow and develop to their full potential.
The hungry in countries like Sri Lanka, Haiti, Armenia, and Panama are the tip of the iceberg, opening the eyes of many citizens worldwide to a fast-growing problem born of lockdowns, mandates and coercive climate policies, drought and the Ukraine war.
For years, citizens of the world have faced excess mortality, a rapid decline in fertility and childbirth – with a threat to women’s human rights – and more disease.
Shocking reports from the UN and WHO acknowledge that the health of people and of the environment is declining, and that the world is moving backwards on eliminating hunger and malnutrition. The real danger is that these numbers will climb even higher in the months ahead.
The truth is that food innovation hubs, vertical farming (“food flats”), artificial meat, and gene and mind manipulation will not be able to remedy the depressing state humanity is facing.
The zero-Covid policy has put humanity’s very existence at risk. Covid-19 vaccines carrying a risk of harm have been rolled out even for children under five, who are hardly at risk of severe disease, while undernourishment, which greatly increases susceptibility to major human infectious diseases, has gone unaddressed.
Conflicts are growing worldwide, increasing instability. Citizens will no longer accept policies without a clear harm-cost benefit analysis.
We need to act now to decrease food and fuel prices, by supporting farmers and effective food systems that deliver nutritious food to heal the most malnourished in the population: children and women of childbearing age.
Let us hope for a return of Hippocrates’ principle: “Let food be thy medicine and medicine be thy food.”
The Atacama salt flat in northern Chile, which covers 1,200 square miles, is the largest source of lithium in the world. We are standing on a bluff, looking over la gran fosa, the great pit that sits at the southern end of the flat, which is shielded from public view. It is where the major Chilean corporations have set up shop to extract lithium and export it—largely unprocessed—into the global market. “Do you know whose son-in-law is the lithium king of Chile?” asks Loreto, who took us to the salt flat to view these white sands from a vantage point. The answer is not so shocking: it is Julio Ponce Lerou, who is the largest stakeholder in the lithium mining company Sociedad Química y Minera de Chile (SQM) and the former son-in-law of the late military dictator Augusto Pinochet (who ruled Chile from 1973 to 1990).
SQM and Albemarle, the two major mining companies operating in Chile, dominate the Atacama salt flat. It is impossible to get a permit to visit the southern end of the flats, where the large corporations have set up their operations. The companies extract the lithium by pumping brine from beneath the salt flat and then letting it evaporate for months before carrying out the extraction. “SQM steals our water to extract lithium,” said the former president of the Council of Indigenous Peoples of Atacameño, Ana Ramos, in 2018, according to Deutsche Welle. The concentrate left behind after evaporation is turned into lithium carbonate and lithium hydroxide, which are then exported, and form key raw materials used in the production of lithium-ion batteries. About a third of the world’s lithium comes from Chile. According to Goldman Sachs, “lithium is the new gasoline.”
What Necessity Does
Ownership over the salt flat is contested among the state, Chile’s Indigenous communities, and private entities. But, as one member of the Lickanantay community—the Indigenous people who call the Atacama salt flat their home—told us, most of the owners of the land do not live in the area any longer. Juan, who raises horses and whose family were herders, tells us that people “live off the rents from the land. They do not care what happens to the area.” However, Juan knows that these rents are minuscule. “What they pay us as they mine our land is practically a tip,” he says. “It is nothing compared to what they earn. But it is still a lot of money.” For most Lickanantay people, Juan says, “lithium is not an issue because although it is known to damage the environment, it is providing [us with] money.” “Necessity drives people to do a lot of things,” he adds.
The negative environmental impacts of mining lithium have been widely studied by scientists and observed by tourist guides in Chile. Angelo, a guide, tells us that he worries about the water supplies getting polluted due to mining activities and the impact this has on the Atacama Desert animals, including the pink flamingos. “Every once in a while, we see a dead pink flamingo,” he says. Cristina Inés Dorador, who participated in writing Chile’s new proposed constitution, is a scientist with a PhD in natural sciences who has published about the decline of the pink flamingo population in the salt flat. However, Dorador has also said that new technologies could be used to prevent the widespread negative environmental impact. Ingrid Garcés Millas, who has a PhD in earth sciences from the University of Zaragoza and is a researcher at the University of Antofagasta, pointed out in an article for Le Monde Diplomatique that the currently used method of lithium extraction has led to the deterioration of the “ways of life of [the] Andean peoples.” An example she provided was that while the underground water supply is used by the lithium industry, the “communities are supplied [with water] by cistern trucks.”
According to a report by MiningWatch Canada and the Environmental Justice Atlas, “to produce one ton of lithium in the salt flats in Atacama (Chile), 2,000 tons of water are evaporated, causing significant harm to both the availability of water and the quality of underground fresh water reserves.”
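The scale implied by that ratio can be made concrete with a back-of-the-envelope calculation. The sketch below uses the report’s figure of 2,000 tons of water per ton of lithium; the annual production number is a hypothetical round figure chosen for illustration, not a sourced statistic.

```python
# Water evaporated per ton of lithium, per the MiningWatch Canada /
# Environmental Justice Atlas report cited above.
WATER_TONS_PER_LITHIUM_TON = 2_000

def water_evaporated(lithium_tons: float) -> float:
    """Return the tons of water evaporated to produce the given lithium output."""
    return lithium_tons * WATER_TONS_PER_LITHIUM_TON

# Hypothetical example: 100,000 tons of lithium in a year would imply
# 200 million tons of water evaporated from beneath the salt flat.
print(water_evaporated(100_000))
```

At that ratio, even modest increases in lithium output translate into enormous additional draws on the desert’s underground fresh water.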
Meanwhile, there is no pressing debate in the Atacama region over the extraction of lithium. Most people seem to have accepted that lithium mining is here to stay. Among the activists, there are disagreements over how to approach the question of lithium. More radical activists believe that lithium should not be extracted, while others debate about who should benefit from the wealth generated by the mining of lithium. Still others, such as Angelo and Loreto, believe that Chile’s willingness to export the unprocessed lithium denies the country the possibility of exploring the benefits that might come from processing the metal within the country.
Before the presidential election in Chile in November 2021, we went to see Giorgio Jackson, now one of the closest advisers to Chile’s President Gabriel Boric. He told us then that Chile’s new government would look at the possibility of the nationalization of key resources, such as copper and lithium. This no longer seems to be on the government’s agenda, despite the expectation that the high prices for copper and lithium would pay for the much-needed pension reforms and the modernization of the country’s infrastructure.
The idea of nationalization was floated around the constitutional convention but did not find its way into the text of the proposed constitution, which will be put to vote on September 4. Instead, the proposed constitution builds on Article 19 of the 1980 constitution, which provides for “the right to live in an environment free from contamination.” The new constitution is expected to lay out the natural commons under which the state “has a special duty of custody, in order to ensure the rights of nature and the interest of present and future generations.”
In the waning days of the government of former President Sebastián Piñera, Chile’s Mining Ministry awarded two companies—BYD Chile SpA and Servicios y Operaciones Mineras del Norte S.A.—extraction rights for 80,000 tons of lithium each for 20 years. An appeals court in Copiapó heard a petition from the governor of Copiapó, Miguel Vargas, and from various Indigenous communities. In January 2022, the court suspended the deal; that suspension was upheld in June by the Supreme Court. This does not imply that Chile will roll back the exploitation of lithium by the major corporations, but it does suggest that a new appetite is developing against the widespread exploitation of natural resources in the country.
Until 2016, Chile produced 37 percent of the global market share of lithium, making it the world’s largest producer of the metal. When Chile’s government increased royalty rates on the miners, several of them curtailed production and some increased their stake in Argentina (SQM, for instance, entered a joint venture with Lithium Americas Corporation to work on a project in Argentina). Chile now trails Australia in world lithium production, its share having fallen from 37 percent in 2016 to 29 percent in 2019 (with an expectation that it will fall further to 17 percent by 2030).
Juan’s observation that “necessity drives people to do a lot of things” captures the mood among the Atacameños. The needs of the people of the region seem to only come after the needs of the large corporations. Relatives of the old dictators accumulate wealth off of the land, while the owners of the land—out of necessity—sell their land for a propina, a tip.
This article was produced by Globetrotter.
Paper presented on July 11, 2022 to The Ninth South-South Forum on Sustainability.
The greatest challenge facing societies has always been how to conduct trade and credit without letting merchants and creditors make money by exploiting their customers and debtors. All antiquity recognized that the drive to acquire money is addictive and indeed tends to be exploitative and hence socially injurious. The moral values of most societies opposed selfishness, above all in the form of avarice and wealth addiction, which the Greeks called philarguria – love of money, silver-mania. Individuals and families indulging in conspicuous consumption tended to be ostracized, because it was recognized that wealth often was obtained at the expense of others, especially the weak.
The Greek concept of hubris involved egotistic behavior causing injury to others. Avarice and greed were to be punished by the justice goddess Nemesis, who had many Near Eastern antecedents, such as Nanshe of Lagash in Sumer, protecting the weak against the powerful, the debtor against the creditor.
That protection is what rulers were expected to provide in serving the gods. That is why rulers were imbued with enough power to protect the population from being reduced to debt dependency and clientage. Chieftains, kings and temples were in charge of allocating credit and crop-land to enable smallholders to serve in the army and provide corvée labor. Rulers who behaved selfishly were liable to be unseated, or their subjects might run away, or support rebel leaders or foreign attackers promising to cancel debts and redistribute land more equitably.
The most basic function of Near Eastern kingship was to proclaim “economic order” – misharum and andurarum, clean slate debt cancellations, echoed in Judaism’s Jubilee Year. There was no “democracy” in the sense of citizens electing their leaders and administrators, but “divine kingship” was obliged to achieve the implicit economic aim of democracy: “protecting the weak from the powerful.”
Royal power was backed by temples and ethical or religious systems. The major religions that emerged in the mid-first millennium BC, those of Buddha, Lao-Tzu and Zoroaster, held that personal drives should be subordinate to the promotion of overall welfare and mutual aid.
What did not seem likely 2500 years ago was that a warlord aristocracy would conquer the Western world. In creating what became the Roman Empire, an oligarchy took control of the land and, in due course, the political system. It abolished royal or civic authority, shifted the fiscal burden onto the lower classes, and ran the population and industry into debt.
This was done on a purely opportunistic basis. There was no attempt to defend this ideologically. There was no hint of an archaic Milton Friedman emerging to popularize a radical new moral order celebrating avarice by claiming that greed is what drives economies forward, not backward, convincing society to leave the distribution of land and money to “the market” controlled by private corporations and money-lenders instead of communalistic regulation by palace rulers and temples – or by extension, today’s socialism. Palaces, temples and civic governments were creditors. They were not forced to borrow to function, and so were not subjected to the policy demands of a private creditor class.
But running the population, industry and even governments into debt to an oligarchic elite is precisely what has occurred in the West, which is now trying to impose the modern variant of this debt-based economic regime – U.S.-centered neoliberal finance capitalism – on the entire world. That is what today’s New Cold War is all about.
By the traditional morality of early societies, the West – starting in classical Greece and Italy around the 8th century BC – was barbarian. The West was indeed on the periphery of the ancient world when Syrian and Phoenician traders brought the idea of interest-bearing debt from the Near East to societies that had no royal tradition of periodic debt cancellations. The absence of a strong palace power and temple administration enabled creditor oligarchies to emerge throughout the Mediterranean world.
Greece ended up being conquered first by oligarchic Sparta, then by Macedonia and finally by Rome. It is the latter’s avaricious pro-creditor legal system that has shaped subsequent Western civilization. Today, a financialized system of oligarchic control whose roots lead back to Rome is being supported and indeed imposed by U.S. New Cold War diplomacy, military force and economic sanctions on countries seeking to resist it.
Classical antiquity’s oligarchic takeover
In order to understand how Western Civilization developed in a way that contained the fatal seeds of its own economic polarization, decline and fall, it is necessary to recognize that when classical Greece and Rome appear in the historical record a Dark Age had disrupted economic life from the Near East to the eastern Mediterranean from 1200 to about 750 BC. Climate change apparently caused severe depopulation, ending Greece’s Linear B palace economies, and life reverted to the local level during this period.
Some families created mafia-like autocracies by monopolizing the land and tying labor to it by various forms of coercive clientage and debt. Above all was the problem of interest-bearing debt that the Near Eastern traders had brought to the Aegean and Mediterranean lands – without the corresponding check of royal debt cancellations.
Out of this situation Greek reformer-“tyrants” arose in the 7th and 6th centuries BC from Sparta to Corinth, Athens and Greek islands. The Cypselid dynasty in Corinth and similar new leaders in other cities are reported to have cancelled the debts that held clients in bondage on the land, redistributed this land to the citizenry, and undertaken public infrastructure spending to build up commerce, opening the way for civic development and the rudiments of democracy. Sparta enacted austere “Lycurgan” reforms against conspicuous consumption and luxury. The poetry of Archilochus on the island of Paros and Solon of Athens denounced the drive for personal wealth as addictive, leading to hubris injuring others – to be punished by the justice goddess Nemesis. The spirit was similar to Babylonian, Judaic and other moral religions.
Rome had a legendary seven kings (753-509 BC), who are said to have attracted immigrants and prevented an oligarchy from exploiting them. But wealthy families overthrew the last king. There was no religious leader to check their power, as the leading aristocratic families controlled the priesthood. There were no leaders who combined domestic economic reform with a religious school, and there was no Western tradition of debt cancellations such as Jesus would advocate in trying to restore the Jubilee Year to Judaic practice. There were many Stoic philosophers, and religious amphictyonic sites such as Delphi and Delos expressed a religion of personal morality to avoid hubris.
Rome’s aristocrats created an anti-democratic constitution and Senate, and laws that made debt bondage – and the consequent loss of land – irreversible. Although the “politically correct” ethic was to avoid engaging in commerce and moneylending, this ethic did not prevent an oligarchy from emerging to take over the land and reduce much of the population to bondage. By the 2nd century BC Rome conquered the entire Mediterranean region and Asia Minor, and the largest corporations were the publican tax collectors, who are reported to have looted Rome’s provinces.
There always have been ways for the wealthy to act sanctimoniously in harmony with altruistic ethics eschewing commercial greed while enriching themselves. Western antiquity’s wealthy were able to come to terms with such ethics by avoiding direct lending and trading themselves, assigning this “dirty work” to their slaves or freemen, and by spending the revenue from such activities on conspicuous philanthropy (which became an expected show in Rome’s election campaigns). And after Christianity became the Roman religion in the 4th century AD, money was able to buy absolution by suitably generous donations to the Church.
Rome’s legacy and the West’s financial imperialism
What distinguishes Western economies from earlier Near Eastern and most Asian societies is the absence of debt relief to restore economy-wide balance. Every Western nation has inherited from Rome the pro-creditor sanctity of debt principles that prioritize the claims of creditors and legitimize the permanent transfer to creditors of the property of defaulting debtors. From ancient Rome to Habsburg Spain, imperial Britain and the United States, Western oligarchies have appropriated the income and land of debtors, while shifting taxes off themselves onto labor and industry. This has caused domestic austerity and led oligarchies to seek prosperity through foreign conquest, to gain from foreigners what is not being produced by domestic economies driven into debt and subject to pro-creditor legal principles transferring land and other property to a rentier class.
Spain in the 16th century looted vast shiploads of silver and gold from the New World, but this wealth flowed through its hands, dissipated on war instead of being invested in domestic industry. Left with a steeply unequal and polarized economy deeply in debt, the Habsburgs lost their former possession, the Dutch Republic, which thrived as the less oligarchic society and one deriving more power as a creditor than as a debtor.
Britain followed a similar rise and fall. World War I left it with heavy arms debts owed to its own former colony, the United States. Imposing anti-labor austerity at home in seeking to pay these debts, Britain’s sterling area subsequently became a satellite of the U.S. dollar under the terms of American Lend-Lease in World War II and the 1946 British Loan. The neoliberal policies of Margaret Thatcher and Tony Blair sharply increased the cost of living by privatizing and monopolizing public housing and infrastructure, wiping out Britain’s former industrial competitiveness by raising the cost of living and hence wage levels.
The United States has followed a similar trajectory of imperial overreaching at the cost of its domestic economy. Its overseas military spending from 1950 onwards forced the dollar off gold in 1971. That shift had the unanticipated benefit of ushering in a “dollar standard” that has enabled the U.S. economy and its military diplomacy to get a free ride from the rest of the world, by running up dollar debt to other nations’ central banks without any practical constraint.
The financial colonization of the post-Soviet Union in the 1990s by the “shock therapy” of privatization giveaways, followed by China’s admission to the World Trade Organization in 2001 – with the expectation that China would, like Yeltsin’s Russia, become a U.S. financial colony – led America’s economy to deindustrialize by shifting employment to Asia. Trying to force submission to U.S. control by inaugurating today’s New Cold War has led Russia, China and other countries to break away from the dollarized trade and investment system, leaving the United States and NATO Europe to suffer austerity and deepening wealth inequality as debt ratios are soaring for individuals, corporations and government bodies.
It was only a decade ago that Senator John McCain and President Barack Obama characterized Russia as merely a gas station with atom bombs. That could now just as well be said of the United States, basing its world economic power on control of the West’s oil trade, while its main export surpluses are agricultural crops and arms. The combination of financial debt leveraging and privatization has made America a high-cost economy, losing its former industrial leadership, much like Britain did. The United States is now attempting to live mainly off financial gains (interest, profits on foreign investment and central bank credit creation to inflate capital gains) instead of creating wealth through its own labor and industry. Its Western allies seek to do the same. They euphemize this U.S.-dominated system as “globalization,” but it is simply a financial form of colonialism – backed with the usual military threat of force and covert “regime change” to prevent countries from withdrawing from the system.
This U.S. and NATO-based imperial system seeks to indebt weaker countries and force them to turn control over their policies to the International Monetary Fund and World Bank. Obeying the neoliberal anti-labor “advice” of these institutions leads to a debt crisis that forces the debtor country’s foreign-exchange rate to depreciate. The IMF then “rescues” them from insolvency on the “conditionality” that they sell off the public domain and shift taxes off the wealthy (especially foreign investors) onto labor.
Oligarchy and debt are the defining characteristics of Western economies. America’s foreign military spending and nearly constant wars have left its own Treasury deeply indebted to foreign governments and their central banks. The United States is thus following the same path by which Spain’s imperialism left the Habsburg dynasty in debt to European bankers, and Britain’s participation in two world wars in hope of maintaining its dominant world position left it in debt and ended its former industrial advantage. America’s rising foreign debt has been sustained by its “key currency” privilege of issuing its own dollar-debt under the “dollar standard” without other countries having any reasonable expectation of ever being paid – except in yet more “paper dollars.”
This monetary affluence has enabled Wall Street’s managerial elite to increase America’s rentier overhead by financialization and privatization, increasing the cost of living and doing business, much as occurred in Britain under the neoliberal policies of Margaret Thatcher and Tony Blair. Industrial companies have responded by shifting their factories to low-wage economies to maximize profits. But as America deindustrializes with rising import dependency on Asia, U.S. diplomacy is pursuing a New Cold War that is driving the world’s most productive economies to decouple from the U.S. economic orbit.
Rising debt destroys economies when it is not being used to finance new capital investment in means of production. Most Western credit today is created to inflate stock, bond and real estate prices, not to restore industrial capability. As a result of this debt-without-production approach, the U.S. domestic economy has been overwhelmed by debt owed to its own financial oligarchy. Despite the free lunch the U.S. economy enjoys in the form of the continued run-up of its official debt to foreign central banks – with no visible prospect of either its international or domestic debt being paid – its debt continues to expand and the economy has become even more debt-leveraged. America has polarized, with extreme wealth concentrated at the top while most of the economy is driven deeply into debt.
The failure of oligarchic democracies to protect the indebted population at large
What has made the Western economies oligarchic is their failure to protect the citizenry from being driven into dependency on a creditor property-owning class. These economies have retained Rome’s creditor-based laws of debt, most notably the priority of creditor claims over the property of debtors. The creditor One Percent has become a politically powerful oligarchy despite nominal democratic political reforms expanding voting rights. Government regulatory agencies have been captured and taxing power has been made regressive, leaving economic control and planning in the hands of a rentier elite.
Rome never was a democracy. And in any case, Aristotle recognized democracies as evolving more or less naturally into oligarchies – which claim to be democratic for public-relations purposes while pretending that their increasingly top-heavy concentration of wealth is all for the best. Today’s trickle-down rhetoric depicts banks and financial managers as steering savings in the most efficient way to produce prosperity for the entire economy, not just for themselves.
President Biden and his State Department neoliberals accuse China and any other country seeking to maintain its economic independence and self-reliance of being “autocratic.” Their rhetorical sleight of hand juxtaposes democracy to autocracy. What they call “autocracy” is a government strong enough to prevent a Western-oriented financial oligarchy from indebting the population to itself – and then prying away its land and other property into its own hands and those of its American and other foreign backers.
The Orwellian Doublethink of calling oligarchies “democracies” is followed by defining a free market as one that is free for financial rent-seeking. U.S.-backed diplomacy has indebted countries, forcing them to sell control of their public infrastructure and turn their economy’s “commanding heights” into opportunities to extract monopoly rent.
This autocracy vs. democracy rhetoric is similar to the rhetoric that Greek and Roman oligarchies used when they accused democratic reformers of seeking “tyranny” (in Greece) or “kingship” (in Rome). It was the Greek “tyrants” who overthrew mafia-like autocracies in the 7th and 6th centuries BC, paving the way for the economic and proto-democratic takeoffs of Sparta, Corinth and Athens. And it was Rome’s kings who built up their city-state by offering self-support land tenure for citizens. That policy attracted immigrants from neighboring Italian city-states whose populations were being forced into debt bondage.
The problem is that Western democracies have not proved adept at preventing oligarchies from emerging and polarizing the distribution of income and wealth. Ever since Rome, oligarchic “democracies” have not protected their citizens from creditors seeking to appropriate land, its rental yield and the public domain for themselves.
If we ask just who today is enacting and enforcing policies that seek to check oligarchy in order to protect the livelihood of citizens, the answer is that this is done by socialist states. Only a strong state has the power to check a financial and rent-seeking oligarchy. The Chinese embassy in America demonstrated this in its reply to President Biden’s description of China as an autocracy:
Clinging to a Cold War mentality and the hegemon’s logic, the US pursues bloc politics, concocts the “democracy versus authoritarianism” narrative … and ramps up bilateral military alliances, in a clear attempt at countering China.
Guided by a people-centered philosophy, since the day when it was founded … the Party has been working tirelessly for the interest of the people, and has dedicated itself to realizing people’s aspirations for a better life. China has been advancing whole-process people’s democracy, promoting legal safeguard for human rights, and upholding social equity and justice. The Chinese people now enjoy fuller and more extensive and comprehensive democratic rights.
Nearly all early non-Western societies had protections against the emergence of mercantile and rentier oligarchies. That is why it is so important to recognize that what has become Western civilization represents a break from the Near East, South and East Asia. Each of these regions had its own system of public administration to save its social balance from commercial and monetary wealth that threatened to destroy economic balance if left unchecked. But the West’s economic character was shaped by rentier oligarchies. Rome’s Republic enriched its oligarchy by stripping the wealth of the regions it conquered, leaving them impoverished. That remains the extractive strategy of subsequent European colonialism and, most recently, U.S.-centered neoliberal globalization. The aim always has been to “free” oligarchies from constraints on their self-seeking.
The great question is, “freedom” and “liberty” for whom? Classical political economy defined a free market as one free from unearned income, headed by land rent and other natural-resource rent, monopoly rent, financial interest and related creditor privileges. But by the end of the 19th century the rentier oligarchy sponsored a fiscal and ideological counter-revolution, re-defining a free market as one free for rentiers to extract economic rent – unearned income.
This rejection of the classical critique of rentier income has been accompanied by re-defining “democracy” to require having a “free market” of the anti-classical oligarchic rentier variety. Instead of the government being the economic regulator in the public interest, public regulation of credit and monopolies is dismantled. That lets companies charge whatever they want for the credit they supply and the products they sell. Privatizing the privilege of creating credit-money lets the financial sector take over the role of allocating property ownership.
The result has been to centralize economic planning in Wall Street, the City of London, the Paris Bourse and other imperial financial centers. That is what today’s New Cold War is all about: protecting this system of U.S.-centered neoliberal financial capitalism, by wrecking or isolating the alternative systems of China, Russia and their allies, while seeking to further financialize the former colonialist system sponsoring creditor power instead of protecting debtors, imposing debt-ridden austerity instead of growth, and making the loss of property through foreclosure or forced sale irreversible.
Is Western civilization a long detour from where antiquity seemed to be headed?
What is so important about Rome’s economic polarization and collapse, which resulted from the dynamics of interest-bearing debt in the rapacious hands of its creditor class, is how radically its oligarchic pro-creditor legal system differed from the laws of earlier societies that checked creditors and the proliferation of debt. The rise of a creditor oligarchy that used its wealth to monopolize the land and take over the government and courts (not hesitating to use force and targeted political assassination against would-be reformers) had been prevented for thousands of years throughout the Near East and other Asian lands. But the Aegean and Mediterranean periphery lacked the economic checks and balances that had provided resilience elsewhere in the Near East. What has distinguished the West from the outset has been its lack of a government strong enough to check the emergence and dominance of a creditor oligarchy.
All ancient economies operated on credit, running up crop debts during the agricultural year. Warfare, droughts or floods, disease and other disruptions often prevented the accrual of debts from being paid. But Near Eastern rulers cancelled debts under these conditions. That saved their citizen-soldiers and corvée-workers from losing their self-support land to creditors, who were recognized as being a potential rival power to the palace. By the mid-first millennium BC debt bondage had shrunk to only a marginal phenomenon in Babylonia, Persia and other Near Eastern realms. But Greece and Rome were in the midst of a half-millennium of popular revolts demanding debt cancellation and liberty from debt bondage and loss of self-support land.
It was only Roman kings and Greek tyrants who, for a while, were able to protect their subjects from debt bondage. But they ultimately lost to warlord creditor oligarchies. The lesson of history is thus that a strong government regulatory power is required to prevent oligarchies from emerging and using creditor claims and land grabbing to turn the citizenry into debtors, renters, clients and ultimately serfs.
The rise of creditor control over modern governments
Palaces and temples throughout the ancient world were creditors. Only in the West did a private creditor class emerge. A millennium after the fall of Rome, a new banking class obliged medieval kingdoms to run into debt. International banking families used their creditor power to gain control of public monopolies and natural resources, much as creditors had gained control of individual landholdings in classical antiquity.
World War I saw the Western economies reach an unprecedented crisis as a result of Inter-Ally debts and German reparations. Trade broke down and the Western economies fell into depression. What pulled them out was World War II, and this time no reparations were imposed after the war ended. In place of war debts, England was simply obliged to open up its Sterling Area to U.S. exporters and to refrain from reviving its industrial markets by devaluing sterling, under the terms of Lend-Lease and the 1946 British Loan as noted above.
The West emerged from World War II relatively free of private debt – and thoroughly under U.S. dominance. But since 1945 the volume of debt has expanded exponentially, reaching crisis proportions in 2008 as the junk-mortgage bubble, massive bank fraud and financial debt pyramiding exploded, overburdening the U.S. as well as the European and Global South economies.
The U.S. Federal Reserve Bank monetized $8 trillion to save the financial elite’s holdings of stocks, bonds and packaged real estate mortgages instead of rescuing the victims of junk mortgages and over-indebted foreign countries. The European Central Bank did much the same thing to save the wealthiest Europeans from losing the market value of their financial wealth.
But it was too late to save the U.S. and European economies. The long post-1945 debt buildup has run its course. The U.S. economy has been deindustrialized, its infrastructure is collapsing and its population is so deeply indebted that little disposable income is left to support living standards. Much as occurred with Rome’s Empire, the American response is to try to maintain the prosperity of its own financial elite by exploiting foreign countries. That is the aim of today’s New Cold War diplomacy. It involves extracting economic tribute by pushing foreign economies further into dollarized debt, to be paid by imposing depression and austerity on themselves.
This subjugation is depicted by mainstream economists as a law of nature and hence as an inevitable form of equilibrium, in which each nation’s economy receives “what it is worth.” Today’s mainstream economic models are based on the unrealistic assumption that all debts can be paid, without polarizing income and wealth. All economic problems are assumed to be self-curing by “the magic of the marketplace,” without any need for civic authority to intervene. Government regulation is deemed inefficient and ineffective, and hence unnecessary. That leaves creditors, land-grabbers and privatizers with a free hand to deprive others of their freedom. This is depicted as the ultimate destiny of today’s globalization, and of history itself.
The end of history? Or just of the West’s financialization and privatization?
The neoliberal pretense is that privatizing the public domain and letting the financial sector take over economic and social planning in targeted countries will bring mutually beneficial prosperity. That is supposed to make foreign submission to the U.S.-centered world order voluntary. But the actual effect of neoliberal policy has been to polarize Global South economies and subject them to debt-ridden austerity.
American neoliberalism claims that America’s privatization, financialization and shift of economic planning from government to Wall Street and other financial centers is the result of a Darwinian victory achieving such perfection that it is “the end of history.” It is as if the rest of the world has no alternative but to accept U.S. control of the global (that is, neo-colonial) financial system, trade and social organization. And just to make sure, U.S. diplomacy seeks to back its financial and diplomatic control by military force.
The irony is that U.S. diplomacy itself has helped accelerate an international response to neoliberalism by driving together governments strong enough to pick up the long trend of history in which public authority is empowered to prevent corrosive oligarchic dynamics from derailing the progress of civilization.
The 21st century began with American neoliberals imagining that their debt-leveraged financialization and privatization would cap the long upsweep of human history as the legacy of classical Greece and Rome. The neoliberal view of ancient history echoes that of antiquity’s oligarchies, denigrating Rome’s kings and Greece’s reformer-tyrants as threatening too strong a public intervention when they aimed at keeping citizens free of debt bondage and securing self-support land tenure. What is viewed as the decisive takeoff point is the oligarchy’s “security of contracts” giving creditors the right to expropriate debtors. This indeed has remained a defining characteristic of Western legal systems for the past two thousand years.
A real end of history would mean that reform stops in every country. That dream seemed close when U.S. neoliberals were given a free hand to reshape Russia and other post-Soviet states after the Soviet Union dissolved itself in 1991, starting with shock therapy that privatized natural resources and other public assets into the hands of Western-oriented kleptocrats, who registered public wealth in their own names – and cashed out by selling their takings to U.S. and other Western investors.
The end of the Soviet Union’s history was supposed to consolidate America’s End of History by showing how futile it would be for nations to try to create an alternative economic order based on public control of money and banking, public health, free education and other subsidies of basic needs, free from debt financing. China’s admission into the World Trade Organization in 2001 was viewed as confirming Margaret Thatcher’s claim that “There Is No Alternative” (TINA) to the new neoliberal order sponsored by U.S. diplomacy.
There is an economic alternative, of course. Looking over the sweep of ancient history, we can see that the main objective of ancient rulers from Babylonia to South Asia and East Asia was to prevent a mercantile and creditor oligarchy from reducing the population at large to clientage, debt bondage and serfdom. If the non-U.S. Eurasian world now follows this basic aim, it would be restoring the flow of history to its pre-Western course. That would not be the end of history, but it would return to the non-Western world’s basic ideals of economic balance, justice and equity.
Today, China, India, Iran and other Eurasian economies have taken the first step as a precondition for a multipolar world, by rejecting America’s insistence that they join the U.S. trade and financial sanctions against Russia. These countries realize that if the United States could destroy Russia’s economy and replace its government with U.S.-oriented Yeltsin-like proxies, the remaining countries of Eurasia would be next in line.
The only possible way for history really to end would be for the American military to destroy every nation seeking an alternative to neoliberal privatization and financialization. U.S. diplomacy insists that history must not take any path that would not culminate in its own financial empire ruling through client oligarchies. American diplomats hope that their military threats and support of proxy armies will force other countries to submit to neoliberal demands – to avoid being bombed, or suffering “color revolutions,” political assassinations and army takeovers, Pinochet-style. But the only real way to bring history to an end is by atomic war to end human life on this planet.
The New Cold War is dividing the world into two contrasting economic systems
NATO’s proxy war in Ukraine against Russia is the catalyst fracturing the world into two opposing spheres with incompatible economic philosophies. China, the country growing most rapidly, treats money and credit as a public utility allocated by government, instead of letting banks privatize the monopoly privilege of credit creation and displace government as economic and social planner. That monetary independence, relying on its own domestic money creation instead of borrowing U.S. electronic dollars, and denominating foreign trade and investment in its own currency instead of in dollars, is seen as an existential threat to America’s control of the global economy.
U.S. neoliberal doctrine calls for history to end by “freeing” the wealthy classes from a government strong enough to prevent the polarization of wealth, and ultimate decline and fall. Imposing trade and financial sanctions against Russia, Iran, Venezuela and other countries that resist U.S. diplomacy, and ultimately military confrontation, is how America intends to “spread democracy” by NATO from Ukraine to the China Seas.
The West, in its U.S. neoliberal iteration, seems to be repeating the pattern of Rome’s decline and fall. Concentrating wealth in the hands of the One Percent has always been the trajectory of Western civilization. It is a result of classical antiquity having taken a wrong track when Greece and Rome allowed the inexorable growth of debt, leading to the expropriation of much of the citizenry and reducing it to bondage to a land-owning creditor oligarchy. That is the dynamic built into the DNA of what is called the West and its “security of contracts” without any government oversight in the public interest. By stripping away prosperity at home, this dynamic requires a constant reaching out to extract an economic affluence (literally a “flowing in”) at the expense of colonies or debtor countries.
The United States, through its New Cold War, is aiming at securing precisely such economic tribute from other countries. The coming conflict may last perhaps twenty years and will determine what kind of political and economic system the world will have. At issue is more than just U.S. hegemony and its dollarized control of international finance and money creation. Politically at issue is the idea of “democracy,” which has become a euphemism for an aggressive financial oligarchy seeking to impose itself globally through predatory financial, economic and political control backed by military force.
As I have sought to emphasize, oligarchic control of government has been a major distinguishing feature of Western civilization ever since classical antiquity. And the key to this control has been opposition to strong government – that is, civil government strong enough to prevent a creditor oligarchy from emerging and monopolizing control of land and wealth, making itself into a hereditary aristocracy, a rentier class living off land rents, interest and monopoly privileges that reduce the population at large to austerity.
The unipolar U.S.-centered order hoping to “end history” reflected a basic economic and political dynamic that has been a characteristic of Western civilization ever since classical Greece and Rome set off along a different track from the Near Eastern matrix in the first millennium BC.
To save themselves from being swept into the whirlpool of economic destruction now engulfing the West, countries in the world’s rapidly growing Eurasian core are developing new economic institutions based on an alternative social and economic philosophy. With China being the largest and fastest growing economy in the region, its socialist policies are likely to be influential in shaping this emerging non-Western financial and trading system.
Instead of the West’s privatization of basic economic infrastructure to create private fortunes through monopoly rent extraction, China keeps this infrastructure in public hands. Its great advantage over the West is that it treats money and credit as a public utility, to be allocated by government instead of letting private banks create credit, piling up debt without expanding production or raising living standards. China is also keeping health and education, transportation and communications in public hands, to be provided as basic human rights.
China’s socialist policy is in many ways a return to basic ideas of resilience that characterized most civilizations before classical Greece and Rome. It has created a state strong enough to resist the emergence of a financial oligarchy gaining control of the land and rent-yielding assets. In contrast, today’s Western economies are repeating precisely the oligarchic drive that polarized and destroyed the economies of classical Greece and Rome, with the United States serving as the modern analogue for Rome.