
Colonial bourgeoisie and Sinhala cultural revival

[Image: The Birth of Prince Siddhartha Gautama]

By Uditha Devapriya

The colonial bourgeoisie in Sri Lanka did not form a monolithic class. They were divided horizontally as well as vertically: horizontally on the basis of income and inheritance, and vertically on the basis of primordial attachments, such as caste ideology. Various factors, mainly economic, conspired as much to unify the bourgeoisie as to divide them, so that the class was marked as much by its homogeneity as by its heterogeneity.

Sri Lanka’s transition to a plantation economy took place under British rule (1796-1948). While it is not quite accurate to say that prior to British rule the country, especially parts of the Kandyan kingdom, remained cut off from monetary exchange (a thesis questioned by S. B. D. de Silva in his work on colonial underdevelopment), the British sped up the consolidation of a plantation colony dominated by import-export trade. The creation of a new economy facilitated the formation of a new elite, which found ways of building up wealth and prestige from road toll and arrack rents, plantation profits, investments in urban property, and entry into the civil service and the professions.

This bourgeoisie differed in degree and substance from the traditional elite that hailed from the apex of the social hierarchy in the Kandyan kingdom. A two-way process followed: while the bourgeoisie gained wealth and prestige over the traditional elite, the latter either found themselves reduced to a semi-dependent elite, or adapted to a changing world.

While differences between these two elites had become pronounced by the middle of the 19th century, by the time of the Buddhist revival they were fading away. The bourgeoisie, for their part, did not completely reject the customs and habits of the old elite, as witnessed by nouveau riche govigama families marrying into the Kandyan aristocracy.

Given the fine distinctions which cropped up within the bourgeoisie as it grew and evolved in the 19th century, the Buddhist revival proceeded in spurts and stages rather than in one giant leap. The question of which class gave an impetus to the revival, then, is linked to the question of which class interests prevailed in its unfolding.

Different scholars have approached these issues from different, if broadly similar, vantage points. Thus Gananath Obeyesekere ascribes the revival to the dissemination of “Protestant Buddhist” values among the Sinhala bourgeoisie, Kumari Jayawardena to the ideology of the Sinhala petty bourgeoisie, and Michael Roberts to the adoption of Western notions of nationalism and forms of propaganda. These are important perspectives, and they shed light on the role of class interests in the unfolding of nationalist revivals in colonial society. Yet different as they may be, they are all premised on assumptions of one milieu’s (the petty bourgeoisie’s) dependence on a dominating elite (the comprador bourgeoisie), and of that dominating elite’s dependence on a colonial economic framework.

For perfectly plausible reasons, these hypotheses deny ideological autonomy to both dependent and dominating classes. Thus Kumari Jayawardena distinguishes between the plantation bourgeoisie and the semi-industrial bourgeoisie, in their responses to the revival, on the basis of how their methods of acquiring wealth related to colonial economic constraints: elite families subscribe to a conservative reading of Buddhism moulded by their ties to plantation capital, while Anagarika Dharmapala, whose family was involved in industries “not totally dependent on colonial patronage”, espouses a more “reformist” reading in keeping with a radical approach to politics.

Simply put, to the extent that the bourgeoisie was locked into an economy dominated by colonial interests, it viewed the revival as an expression of its own ideologies. The use of the plural is instructive here, in that the bourgeoisie, as Roberts notes, did not share a unifying ideology, and were in fact “more differentiated” than the traditional elites.

This interpretation of the revival helps us glean the intricate links between the economic base of colonial society and the ideological superstructure of revivalist movements, avoiding the pitfalls of rationalising such movements on purely cultural grounds, as nationalists are wont to do. It also presents colonial history as a series of successive periods in which one set of class ideologies prevailed over others: a plantation bourgeois ideology at the tail-end of the 19th century, and a petty bourgeois one at the turn of the 20th.

Yet, despite the validity of these perspectives, they omit three factors pertinent to the triad of colonialism, cultural modernity, and nationalist revival: first, ideological agency on the part of the contending milieus (intra-class, between sections of the elite, and inter-class, between different elites); second, the contribution of “unrepresented” classes, most prominently the working class and peasantry, to that triad; and third, the part played by different artists and art forms in the revival and its unfolding in the 20th century.

The latter point merits much consideration. In his study of the evolution of Sinhala music in the early 20th century, Garrett Field observes that composers and playwrights were as moved by monetary considerations as by cultural ones. In Jayawardena’s view, artistes like Charles Dias and John de Silva “nibbled” at colonial rule, critiquing the decay of cultural values while paradoxically presenting a colonial reinterpretation of local history.

A good example is de Silva’s Sri Wickrema. While lamenting the loss of the Kandyan kingdom to the British, it presents the last king of Kandy as a rapacious tyrant and a drunken laggard: ironically, in line with propaganda about the monarch disseminated by colonial officials, in particular the Orientalist agent John D’Oyly.

What is pertinent here is that the stunted ideology of nationalist elites found expression in the stunted ideology of the objets d’art they exhibited, and that this ideology prevented these art forms from undergoing a modernist revolution which could question colonial rule without subscribing to a colonial reconstruction of culture. I posit three reasons why the nurthi plays of John de Silva, among other objets, failed to make that important leap: their mass appeal, the high levels of capital investment they required, and the conflicting attitudes of their patrons, some of whom hailed from the bourgeoisie, towards colonial rule.

At the turn of the 20th century, with the bifurcation of nationalism into radical politics and cultural revival, it was possible for patrons of these arts to decry a lost heritage (Sinhala and Arya) while adhering to colonial conceptions of history. As Roberts puts it,

“The cultural awakening and the recoil against the Western world, then, took many forms. It was influenced and permeated by romanticism, populism, indigenism, and anti-Western sentiments. Its conceptual forms were more traditionalist than tradition; and more revivalist than traditionalist. It did not possess the solipsist complacency and self-confidence of those who rely on the traditional… Neither was it wholly traditionalist and restorative. Its principal activists were selective in the traditions they picked up.”

Roberts has noted elsewhere that, while calling for the end of British rule, nationalist elites resorted to Western modes of protest; thus, while the nationalist liberators who sprang up in the Kandyan regions after their annexation by the British decried the Kandyan Convention as a betrayal of the Sinhala kingdom, nationalist agitators in the 20th century rationalised the Convention as a legal document which British officials had honoured more in the breach than in the observance. Benedict Anderson has analysed these paradoxes in his study of what he calls the “last wave of nationalism”, which unfolded in the European colonies of Africa and Asia at the end of the 19th century. His thesis explains the paradoxical response of Sinhala nationalists to their own history; even in the act of decrying a lost pre-colonial heritage, these same nationalists subscribed to values promoted by the colonisers. Hence Sri Wickrema is a plea for the restoration of a lost heritage and a condemnation of colonial “modernity”, yet it is also an indictment of a key figure associated with that heritage.

Dependent as these objets were on “colonial capital”, a more meaningful analysis requires comparing them with art forms that were not so dependent on such capital.

In the decorative arts, breaks with the past transpired more rapidly, and more thoroughly, than they did in the realms of literature and theatre. As Sunil Goonasekera has observed, by the time of the revival in the early 20th century, important debates had sprung up about which mode of painting best suited the country. On one hand, there was the studio painter, who looked up to styles established in European art academies; on another, there were the traditional Kandyan painters, a vanishing group even then; and on yet another, there were the lithographers reproducing Buddhist parables, whose figurehead, Sarlis, exuded a style that was, as Goonasekera puts it, “not wholly native nor wholly other.”

Perhaps the most obvious reason why painting was able to undergo a modernist revolution faster than could theatre and literature was that it did not fit the three criteria applicable to the latter two art forms: it lacked a mass audience, it did not require high levels of capital investment, and it did not need the patronage of elites tied to colonialism.

Underscoring this was the even simpler fact that painting is a visual art: unlike theatre and literature, it could dispense with the written word. If John Berger’s dictum that we see before we speak is indeed true, and what we see establishes our place in the world more quickly than can the printed word, then modernism in art swept through Sri Lanka more rapidly than it did the theatre or the press because painting was cut off from print capitalism. Simply put, it was easier to defy canons of taste in painting, because the painter did not have to borrow the European notions of modernity that nationalists and revivalists had been innovating on since the tail-end of the 19th century. He did not need a “text.” He had frescoes, lithographs, and murals to work from. The revival may have thrived on the polemic, but it breathed through the canvas. This is, perhaps, a point seldom appreciated, if at all. Yet it is true.

Anne Blackburn has cautioned against viewing the Buddhist revival solely as a response to colonialism by nationalists. In painting, we come across a new way of viewing the revival: neither a collective rejection of the West, nor a total acceptance of colonial canons of taste and propriety, but rather a break from both. This obviously opens up new lines of discussion and interpretation as regards colonialism in Sri Lanka, a topic that for far too long has been viewed through a class, caste, or elite lens by scholars and students.

The writer can be reached at udakdev1@gmail.com




Contagion in Ceylon: Two case studies

By Uditha Devapriya

The medical profession in Ceylon was, by the late 19th century, following patterns set down in most of the European colonies in Asia. The Civil Medical Department was set up in 1858, the Ceylon Medical College in 1870, and the Ceylon Medical Association in 1887. As with law, the practice of medicine soon came to be monopolised by the upper middle class, predominantly Burgher and Tamil.

In her extensive study of the history of the medical profession in Ceylon, Kamalika Pieris conjectures that this may have been due to the fact that unlike the Sinhala bourgeoisie, neither the Burgher nor the Tamil middle class had extensive holdings in land. In an autobiographical piece, on the other hand, Ralph Pieris observed that in colonial society, the quickest way for the middle class to guarantee their position was by ensuring their children took to professions where the duration between education and employment was brief; to give just one example, Lucian de Zilwa, divided between law and medicine, chose the latter because “I could begin to earn something the moment I received my sheepskin.”

The entrenchment of a medical elite did not contribute to a rise in health or sanitary standards in the country. The first hospitals had been built for military garrisons in Colombo, Galle, Jaffna, and Trincomalee, while the expansion of the plantation economy necessitated the establishment of hospitals to service the needs of estate labour. In both cases the aim was to serve imperial interests. Against such a backdrop, the expansion of medical institutions, quarantine centres, and isolation camps was driven by the bouts of epidemic disease which became part and parcel of life in the region in the 19th century. The aim of this essay is to compare and contrast official responses to two such epidemics: smallpox and cholera.

Historically, the Sinhalese are said to have feared smallpox the most. Robert Knox wrote that none of the charms “successful to them in other distempers” could cure the illness, while Ribeiro described it as “the most dreaded disease among the natives.”

There’s reason to believe that, though smallpox is often said to have been introduced by European adventurers, it existed in pre-colonial societies; Asiff Hussein writes of Hugh Nevill describing “a gigantic Telambu tree amidst a sacred grove on the site of the Ruwanveli dagaba” which may have figured in an outbreak of a disease at the time of Arahat Mahinda. The first real account of the disease, according to Thein (1988), comes to us from a fourth century Chinese alchemist called Ko Hung; it may have existed in a mild form in fifth century India.

John Holwell was the first British official to write on the illness in the subcontinent, in 1767 in Bengal. From 1779 to 1796, there was a severe smallpox outbreak in that region, while in 1802 we read of a Swiss doctor, Jean de Carro, sending samples of a vaccine to Bombay via Baghdad. The date is significant, for it was in 1802 also that, as Ananda Meegama has noted, vaccination against smallpox commenced in Sri Lanka.

However, owing to inconsistencies in the vaccination programme, not least of which was the slipshod way medical officials despatched vaccines to poorer, far-flung regions, there were severe periodic outbreaks. In 1886, vaccination was made compulsory for the first time, but even then problems in the Medical Department, set up 28 years earlier, led to the disease spreading throughout the country. Preventing it would have meant enforcing a stringent quarantine programme, which transpired 11 years later with the passing of the Quarantine and Prevention of Diseases Ordinance No. 3 of 1897. By the 1930s, smallpox had ceased to be a serious threat; by 1993, it had been fully eradicated.

No less feared than smallpox was cholera, which like smallpox goes back many centuries in the country and the subcontinent. The Tamil goddess Mariamman is considered symbolic of both smallpox and cholera, while a reference in the Mahavamsa to a yakinni who, in the reign of Sirisangabo, caused those who came under her curse to develop red eyes and die the moment they came into contact with a patient may indicate an outbreak as early as the third century in Sri Lanka. I noted this in my SAT MAG column last week.

Historical sources tell us of a reference in ninth century Tibet to a particularly violent illness (the first signs of which included “violent purging and vomiting”), although scholars doubt the authenticity of these texts. In any case, not for nothing did colonial officials call it the Asiatic cholera, given its ancient origins in the lower Bengal where, it must be said, we come across our earliest sources for the cult of a cholera goddess in the subcontinent. On the other hand, the destruction of the local economy, and unnatural changes in agriculture forced on farmers by British authorities, had an impact on these outbreaks: with irrigation schemes destroyed and neglected, water scarcities could only result in pestilence.

There were six cholera pandemics between 1817 and 1917. All of them were aggravated by the disruptive economic and social changes unfolding in the periods in which they transpired. All six originated in Bengal, inevitably, given the genesis of the disease in tropical, depressed climates, and they diffused beyond the region to as far as Russia, the US, and Latin America. Each left behind heavy fatalities, and each was deadlier than the last.

The first case in Sri Lanka was reported in 1818 in Jaffna. However, as Ananda Meegama has noted, the disease never became endemic in the island. It turned into a serious issue only after British colonisers, having appropriated Kandyan lands under the Crown Lands Encroachment Ordinance and Waste Lands Ordinance of 1840, imported labour from South India. The latter led to a widespread contagion: case numbers jumped from 16,869 between 1841 and 1850 to 35,811 in the following decade, with fatality rates rising from 61% to 68%.

Until the passing of the Quarantine Ordinance in 1897 — by which time five of the six cholera epidemics had passed through and devastated the country — officials took haphazard measures to control the immigration of South Indian labour. Estate workers, on their way to the plantations, took the Great North Road to the hill country, which brought them into contact with locals in Jaffna and Mannar; this led to several epidemics between 1840 and 1880, after which the number of cases and fatalities began to decline.

Still, officials remained oblivious to the need to quarantine estate workers until much later, a problem compounded by the fact that many such workers, even after restrictions had been imposed, came disguised as traders, an issue attributable as much to the immigrants as it was to “the partial manner” in which officials enforced the quarantine.

Two years after the passage of the Ordinance, colonial administrators shut down the Great North Road to estate labour. Those who arrived in the country were taken to Ragama. Yet despite such measures, other problems cropped up: as Kamalika Pieris has pointed out, there was widespread hostility to Western medicine, particularly in economically backward regions which doctors never attempted to reach. Contaminated water became a serious issue even in Colombo, as did unplanned urbanisation, while quarantines led those fearing for their lives to flee to other districts, spreading the disease even more.

De Silva and Gomez (1994) have argued that one of the contributing factors to the recession of these diseases was the advance made among the local population in sanitation. The first soap was imported to the island in 1850; until then, only vegetable oils had been widely used. The setting up of various committees to probe these outbreaks would have been another factor. Despite this, though, the fact remains that while it had the wherewithal to expand healthcare into less well-off regions, the government chose, for crude reasons of economy, not to.

By the 1930s, at which point cholera had been significantly contained — in 1946, two years before independence, only two cases cropped up — the rise of a radical left movement among professionals, including doctors and lawyers, led to a more proactive approach to pandemics and contagions. The apogee of this trend, no doubt, was the malaria epidemic of the 1930s, in which volunteer work was done not by the English-speaking medical elite but by a group of young leftists, among them perhaps the first Western doctor to don the national dress, S. A. Wickramasinghe. But that’s another story.

The writings of S. A. Meegama, Asiff Hussein, Kamalika and Ralph Pieris, Robert Pollitzer, Ariyaratne de Silva, Michael G. Gomez, and M. M. Thein were used for this article.

The writer can be reached at udakdev1@gmail.com


The Remedy. Remedy?

By Dr F E Dias

Introduction

A solution to the Covidian problem was eagerly awaited – and emerged in a legion of vaccines. The development of vaccines against coronaviruses has been difficult owing to the vaccine-induced enhanced disease responses evident in animal studies. Antibody-dependent enhancement may be involved in the clinical observation of increased severity of symptoms associated with early high levels of SARS-CoV-2 related antibodies in patients. The gene-based vaccines dispense with the need for the wild virus and instead induce inoculated persons to produce trillions of copies of its fusogenic spike protein, itself a pathogen that biodistributes and can potentially harm parts of the body far from the injection site. And yet, many experimental vaccine products were hastily unleashed globally without the pivotal Phase 3 trials having been concluded or validated.

Front Runners

Israel and the Seychelles had high proportions of their populations vaccinated before other nations caught up. Let us compare sets of data that are not usually presented synchronously, such as in Figures 1 & 2, to observe what happened and is happening in Israel and Seychelles.

EU

The 450-million-strong EU, comprising 60% of Europe’s estimated 750 million inhabitants, reported administering 522.4 million doses of vaccines as of August 14th 2021, with over 75% of its citizens having received at least one dose of an experimental CoViD vaccine.

The EudraVigilance system is the EU-wide database for recording vaccine injury reports, as well as other medicine-induced injuries, corresponding to the US Vaccine Adverse Events Reporting System (VAERS). EudraVigilance data indicate, from the beginning of the vaccination campaign last year through to August 14th, over two million (2,074,410) reports of vaccine-related injuries, including 21,766 deaths across the 27 member states.

Approximately half of all reports (1,021,867) were of serious injuries, classified by this EU agency as corresponding to “a medical occurrence that results in death, is life-threatening, requires inpatient hospitalisation, results in another medically important condition, or prolongation of existing hospitalisation, results in persistent or significant disability or incapacity, or is a congenital anomaly/birth defect.” The highest rate of problematic developments among the EU’s four emergency-authorised vaccines occurred following the use of the Oxford-AstraZeneca product, which is reportedly linked to 947,675 injuries within the EU.

Reports of thrombotic and thromboembolic post-vaccination events among the vaccinated triggered concern regarding post-vaccination donation of substances of human origin (SoHO), such as blood, plasma, organs, and tissues, which may contain the pathogenic spike protein of recently vaccinated donors.

UK

A groundbreaking paper by the prestigious Oxford University Clinical Research Group, published on August 10th in The Lancet, found that vaccinated individuals carry more than 250 times the load of Covid-19 virus in their nostrils compared with the unvaccinated. The authors suggest that while moderating the symptoms of infection, the vaccine allows vaccinated individuals to carry unusually high viral loads without becoming ill at first, potentially transforming them into pre-symptomatic super-spreaders; this, they suggest, may contribute to the post-vaccination surges in heavily vaccinated populations globally.

Public Health England (PHE), England’s public health policy department, released a report on August 6th detailing the spread of the Delta variant of the virus, including hospitalisations and deaths where Covid-19 was a factor, between February 1st and August 2nd 2021.

It shows that 65% of hospitalisations and deaths involving Covid-19 are among those who have had at least one dose of the experimental vaccines.  Though the unvaccinated category accounts for around half of overall Delta Covid-19 infections in England, the rate of death in this group is lower than among those who received vaccines.

Considering the fully vaccinated group on its own, the PHE data show that 1,355 of 47,008 identified cases were admitted to hospital, which is 2.9%, suggesting that the double-jabbed face a nearly 50% greater chance of being hospitalised if they contract CoViD-19, compared with those who have not been vaccinated. Further, those who contracted the virus within 21 days of their first shot demonstrated a 0.97% hospitalisation rate, and those who tested positive after three weeks from their first shot demonstrated a 1.14% hospitalisation rate, indicating that the likelihood of hospitalisation is greater for the double-jabbed when compared with the single-jabbed.
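As a rough check on the arithmetic (a sketch using only the numbers quoted above; the unvaccinated hospitalisation rate is not given in these excerpts, so the “nearly 50%” comparison rests on the report itself):

\[ \frac{1{,}355}{47{,}008} \approx 0.0288 \approx 2.9\%, \qquad \frac{1.14\%}{0.97\%} \approx 1.18 \]

That is, the 2.9% figure checks out against the raw counts, and the two post-first-dose rates differ by roughly 18% in relative terms.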

North America

The VAERS, jointly run by the Food and Drug Administration and the Centers for Disease Control and Prevention, is the primary US government-funded system for reporting adverse vaccine reactions in the US. Not all adverse events are reported to it by physicians, and VAERS figures are understood to be a substantial underestimate. While 328.9 million Covid vaccine doses had been administered as of July 2nd 2021, a total of 438,441 adverse events were reported to VAERS between December 14th 2020 and July 2nd 2021, including 9,048 deaths; 22% of these deaths occurred within 48 hours of vaccination, and 37% occurred in people who became ill within 48 hours of being vaccinated.

2,678 pregnant women reported adverse events related to CoViD vaccines, including 994 reports of miscarriage or premature birth. VAERS also records the deaths of two breast-feeding babies, in March and July 2021, due to blood clots subsequent to the mother’s reception of the vaccine. There were 4,456 reported cases of Bell’s Palsy and 398 reports of Guillain-Barré Syndrome, as well as 121,092 reports of anaphylaxis, 8,256 of blood clotting disorders, and 1,796 cases of myocarditis and pericarditis. The VAERS data showed that 22% of deaths were related to cardiac disorders.

By the end of August 2021, the total deaths reported had exceeded 13,000 and reports of harm exceeded 600,000. With over 5,000 reports of myocarditis as of August 20th, Pfizer added to their product fact sheet that “post-marketing data demonstrate increased risks of myocarditis and pericarditis, particularly within seven days following the second dose. The observed risk is higher among males under 40 years of age than among females and older males. The observed risk is highest in males 12 through 17 years of age”, admitting also that potential long-term sequelae are unknown.

The Public Health Agency of Canada (PHAC) in July estimated the rate of vaccine-related blood clotting among Canadians who had received the AstraZeneca vaccine, saying there had been 27 confirmed cases to date in Canada, with five deaths among those cases.

Blood-clotting events that are reported are the larger ones, which can be detected using MRI or CT scans. However, with the RNA and viral vector vaccines, there is a new phenomenon of micro blood clots, diagnosable via D-dimer tests. These microscopic blood clots could be caused by the vaccine-generated spike proteins altering the vascular endothelium, particularly affecting its interactions with platelets in capillaries. Some parts of the body, like the brain, spinal cord, heart, and lungs, cannot regenerate when their tissues are damaged by blood clots. Particularly in the lungs, this may cause pulmonary hypertension that may lead to heart failure years later.

In Brief: Fatality, Natural Immunity and Treatment

Among those who actually become infected, the fatality rate in the age group up to 20 is 0.003%, which suggests that death in a road traffic accident is five times more likely than death following a CoViD infection; for those aged 20-40 the rate is 0.02%. A study published on August 24th 2021 by Israeli researchers, looking at over 670,000 vaccinated and unvaccinated individuals, concludes “that natural immunity confers longer lasting and stronger protection against infection, symptomatic disease and hospitalisation caused by the Delta variant of SARS-CoV-2, compared to the BNT162b2 two-dose vaccine-induced immunity”. Dr Robert Malone, the virologist and immunologist credited with pioneering RNA transfection, has stated that natural immunity is twenty times more protective than the vaccines. India achieved success by encouraging anti-viral medication.

Sri Lanka

Sri Lanka’s vaccination and death data are shown in Figures 4 and 5.

Conclusion

Correlations in Sri Lanka’s data are apparent. Even if the preceding database reports and scientific theory are disregarded, and the global experience is ignored, it is not unreasonable to suppose that the relationships between daily deaths and daily vaccinations are causal, subject to the observed time lag. It is remarkable that the spike in the total number of deaths corresponds to the spike in the total number of vaccinations. Contrary to expert wisdom, among the Covid deaths in Sri Lanka the proportion of vaccinated people is steadily increasing as the vaccination campaign progresses.

There is evidence that the immediate and potential subsequent harm due to these vaccines may exceed the risks associated with the disease.

[Figure: Cytokine storm model of Castelli et al., Front. Immunol., 2020]


Brief history of plagues and pandemics

By Uditha Devapriya

By the 14th century, trade routes between the East and West had made it easier for pandemics to spread, while conquests by the Spanish and the Portuguese in the 15th and 16th centuries would introduce several diseases to the New World. Trade and colonialism hence became, by the end of the Renaissance, the main causes of plague, which scientific advancement did little to combat, much less eliminate: a physician in the 17th century would have been as baffled or helpless as a physician in the 14th or 15th in the face of an outbreak.

No doubt rapid urbanisation and gentrification had a prominent say in the proliferation of such outbreaks, but among more relevant reasons would have been poor sanitary conditions, lack of communication and accessibility, and class stratifications which excluded the lower orders – the working class as well as peasants in the colonies – from a healthcare system that pandered to an elite minority. By 1805, the only hospitals built in Ceylon were those serving military garrisons in places like Colombo, Galle, and Trincomalee.

Among the more virulent epidemics, of course, was the notorious plague. Various studies have tried to chart the origins and the trajectory of the disease. There were two outbreaks in the Roman world: the Antonine Plague in 165 AD and the Justinian Plague in 541 AD. With a lack of proper inscriptional evidence, we must look to literary sources: the physician Galen for the Antonine, and Procopius and John of Ephesus for the Justinian.

Predating both these was an outbreak reported by the historian Thucydides in Athens in 430 BC, but scholars have ascertained that this was less a plague than a smallpox contagion. In any case, by 541 AD plague had become a fact of life in the region, and not only in Rome; within the next few years, it had spread to the Arab world, where scholars, physicians, and theologians tried to diagnose it. Commentaries from this period tell us of theologians tackling a religious crisis borne out of pestilence: in the beginning, Islamic theology had laid down a prohibition against Muslims “either entering or fleeing a plague-stricken land”, and yet by the time these epidemics ravaged their lands, fleeing an epidemic was reinterpreted to mean acting in line with God’s wishes: “Whichever side you let loose your camels,” Umar I, the second caliph, told Abu Ubaidah, “it would be the will of God.” As with all such religious injunctions, this changed in the light of an urgent material need: the prevention of an outbreak. We see similar modifications in other religious texts as well.

Plagues and pandemics also feature in the Bible. One frequently cited story is that of the Philistines, having taken the Ark of the Covenant from the Israelites, being struck by God with a disease which “smote them with emerods” (1 Samuel 5:6). J. F. D. Shrewsbury noted three clues for the identification of the illness: that it spread from an army in the field to a civilian population, that it involved the spread of emerods in the “secret part” of the body, and that it compelled the making of “seats of skin.” The conventional wisdom had long been that this was, as in 541 AD Rome, an outbreak of the plague, but Shrewsbury, on the basis of the three clues, ascertained that it was more plausibly a reference to an outbreak of haemorrhoids. On the other hand, the state of medicine being what it would have been in Philistia and Israel, lesions in the “secret part” (the anus) may have been construed as a sign of divine retribution in line with a pestilence: to a civilisation of prophets, even haemorrhoids would have been comparable to a plague sent from God.

Estimates of population loss from these pandemics are notoriously difficult to make. On the one hand, literary texts, the only sources we have as of now, record how civilians conducted their daily lives despite the pestilence; on the other, the writers of these texts resorted to occasional if not infrequent exaggeration to emphasise the magnitude of the disease. Both Procopius and John of Ephesus agree, for instance, that the Justinian Plague began with hallucinations, which progressed to fever and languor, and on the second or third day to bubonic swelling “in the groin or armpit, beside the ears or on the thighs.” However, there is another account, by Evagrius Scholasticus, whose record of the outbreak in his hometown of Antioch was informed by personal experience: a disease he contracted as a schoolboy and to which he later lost a wife, children, grandchildren, servants and, presumably, friends. It has been pointed out that this may have injected a subjective bias into his account; at the same time, given that Procopius and John followed a model of the plague narrative laid down by Thucydides centuries before, we can consider Evagrius’s a more original if not more accurate record, despite prejudices typical of writers of his time: for instance, his (unfounded) claim that the plague originated in Ethiopia.

Much ink has been spilt in the debate over where the plague originated. A study in 2010 concluded that the bacterium Yersinia pestis evolved in, or near, China. Historical evidence marshalled for this theory points to the fact that by the time of the Justinian Plague the Roman government had solidified links with China over the silk trade. Popular historians contend that the Silk Road, and the Zheng He expeditions, may have spread the contagion through the Middle East to southern Europe, a line of thinking even the French historian Fernand Braudel subscribed to in his work on the history of the Mediterranean. However, as Ole Benedictow points out in his response to the 2010 study, “references to bubonic plague in Chinese sources are both late and sparse”, a criticism made earlier, in 1977, by John Norris, who observed that literary references to the Chinese origin of the plague were likely informed by ethnic and racial prejudices; a similar animus prevailed among the early Western chroniclers against what they perceived as the “moral laxity” of non-believers.

A more plausible thesis is that the bacterium had its origins around 5,000 or 6,000 years ago, during the Neolithic era. A study conducted two years ago (Rascovan 2019) posits an original theory: that the oldest discovered and documented case of plague, identified from a Yersinia pestis genome recovered in Sweden, dates back 4,900 years, the disease “potentially contributing” to the Neolithic decline, the reasons for which “are still largely debated.” However, like the 2010 study, this too has its pitfalls, among them the lack of the sort of literary sources which, however biased they may be, we have for the Chinese genesis thesis. It is clear, nevertheless, that the plague was never at home in a specific territory, and that despite the length and breadth of the Silk Road it could not have made inroads into Europe through the Mongol steppes alone. To contend otherwise is not only to rebel against geography, but also to ignore pandemics whose origins were limited to neither East and Central Asia nor the Middle East.

Such outbreaks, moreover, were not unheard of in the Indian subcontinent, even if we do not have enough evidence for when, where, and how they occurred. The cult of Mariamman in Tamil Nadu, for instance, points to cholera as well as smallpox epidemics in the region, given that she is venerated for both. “In India, a cholera-like diarrheal disease known as Visucika was prevalent from the time of the Susruta”, an Indian medicinal tract that contains the following passage, in which the illness referred to seems to be the plague:

Kakshabhageshu je sfota ayante mansadarunah
Antardaha jwarkara diptapapakasannivas
Saptahadwa dasahadwa pakshadwa ghnonti manavam
Tamagnirohinim vidyat asadyam sannipatatas

Or in English, “Deep, hard swellings appear in the armpit, giving rise to violent fever, like a burning fire, and a burning, swelling sensation inside. It kills the patient within seven, 10, or 15 days. It is called Agnirohini. It is due to sannipata or a deranged condition of all the three humours, vata, pitta, and kapha, and is incurable.”

The symptoms no doubt point to plague, even if we cannot immediately jump to such a conclusion. The reference to a week or 15 days is indicative of modern bubonic plague, while the burning sensation and violent fever show an illness that rapidly terminates in death. The Susruta Samhita, from which this reference is taken, was written in the ninth century AD. We do not have a similar tract in Sri Lanka from that time, but the Mahavamsa tells us that in the third century AD, during the reign of Sirisangabo, there was an outbreak of a disease the symptoms of which included the reddening of the eyes. Mahanama thera, no doubt attributing it to the wrath of divine entities, personified the pandemic in a yakinni called Rattakkhi (or Red Eye). Very possibly the illness was a cholera epidemic, or even the plague.

China, India, and Medieval Europe aside, the second major wave of pandemics came about well after the Middle Ages and the Black Death, during the Renaissance, when conquerors from Spain and Portugal, having divided the world between their two countries, introduced among the natives of the lands they sailed to diseases to which they themselves had become immune. Debates over the extent to which New World civilisations were destroyed and decimated by these diseases continue to rage. The first attempts to determine pre-colonial populations in the New World were made in the early part of the 20th century. The physiologist S. F. Cook published his research on the intrusion of diseases from the Old World into the Americas from 1937. In 1966, the anthropologist Henry F. Dobyns argued that most studies understated the numbers. In the 1930s, when research on the topic began, conservative estimates put the pre-Columbian population of North America at one million. Dobyns upped it to 10 million and, later, 18 million; most of them, he concluded, were wiped out by the epidemics.

And it didn’t stop at that. These were followed by outbreaks of diseases associated with the “white man”, including yaws and cholera. Between 1817 and 1917, for instance, no fewer than six cholera epidemics devastated the subcontinent. Medical authorities were slow to act, even in Ceylon, for the simple reason that by the time of the British conquest, filtration theory in the colonies had deemed it prudent that health, as with education, be catered to a minority. Doctors thus did not find their way to far flung places suffering the most from cholera, while epidemics were fanned even more by the influx of South Indian plantation workers after the 1830s. Not until the 1930s could authorities respond properly to the pandemic; by then, the whole of the conquered world, from Asia all the way to Africa, had turned into a beleaguered and diseased patient, not unlike Europe in the 14th century.

The writer can be reached at udakdev1@gmail.com
