Features
Atomic Bombing of Hiroshima: 6th August, Seventy-five Years Ago
By Kirthi Tennakone
The residents of the Japanese city of Hiroshima went about their daily routine on 6th August 1945. The morning sky was clear, and people observed three aeroplanes and descending parachutes, happenings to be expected in wartime and largely ignored.
Around 8.15 am, a flash of light far brighter than the sun and a searing sense of heat terrified the population. The silent, instantaneous effect was most severe over a circular area of roughly one kilometre in radius. Men, women and children exposed to the flash were incinerated to ash or fatally burnt. Clothes crumbled to pieces or spontaneously ignited, particularly darker shades. One man dressed in white suffered only light burns, while his wife in black beside him died of severe burns. Most people within that range succumbed immediately; only a few shielded by thick concrete survived. A fraction of a second later, a blast wave flattened almost every building over an area of nearly 40 square kilometres. A fireball formed in the atmosphere expanded rapidly, driving a horrendously hot wind that set fires everywhere up to a distance of about 4.5 kilometres from the centre. The pressure of the blast wave and the heat of the rushing wind killed or wounded many more people. The expanding, rising fireball created a white plume reaching a height of 6,100 metres, darkening the city as if night had fallen. Around 9 am, a black toxic rain poured over a large area, sickening those who were drenched. The death toll on the day of the attack exceeded 40,000, and subsequent mortality resulting from injuries was estimated at more than 100,000.
ATOMIC BOMB
The Japanese government, and most of the world at large, could not immediately fathom how a ferocious calamity unheard of previously had been inflicted. Devastation of such magnitude would have required dropping thousands of the most powerful conventional bombs simultaneously, a technical impossibility. On August 7th, the American President Harry Truman announced: ‘It is an atomic bomb. It is a harnessing of the basic power of the universe. The force from which the sun draws its power has been loosed against those who brought war to the Far East’. He further stated that the bomb had more power than 20,000 tons of TNT, more than 2,000 times the blast power of the British Grand Slam, the largest bomb ever used in the history of warfare.
Bombs based on explosives such as trinitrotoluene (TNT) derive their energy from the breaking of the molecules of the substance into lighter, more stable fragments. In contrast, atomic energy is released when the nucleus of the uranium atom disintegrates into lighter nuclei, a process referred to as nuclear fission. A calculation based on Einstein’s theory of relativity revealed that the energy liberated in the fission of uranium is about a million times that of an equivalent weight of ordinary explosives. Fission is triggered by hitting the uranium nucleus with a neutron. When the nucleus breaks up, several additional neutrons are emitted. The Hungarian-American physicist Leo Szilard speculated that the extra neutrons might disrupt other uranium nuclei, causing an explosive chain reaction and raising the possibility of a dangerous weapon. In 1939 he persuaded Albert Einstein to write a letter to President Franklin Roosevelt pointing out the urgency of the United States engaging in this effort; the consequences could be disastrous if Adolf Hitler developed a nuclear weapon first. United States intelligence found that Germany had already started work on the problem, hastening President Roosevelt to appoint a committee in reply to Einstein. Soon the research aimed at developing a nuclear bomb was commissioned as the Manhattan Project, under the scientific leadership of Robert Oppenheimer and a team of several other eminent physicists, excluding Einstein. Perhaps Einstein was considered too much of a risk even for a project of this nature because of his extreme radicalism and pacifist views.
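This energy comparison can be checked with a rough back-of-the-envelope calculation. The constants below are standard textbook values (about 200 MeV released per fission, the conventional TNT energy equivalent), not figures taken from this article:

```python
# Rough energy comparison: complete fission of U-235 vs detonation of TNT.
# All constants are standard textbook values, not figures from the article.

MEV_TO_J = 1.602e-13            # joules per MeV
ENERGY_PER_FISSION_MEV = 200.0  # approximate energy released per U-235 fission
AVOGADRO = 6.022e23             # atoms per mole
U235_MOLAR_MASS_G = 235.0       # grams per mole of U-235
TNT_J_PER_KG = 4.184e6          # conventional value: 1 kg of TNT = 4.184 MJ

atoms_per_kg = AVOGADRO / U235_MOLAR_MASS_G * 1000.0
fission_j_per_kg = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_J
ratio = fission_j_per_kg / TNT_J_PER_KG

print(f"Complete fission of 1 kg of U-235 releases ~{fission_j_per_kg:.1e} J")
print(f"That is roughly {ratio:.1e} times the energy of 1 kg of TNT")
```

Complete fission works out to tens of millions of times the energy of an equal weight of TNT; in a real weapon only a small fraction of the material fissions before the assembly blows itself apart, which is why lower effective figures, such as the "million times" cited above, are often quoted.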
Leo Szilard with Albert Einstein
Despite the theoretical soundness of the argument for an explosive nuclear chain reaction, the Manhattan Project encountered many astounding practical challenges. Natural uranium occurs in two forms, the isotopes U-238 and U-235. A chain reaction is feasible only with U-235, which makes up just 0.7 percent of the metal found in uranium ores. Furthermore, to initiate a chain reaction, at least a critical mass of about 60 kilograms of U-235 is required. Refining the ore to obtain this amount was an arduous, costly task. Another option explored was to use plutonium instead of uranium, the advantage being its smaller critical mass of 5-10 kilograms. Plutonium is not found in nature but can be synthesised, again a time-consuming and costly affair. Expenses of the project ran to 100 million dollars a month!
The other hurdle was the assembly of the critical mass.
The requisite amount of uranium or plutonium cannot simply be cast as an ingot. The moment the critical mass, which depends on the shape and density of the sample, is reached, the chain reaction propagates, emitting radiation, because even a single neutron is sufficient to trigger it. Some neutrons always exist in the environment and are also produced by the spontaneous fission of uranium. One method planned was to collide two sub-critical pieces of uranium in a gun-like device, driven by conventional explosives, so that their union creates the critical mass. Another method considered was to cast uranium or plutonium into a sphere of calculated size and implode it, increasing its density by firing a surrounding charge of ordinary explosives. These methods needed to be made foolproof and tested.
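The explosive speed of the chain reaction, and why a single stray neutron is enough, can be illustrated with a small sketch. The multiplication factor and generation time below are round illustrative assumptions, not figures from this article: if each fission on average triggers two further fissions, and each "generation" takes on the order of 10 nanoseconds, then fissioning a kilogram-scale quantity of material takes only around 80 generations, i.e. about a microsecond:

```python
# Illustrative chain-reaction growth. K and the generation time are
# assumed round numbers chosen purely for illustration.

K = 2                         # assumed further fissions triggered per fission
GENERATION_TIME_S = 1e-8      # assumed ~10 ns per fission generation
TARGET_FISSIONS = 2.5e24      # roughly the number of atoms in 1 kg of U-235

fissions_this_generation = 1  # a single stray neutron starts the chain
total_fissions = 0
generations = 0
while total_fissions < TARGET_FISSIONS:
    total_fissions += fissions_this_generation
    fissions_this_generation *= K
    generations += 1

elapsed_us = generations * GENERATION_TIME_S * 1e6
print(f"{generations} generations, about {elapsed_us:.2f} microseconds")
```

The exponential doubling is the point: once the assembly is critical, essentially the whole energy release happens in under a microsecond, which is also why the sub-critical pieces had to be brought together extremely fast.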
TESTING THE BOMB
After three years of intensive activity, scientists and engineers at the Los Alamos Laboratory assembled an atomic bomb on 13th July 1945. It was a plutonium device containing around 6 kilograms of the metal in the form of a sphere. Why was a plutonium bomb, instead of uranium, chosen for testing? The amount of weapons-grade uranium available at the time was sufficient to make just one bomb, planned to be fired by the gun mechanism. Plutonium, with its much lower critical mass, was ready in the processing line in quantities adequate for several bombs. Furthermore, the implosion firing mechanism worked out for plutonium bombs demanded experimental confirmation.
Including accessories, the bomb, nicknamed ‘Gadget’, weighed nearly 5 metric tons. Gadget was transported to the testing site in the New Mexico desert and hoisted onto a 30-metre (100-foot) high steel tower. The bomb was scheduled to be exploded at 4 am on 16th July 1945. However, because of bad weather, the time was pushed back to 5.30 am. Scientists stationed 10 km away, eagerly waiting to watch the test, were apprehensive. Some doubted whether the bomb would turn out to be a dud. Others pointed out that its power might exceed expectations and pose a danger to observers and the community in the neighbourhood. Emphasising this point, Edward Teller, who later came to be known as the father of the hydrogen bomb, distributed suntan cream.
When the trigger was switched on at 5.30 am, the whole landscape was instantly lit many times brighter than by sunlight, and a rising, vividly coloured fireball appeared in the sky. The test was a success, and a moment that changed the world forever. Seconds later the bang was heard, followed by a gust of wind. Physicist Enrico Fermi dropped pieces of paper, timed their motion and quickly calculated the strength of the bomb, putting it at the equivalent of 10 kilotons of TNT. More precise calculations carried out later revealed that the strength was 22 kilotons.
BOMBING HIROSHIMA
The success of the atom bomb test was conveyed to President Truman but not publicly announced. Members of the public, inquisitive about the blinding flash and the bang, were told that an explosion had occurred in an ammunition store. The President was preparing to travel to Germany to attend the Potsdam Conference, the famous Big Three meeting of Truman, Stalin and Churchill. At the proceedings he hinted at a new development but did not elaborate. On 24th July, Truman casually told Stalin that the United States had developed a weapon of unprecedented strength. Stalin did not react with excitement or interest, saying only, ‘I hope the United States will make good use of it’. The reason for Stalin’s indifference became clear later: Soviet intelligence had been aware of the achievements of the Manhattan Project.
The Potsdam Declaration warned Japan to surrender unconditionally or suffer utter destruction, terms which Japan did not accept. Immediately, the decision to drop atomic bombs on Japan was confirmed. The directive was said to be: hit Hiroshima, Kokura, Niigata or Nagasaki after 3rd August, as weather permitted.
The bombing operation was assigned to Colonel Tibbets of the US Air Force. In the early morning of August 6th, he took off from the Tinian Island air base in the Pacific carrying the bomb. Two other planes accompanied the B-29 bomber to monitor the weather and to parachute instruments for recording the physical effects of the explosion. At about 8.15 am the pilot released the bomb from an altitude of 9.5 kilometres. The bomb fell for 47 seconds and exploded at a height of 600 metres above the ground; the triggering mechanism was designed to detonate the bomb in mid-air in order to maximise its destructive power. Tibbets, who had hurried away, was at a safe distance of 18 kilometres when he observed the flash and the fireball.
The bomb, aimed at the Aioi Bridge, missed the target by 250 metres and detonated above Shima Hospital, flattening it instantly. Amazingly, the structure of the Hiroshima Industrial Promotion Hall, almost at the epicentre, did not collapse. A temperature exceeding 4,000 degrees Celsius burnt the roof, killing everybody inside, but the peculiar way in which the shockwave approached left the structural shell largely intact. This landmark ruin, named the Atomic Bomb Dome, serves as a memorial for the lives lost and a reminder for peace.
I (the author of this article) visited Hiroshima in the year 2000. My daughter, then a high school student, posed in front of the dome for a photograph and smiled. I told her this was not a place to smile. A group of Japanese visitors at the site understood what I meant and emotionally expressed appreciation of my remark.
THE WORLD AFTER HIROSHIMA
Even after the Hiroshima attack, Japan did not surrender but vowed to fight on. The Soviet Union’s declaration of war on Japan on 8th August 1945 and the United States’ dropping of a second atomic bomb on Nagasaki the next day changed the situation. On 15th August 1945, Emperor Hirohito agreed to unconditional surrender, effectively ending the Second World War. Some celebrated the bombings as a lesson against warmongering and the crimes committed, but those who died were innocent civilians. The horror of the atomic bombings, particularly the late effects of radiation, continued to unfold as days and months passed. Nevertheless, there were glorifications of nuclear weapons. Many nations strove hard to acquire them, boost their destructive power and develop strategic methods of delivery. The human desire for ever more destructive weapons did not end with the atom bomb. In 1952 the United States tested the first hydrogen bomb, a thermonuclear device based on nuclear fusion, the opposite of fission, in which light nuclei such as hydrogen fuse to yield heavier nuclei, liberating vast quantities of energy; such weapons can be thousands of times stronger than the Hiroshima bomb. The following year, the Soviet Union exploded a similar weapon. Between 1950 and 1962, the superpowers’ competition in detonating nuclear bombs polluted the atmosphere, increasing the incidence of cancer.
The Limited Test Ban Treaty of 1963 forbade atmospheric tests. However, underground tests continued, and the 1996 Comprehensive Nuclear-Test-Ban Treaty of the United Nations could not be strictly enforced, as some nations stayed out of the agreement. In July 2017, the United Nations adopted the Treaty on the Prohibition of Nuclear Weapons. Entry into force of the agreement requires signature and ratification by 50 states. To date, of the 82 countries that have signed the treaty, only 40 have ratified it. Some countries appear to abstain from signing and ratifying the accord on the presumption that those who do not agree will pose a threat, a vicious circle of contradicting attitudes. Citizens worldwide and a number of organisations advocating peace campaign to prohibit nuclear weapons. Most vociferous among them are the surviving victims of Hiroshima and Nagasaki, popularly known as ‘hibakusha’. Their pledge is: ‘so that the people of future generations will not have to experience hell on earth, we want to realise a world free of nuclear weapons while we are still alive’.
The first sitting US President to visit Hiroshima was Barack Obama. On May 27th, 2016, speaking to a gathering there, Obama said: ‘Technological progress without an equivalent progress in human institutions can doom us. The scientific revolution that led to the splitting of an atom requires a moral revolution as well’.
Human greed, the limitless urge to acquire material possessions, is blind to the dangers looming on the horizon that threaten humanity’s own existence. Nuclear weapons and the excessive burning of fossil fuels are two examples.
The author, Prof. Kirthi Tennakone of the National Institute of Fundamental Studies, can be reached via ktenna@yahoo.co.uk
So, who is going to tell the rest of the world?
Series: The greatest digital rethink, Part V of V – Series conclusion
Five instalments. Five levels of education. One recurring pattern: the countries that ran the experiment are retreating, the countries that watched them are still paying the entry price. This final column asks the question the international education community has been carefully avoiding: does anyone actually learn from anyone else, or do we just take turns making the same expensive mistakes?
What five parts told us
Let us briefly take stock. In Part I of this series, we traced the arc of three decades of digital enthusiasm in education, from the early computer labs of the 1990s through the tablet explosion of the 2010s, to the pandemic acceleration and the emerging backlash that defines the present moment. In Part II, we watched Sweden take tablets away from preschoolers who should never have been given them in the first place, and Finland legislate to return the pencil to its rightful place in the primary classroom. In Part III, we confronted the paradox at the heart of secondary school de-digitalisation: governments triumphantly banning the phone in the student’s pocket while quietly expanding the data systems that monitor their every digital interaction. In Part IV, we sat in the university exam hall, a room that had been pronounced redundant 20 years ago, and watched it fill up again with students writing with pens, because large language models (LLMs) like ChatGPT had made every other form of assessment untrustworthy.
The inconvenient asymmetry
There is a concept in international education research, ‘asymmetric correction’, that describes this phenomenon with academic precision. It means, in plain language, that the systems with enough money, data and institutional capacity to discover that an experiment has gone wrong can afford to correct it. The systems without those resources cannot, and often do not even know the correction is needed until the damage is visible in their own classrooms and their own assessment results.
This is not merely an abstract inequity. It has a specific mechanism. The countries now de-digitalising, Finland, Sweden, Australia, France, the UK, have had 20 or 30 years of experience with school digitalisation. They have run multiple cycles of national assessments. They have PISA data going back decades. They have teacher unions vocal enough to flag classroom deterioration before it becomes a crisis. They have the research infrastructure to connect a policy change to an outcome measure and draw a conclusion. When their scores drop, they investigate. When the investigation points at screens, they act.
The evidence that was always there
One of the more unsettling conclusions of this series is that much of the evidence driving the current de-digitalisation wave was available considerably earlier than the policies it has inspired. The finding that handwritten notes produce better conceptual understanding than typed ones was published in 2014. The OECD’s analysis showing that more computers do not produce better learning outcomes appeared in 2015. UNESCO’s concerns about platform power and datafication in education have been articulated consistently for years. The distraction research, documenting that students with open laptops in lecture halls perform worse, and drag their neighbours down with them, has been accumulating for well over a decade.
None of this stopped the rollout. The tablets arrived in the Swedish preschools. The 1:1 device programmes expanded. The learning management systems embedded themselves. The AI proctoring tools were procured and deployed. Evidence that gave pause was routinely absorbed into a narrative about implementation: the problem was not the technology, it was how it was being used; give us better training, better platforms, better connectivity, and the results will follow. The results, in many cases, did not follow. But by the time that was clear, the infrastructure was in place, the contracts were running, and the political cost of admitting the bet had been wrong was prohibitive.
What changed was not the evidence, it was the political permission to act on it. PISA 2022 delivered declines dramatic enough to be impossible to attribute to anything other than something systemic. UNESCO issued what amounted to an institutional mea culpa. And a sufficient number of teachers, in a sufficient number of countries, were by then willing to say publicly what they had been saying in staffrooms for years: that the screens were not helping, and in many cases were actively in the way.
What a responsible global policy would look like
This series is not a manifesto against technology in education. It has never argued that. Screens are indispensable tools, for accessing information, for enabling collaboration across distance, for serving students whose accessibility needs require digital solutions, for supporting the administrative and logistical complexity of modern educational institutions. The argument is not against technology. It is against the thoughtless, evidence-free, vendor-driven acceleration of technology in contexts where it undermines the very foundations it is supposed to strengthen.
A responsible global education policy would, at minimum, do several things that the current system conspicuously fails to do. It would require that the evidence base for large-scale digital procurement be genuinely independent of the vendors supplying the technology. It would insist that the learning from early-adopter systems, including the learning about what went wrong, be actively communicated to late-adopter systems before, not after, they make the same investments. It would treat the question of appropriate technology use at different ages and in different pedagogical contexts as a matter of ongoing empirical inquiry, not a settled ideological commitment to ‘more is better.’ And it would hold to account the international organisations and development banks that have promoted digital solutions to educational problems without adequate attention to long-term cognitive and social outcomes.
None of this is technically difficult. The knowledge exists. The research is available. The lesson is sitting there in the PISA data, in the Swedish preschool curriculum reversal, in the UK university exam halls filling up with students holding pens. The question is purely one of political will, and of whether the global education community considers it acceptable to keep selling a model it is quietly dismantling at home.
Who decides what technology is for?
Beneath all the policy detail in this series lies a question that is fundamentally political rather than technical: who gets to decide what role technology plays in education, and in whose interest do those decisions get made? The answer, across the period this series has covered, has too often been: vendors, with governments following at a respectful distance and parents and teachers arriving to the conversation after the contract is signed.
De-digitalisation, for all its imperfections, its occasional moral panic, its selective use of evidence and its tendency to become a political signalling exercise, represents something important: a reassertion that educational technology is a means, not an end, and that the people who should determine how much of it to use are educators, researchers and communities, not quarterly earnings reports. The fact that Finland chose to legislate, that Sweden chose to buy books instead of tablets, that Queensland schools now require phones to be away for the day, collected or switched off from the moment students arrive, and have found their playgrounds transformed, these are acts of pedagogical agency. They are an insistence that schools are for children, not for platforms.
A final word
There is nothing wrong with technology in education. There is something very wrong with the assumption that more technology is always better, and something worse with the global system that allows wealthy nations to learn that lesson expensively, correct it quietly, and then export the uncorrected version to everyone else.
The pencil did not disappear because it failed. It was sidelined because screens arrived with better marketing. It is coming back, in Finnish classrooms, in Swedish preschools, in Australian playgrounds, in university exam halls, not out of nostalgia, but because 30 years of evidence have converged on an uncomfortable truth: some things, it turns out, require your full attention, your physical hand, and the irreplaceable cognitive effort of a human being working without a shortcut.
That is not a retreat. That is a reckoning. And the only question left worth asking is whether the rest of the world will get to benefit from it before they have to discover it for themselves.
SERIES COMPLETE
Part I: From Ed-Tech Enthusiasm to De-Digitalisation | Part II: Phones, Pens & Early Literacy | Part III: Attention, Algorithms & Adolescents | Part IV: Universities, AI & the Handwritten Exam | Part V: Who Is Going to Tell the Rest of the World?
New kid on the block – AI drug prescriber from the US
Artificial intelligence (AI) in healthcare is here to stay, a well-recognised development over the last decade or so. AI has now progressed to the point of executing quite a few tasks and manoeuvres that were once the sole duties of doctors. Certain AI programmes are now designed to make tricky diagnoses, offer mental health counselling, detect drug interactions, read and diagnose images, forecast outcomes, and review scientific articles, to name a few of their capabilities. As the aptitudes of AI increase, the roles of doctors are likely to change. In the future, there is a real possibility that physicians will increasingly be placed in supervisory roles in semi-autonomous systems, retaining responsibility but with reduced independence.
Philosopher Walter Benjamin, in the 1930s, wrote that photography and cinema would have a telling effect on paintings and painters. It was argued that the introduction of visual images would render painting and painters quite obsolete. Many belittled the artistic value of photographs, just as today, many ask whether AI can truly understand illness or empathise with discomfort. The opponents of photography theorised that original works of art, such as paintings, had a so-called aura and that there was something special about an original artwork compared to a reproduction as a photo image, and that the painting echoed its singular history and unique trajectory through time, space, and social meaning.
Today’s doctors have something comparable. Their professional authority was grounded in their unique training, the practical wisdom that they had accrued, their face-to-face presence with patients, and their nuanced clinical judgment. Like an original painting, medical expertise appeared singular and inseparable from the clinician who exercised it rather than from the tools or institutions that supported the physician’s practice.
Now enters the latest AI initiative in healthcare. As documented in the Journal of the American Medical Association (JAMA) on the 13th of April 2026, it is the very first AI drug prescriber. It originated in the state of Utah in the United States of America, the 45th state admitted to the Union, on the 4th of January 1896, and well known for its unique geography, including the Great Salt Lake and its “Mighty 5” national parks: Zion, Bryce Canyon, Arches, Capitol Reef, and Canyonlands.
In January 2026, the State of Utah publicised a first-of-its-kind partnership with an AI company to develop an AI-based programme to prescribe medications without physician involvement. The AI prescriber package, sold by the company Doctronic, is claimed to conduct a “comprehensive medical assessment” that “mirrors the clinical decision-making process a licensed physician would follow”. Originally intended to focus on prescription renewals, the software is designed to prescribe almost 200 drugs, including corticosteroids, statins, antidepressants, hormones, and anticoagulant agents. It has the potential to develop into an autonomous system that could even issue original prescriptions without the involvement of doctors.
There are perceived advantages to AI prescribing in a world facing shortages of primary care physicians, as well as certain specialists. The public health goal is to make sure that patients have access to safe, effective drugs and continue receiving them for as long as it is appropriate. There are documented scientific studies in Western countries on non-adherence, failure to take the drugs of a first prescription, and failure to get refill prescriptions. True enough, AI could reduce pervasive medication errors, enhance process efficiency, and free physicians to focus on complex diagnostic tasks or human-to-human interactions.
Yet for all that, technology-driven revolutions can also cause damage, create waste, and even destabilise the medical connection. They could reduce patient-clinician encounters, substantially narrowing the opportunities for physicians to spot other problems and for patients to raise anxieties and ask questions. Doctors have to go through a rigorous process of training and demonstration of clinical fitness before being allowed to practise medicine. AI prescribers face no equivalent safety process. AI companies generally do not openly reveal the precise operational details of the software’s abilities to make medical decisions. In the Utah deal, only generalisations were offered, including that the AI prescriber is “trained on established medical protocols” and that its algorithm continues to improve through “feedback loops”. However, these are far from the detailed guarantees that the training of a physician offers.
In the American System of Governance, most states have long maintained foundational laws for dispensing medicines, positioning licensed physicians and pharmacists as essential caretakers and even as gatekeepers. Federal Law requires that any drug that “is not safe for use except under the supervision of a practitioner licensed by law” must be dispensed only “upon a written prescription of a practitioner licensed by law“. AI prescribers are not licensed “practitioners” of medicine, and here, Utah has waived state requirements. It has waived State Laws for businesses with novel ideas deemed potentially beneficial to consumers.
Under the main FDA statute, an AI prescriber comes under an “instrument, apparatus, implement, or machine clearly intended for use in the cure, mitigation, treatment, or prevention of disease,” which makes it an FDA-regulated medical device. The 21st Century Cures Act of 2016 created exemptions for software involving administrative support, general wellness, or electronic record storage. For clinical software, the FDA has generally exercised enforcement discretion only for tools that aid physician decisions. By design, AI prescribers remove the physician, meaning that FDA oversight is required.
However, in the Utah deal, the company has apparently not attempted to approach the FDA about the technology, thereby working on the presumption that the FDA does not regulate the practice of medicine. True enough, Federal Law and the FDA itself express that the FDA does not regulate the practice of medicine. However, Federal Law also emphasises that medical devices and drugs must be legally sold and used within a legitimate patient-clinician relationship. Federal Law does not permit the replacement of physicians with unlicensed computers.
Compounding the conundrum, the current political administration appears to be disregarding some of this federal oversight. Since its 2025 inauguration, the executive branch has rescinded previous AI governance orders, encouraged the removal of policies that might impair innovation, and issued an executive order aimed at reducing federal funds for states that strictly regulate AI. The US Commissioner of Food and Drugs has clearly emphasised the need for AI innovation. Given this anti-regulatory environment for AI, the prospect of federal intervention against initiatives like AI prescribers appears slim.
As federal and state regulators retreat, private parties have stepped in. The Joint Commission (TJC), a private, non-profit organisation that functions as the primary accrediting body for healthcare organisations, recently released non-binding guidance urging healthcare organisations to establish internal AI governance structures and rigorously measure outcomes. The success of AI prescribers will ultimately depend on the acceptance of health systems, which should demand robust evidence of safety and effectiveness, optimally in the form of clinical trials.
Tort law, a branch of civil law that deals with public wrongs such as situations where one person’s behaviour causes some form of harm or loss to another, remains a potential avenue for addressing patient harm because Utah’s agreement leaves such remedies intact. However, injured patients face significant hurdles. Courts will have to determine whether AI could be held to the same standard of care as a human physician. A product liability lawsuit would typically require a plaintiff to show that there was a reasonable alternative design, a challenge for AI black-box technologies. Furthermore, companies might argue that patients “assumed the risk” of using the AI prescriber. However, that is not a complete defence.
AI prescribing would be safest under concurrent state and federal oversight. Yet Utah has granted a state waiver, and FDA compliance has not been demonstrated. Other companies may take the lesson that they can bypass federal safety standards, and they may race into the market to ensure they are not left behind.
Some precedents urge caution. The FDA fell behind in regulating flavoured e-cigarettes, which are now ubiquitous and have contributed to a youth e-cigarette epidemic that has even reached Sri Lanka. The sheer scale of the unauthorised market and the subsequent legal tactics used by tobacco companies turned premarket requirements into a mere technicality. If AI prescribing becomes the industry standard before safety and liability frameworks are established, the power problem may render future regulation infeasible.
Although AI offers the promise of increased efficiency and expanded access, the evasion of legal obligations by early movers raises profound concerns. The company that is marketing the AI Prescriber is operating in a unique legal “grey zone” that has sparked intense debate among regulators and medical associations.
Incorporating AI into modern health care must be evidence-based and responsible. Physicians and health systems should insist that AI technologies not be allowed to bypass the long-standing and proven legal guardrails governing medical products. That axiom should apply not only to Western nations but to the entire world.
by Dr B. J. C. Perera
MBBS(Cey), DCH(Cey), DCH(Eng), MD(Paediatrics), MRCP(UK), FRCP(Edin), FRCP(Lond), FRCPCH(UK), FSLCPaed, FCCP, Hony. FRCPCH(UK), Hony. FCGP(SL)
Specialist Consultant Paediatrician and Honorary Senior Fellow, Postgraduate Institute of Medicine, University of Colombo, Sri Lanka.
An Independent Freelance Correspondent.
From the Handbook for Bad Political Appointments
The Geathiswaran Chapter:
Dr. Ganesanathan Geathiswaran, Sri Lanka’s Deputy High Commissioner in Chennai, is in hot water, dragging the Foreign Ministry and the Sri Lankan government with him into a worthless controversy. The affair stands as a classic example of a misplaced political appointment to a sensitive public position, paid for by hapless Sri Lankan taxpayers, and made by a government that came to power promising not to politicise appointments.
Why would a meeting between a Sri Lankan diplomat and a group of fishermen in South India in the last week of March 2026 be controversial? After all, illegal fishing in Sri Lankan waters by South Indian fishermen from the Tamil Nadu area, which negatively impacts the livelihoods of mostly Tamil-speaking Sri Lankan fishing communities, is a perennial problem that neither Sri Lankan nor Indian governments have been able to resolve. This is also a consistent political issue in Tamil Nadu politics. In this context, a Sri Lankan diplomat meeting local fishermen might well be within his job description. But the issue is how and where such a meeting should take place. The bottom line is that it should not be a public event.
Speaking to The Hindu on 5 April 2026, Geathiswaran insisted his presence at the meeting was a “routine visit” and that the event was not organised by any political party. He also said, “I’m not here to do politics” and “I have nothing to do with politics.” He further insisted, “I did not take part in any political campaign. It was in an open area along the seashore. The meeting was not on a stage and in a public area.” These utterances show Geathiswaran’s naivety and his woeful lack of experience and understanding of the nature of politics in the region where he is our country’s chief diplomat.
Be that as it may, let us look at the optics and substance of the said event. According to information circulating in the media in both Sri Lanka and India, the Deputy High Commissioner attended a meeting with local fishermen in Puducherry. It was not a closed-door meeting. It appears the Sri Lankan diplomat was invited to the event by, or the event was coordinated by, Jose Charles Martin, the leader of the newly formed political party, Latchiya Jananayaga Katchi (LJK). Though launched only in 2025, the LJK has been making inroads into Tamil Nadu politics, bankrolled largely by the business interests of Martin’s father, the well-known lottery tycoon, Santiago Martin. The LJK joined the BJP-led NDA in the ongoing Puducherry Assembly Elections of 2026. Moreover, the photographs in circulation show the presence of several BJP politicians, including V. P. Ramalingam, the BJP’s Puducherry president and a candidate in the Raj Bhavan constituency.
Members of Martin’s family are craftily aligned with different Tamil Nadu political formations. Jose Charles Martin himself is contesting the Puducherry electoral area as a BJP ally, while his mother is contesting from the AIADMK, and his brother-in-law is contesting as a candidate of the Tamilaga Vettri Kazhagam (TVK) party.
Therefore, Geathiswaran’s assertion that the event was not organised by a political party is blatantly false. Nor does an event become non-political merely because it lacks a stage, any more than a stage confers political character merely by its elevation. It is unacceptable that a diplomat hand-picked by the Sri Lankan President for the important station of Chennai, an appointment that deprived a senior career diplomat with years of experience and an awareness of political nuance and optics of the post, should be this naïve.
It is in this context that Pawan Khera, a senior leader of the Indian National Congress, complained in an X post on 4 April, tagging the Indian External Affairs Minister and noting that Geathiswaran’s participation in the meeting was “a gross violation of the 1961 Vienna Convention on Diplomatic Relations”, according to which “diplomats ‘have a duty not to interfere in the internal affairs of that State.’” He also noted in his post that the diplomat was invited by the leader of the LJK and referred to the presence of senior BJP politicians. Leaving aside the overemphasis on the Vienna Convention, which in this instance makes little sense, the issue at hand is the complete lack of common sense on the part of the Sri Lankan diplomat that allowed this controversy to arise in the first place. Despite his insistence that he does not engage in politics, which in this case is likely true, this was very clearly a political event, politically conceived, perceived and packaged, organised by a political party, and conducted in the presence of allied politicians contesting a local election. As a foreign diplomatic representative, Geathiswaran should have had the cerebral wherewithal to make that distinction, or at least to seek guidance from his superiors at the Foreign Ministry in Colombo.
Diplomats need not shy away from controversy if it makes sense and benefits the nation. But the incident under reference is purely nonsensical from any perspective. This brings me back to Geathiswaran’s appointment as Sri Lanka’s Deputy High Commissioner in Chennai itself. What unique experience did he bring to the post? Of course, he is Tamil-speaking. So are hundreds of thousands of other citizens of the country, including potentially competent, well-trained, intelligent and experienced career diplomats. I am not saying that political appointments are necessarily bad, though they are hardly ideal unless they bring to the service expertise that the Foreign Service itself lacks. But what quality or qualification does Geathiswaran possess for the position that is lacking in a career foreign service officer?
Does he bring access to segments of the Tamil Nadu political landscape that no one else has? If so, should such connections across the entire political spectrum not have prevented this controversy from arising in the first place? In short, he brings absolutely nothing to his office or to the country he represents. Nor does he have any diplomatic, public sector or private sector experience that would have injected sense and nuance into the present posting. His only qualification is a close political connection to the NPP through family.
This fiasco brings to mind some ideas I presented in 2024 in the government’s own newspaper, the Observer, two weeks before the NPP government was established and about one month after President Dissanayake assumed office. Since those conditions remain valid and the present incident raises the same alarm I raised then, they are worth reflecting on yet again:
“During the last three decades, particularly during the Rajapaksa administration, Sri Lanka’s Foreign Service saw a significant nosedive … In real terms what this means is, the Foreign Service has been encroached by individuals purely based on their political and nepotistic connections, with little or no regard for requisite qualifications, expertise or experience. This is observed not only at ambassadorial level, but also right down to the junior levels in our overseas missions … The main reason for the sorry state of the Sri Lanka Foreign Service is that it has been problematically and parochially politicised over a long period of time, without any pushback … Political appointments are a serious problem. Due to the appointment of completely unqualified individuals on political patronage, there are very few intelligent and well-trained personnel in our embassies in the major cities of the world who are able to proactively work in the country’s interest, when problems arise at the global level. Furthermore, it is also not apparent if there are officials in the Ministry who can advise their unenlightened political superiors without fear and stand their ground on principle. This situation has come about as a matter of simple personal survival and bread-and-butter purposes, owing to which both the larger interest of the Service and self-respect of officers have been clearly compromised.”
Is this not what the Chennai incident also indicates? That Geathiswaran was a wrongful appointment is one matter. But it also appears that he did not even have the common sense to seek advice before the meeting in Puducherry, or that such advice was simply not forthcoming or heeded, as political appointees are generally considered a know-it-all bunch who have the ears of the political hierarchy and are therefore above the norms and regulations that apply to mere career officials.
For many of us, the advent of the NPP to power signified the dismantling of the culture of political patronage in which diplomatic postings were rewards for loyalty and friendship. Yet it took the present government less time than its predecessors to go against its own repeatedly stated pre-election positions and to stuff the Foreign Service with incompetent individuals. The present fiasco, authored by one of these appointees, exemplifies the consequences of this continuing malpractice.
Let me leave readers and government apologists with the words of Tom Nichols, former professor at the U.S. Naval War College about Trumpian ambassadorial appointments, as this applies to our country too: “[With some of his ambassador choices], Trump has elevated diplomatic incompetence to an art.”
Sri Lanka just might outdo the mighty US President on this score.