News

FUTA demands abolition of KNDU Bill

The Federation of University Teachers Associations (FUTA) has urged the government to drop plans to introduce the General Sir John Kotelawala National Defence University (KNDU) Bill in the guise of an ‘Amended Bill.’

The FUTA alleged that the changes were cosmetic, that the substance of the Bill remained unchanged, and that it posed a grave threat of militarising higher education in Sri Lanka.

The FUTA statement: “We also note that the government is attempting to push through this highly controversial bill at a time when the country is facing its gravest economic and governance crisis since independence and call upon the government to immediately halt this process and withdraw the Bill. The KNDU Bill is hardly a policy priority at this moment when the country’s economic life has all but ground to a halt and people and industries are struggling with 8-hour power cuts, fuel and gas shortages and the concomitant disruption of livelihoods.

The amended Bill continues to allow civilian education at KNDU, and it allows education in all disciplines without restricting the functions of the University to defence and military studies. It also facilitates a fee-levying higher education space that can expand limitlessly and create a parallel, highly unregulated higher education system outside the state university system, in which student admission is based not on merit but on the financial strength of the students’ social background. The so-called ‘amended Bill’ retains a highly militarized governance structure from top to bottom: the Board of Governors, the top governing body of the university, is full of military officers, including the topmost military officers of the country – the Secretary to the Ministry of Defence, the Chief of Defence Staff, the Commander of the Sri Lanka Army, the Commander of the Sri Lanka Navy and the Commander of the Air Force. The Vice Chancellor also remains a senior officer of the armed forces. Under the ‘amended Bill’ there continues to be a Headquarters, the high-level operational body of the KNDU, filled with military personnel. This is a body placed above the Council, which in a normal civilian university is the supreme administrative forum.

The ‘amended Bill’ has further introduced a highly controversial amendment to include the Chairman of the UGC as a member of the Board of Governors (prior to the amendment it was restricted to a nominee of the UGC). While the representation of the UGC – the institution that carries the prime responsibility of regulating and safeguarding the interests of the state university system – within the Board of Governors of the KNDU already leads to a conflict of interest, naming the Chairman of the UGC as a member of the Board of Governors of the KNDU exacerbates this conflict. As those who are familiar with the issues within the existing KDU are aware, the presence of the Chairman of the UGC on the existing Board of Management of the KDU has made him complicit in decisions that seriously undermine the state university system, which has even led to court cases where he is a respondent. His presence within the Board of Governors of the proposed KNDU – with greater powers to dominate and expand this military-led system of higher education – can pose a serious threat to the interests of the existing state university system. As past experience has shown, the UGC Chairman has been unable to represent the interests of the state university system within the Board of Management of the KDU, but has only served to undermine those interests while facilitating the expansion of the KDU and its military model of education.

FUTA therefore urges the government to unconditionally withdraw the KNDU Bill immediately. As we have explained in detail in a number of our previous communications, FUTA unreservedly rejects civilian education within a highly militarized structure such as the KNDU. Civilian education should be given within the state university system, and the vast amount of money channeled to KDU/KNDU should be reallocated to the state university system to facilitate its expansion, so that civilian students earmarked for KDU/KNDU can be absorbed into the existing state university structure. University education is a civilian affair and all across the democratic world universities are spaces that produce independent, free-thinking and creative citizens. A military-led education model will never achieve this and will only further contribute to tarnishing Sri Lanka’s already battered democratic credentials in the global community.”

We would like to remind the government that the country is in a serious multidimensional crisis and on the verge of collapse unless urgent remedial action is taken. Rather than trying to re-package controversial bills that were soundly rejected by a vast cross-section of the people of this country, what the government should do is focus on the multiple crises faced by the country and find immediate solutions. FUTA is determined to defeat the KNDU Bill and will take all possible measures to prevent the militarization of higher education in the country and mobilize broad social and political support against this ‘amended KNDU Bill’ unless it is withdrawn immediately.




Foreign News

AI chatbots could be making you stupider

As large language models take over more and more cognitive tasks, researchers are warning this mental outsourcing comes with a cost.

When research scientist Nataliya Kosmyna was looking for interns, she noticed that cover letters she received were suspiciously similar. They were long, polished and after introductions would often jump to an abstract and arbitrary connection to her work.

It was obvious to her that applicants were using large language models (LLMs) – a form of artificial intelligence that powers chatbots such as ChatGPT, Google Gemini and Claude – to write the letters.

At the same time, during lessons on campus at Massachusetts Institute of Technology (MIT), Kosmyna, who studies the interaction between humans and computers, noticed that numerous students were forgetting content more easily compared to a few years ago.

With the increasing reliance on LLMs, she had a hunch that this could be affecting her students’ cognition and sought to understand more.

The concern that researchers like Kosmyna have is that if we become too reliant on AI, it could affect the language we use and even our ability to do basic cognitive tasks. There is now a growing body of research suggesting that this “cognitive offloading” to AI can have a corrosive effect on our mental abilities. The consequences could be alarming and may even contribute to cognitive decline.

“The ChatGPT group showed notably less brain activity – it was reduced by up to 55%”

It’s well known that the tools we use can change how we think. With the advent of the internet, for instance, answers that once required deep research could be found by plugging a simple query into a search box. As the use of search engines increased, research found we became less likely to remember details, something dubbed “the Google effect”. (Some argue, however, that the internet also serves as an external memory system that frees up our brain for other tasks.)

But there is now growing alarm that as we offload even more of our thinking to LLMs and other forms of AI, the effects on our memories and ability to solve problems could get worse. Artificial intelligence tools can write convincing poetry, give financial advice and provide companionship.  Students are increasingly outsourcing their own work to AI tools as well.

Studies have already shown that young people might be particularly vulnerable to the negative effects that using AI can have on key cognitive skills like critical thinking.  Kosmyna, however, wanted to dig deeper into the potential effects.

Reduced mental effort

She and her colleagues at MIT Media Lab recruited 54 students to write short essays and split them into three groups. One was instructed to use ChatGPT. A second could use Google search, with AI-generated summaries turned off. The third didn’t use technology. Each student’s brainwaves were measured while they worked.

The essay topics were deliberately open-ended, meaning little research was needed for the task, with prompts including questions around loyalty, happiness or our daily life choices.

The results haven’t been published in a scientific journal yet, but they were nonetheless eye-opening, according to Kosmyna. Those who used their own minds had a brain that was “on fire”, showing widespread activity across many parts of the brain, she says. The search engine-only group still showed strong activity in the visual parts of the brain, but the ChatGPT group showed notably less brain activity – it was reduced by up to 55%.

“The brain didn’t fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information,” says Kosmyna.

ChatGPT also affected people’s memories. After submitting their essays, people in the AI group were unable to quote from their essays, and several felt they had no ownership over the work. Other studies have also shown that people become less able to retain and recall information when they use AI tools such as ChatGPT.

While the findings are still undergoing peer review, they echo those from other studies. One study by researchers at the University of Pennsylvania suggests that some people undergo something they term “cognitive surrender” when using generative AI chatbots. This means they tend to accept what the AI tells them with minimal scrutiny and even allow it to override their own intuition.

Similar effects can be found outside the world of AI chatbots too – even in life-or-death situations. A recent multinational study found that medical professionals who used an AI tool to screen for colon cancer for three months were subsequently worse at spotting the tumours without it.

Researchers have growing concerns about the harms that rapid adoption of AI might be causing (Credit: Getty Images)

Outsourcing work to AI also risks losing much of the creativity that produces original work, warns Kosmyna. The essays that students in her study wrote with ChatGPT looked very similar and were described by the teachers marking them as “soulless”, lacking originality and depth, Kosmyna says. “One of the teachers asked if students were sitting next to each other because the essays were so similar.”

While studies such as these illustrate the short-term effects LLMs can have on the brain, the long-term impacts are far less clear. The study by Kosmyna and her colleagues provides a glimpse. Four months after the initial study they asked the students to write another essay, but this time those who had used ChatGPT were told to work without LLM support. The neural connectivity in their brains was lower than those who switched the opposite way, perhaps indicating that they had not engaged with the topics properly in the first place.

Cognitive decline

LLMs can be a positive tool to aid thinking, but only if we don’t simply outsource our mental tasks to them, says computational neuroscientist Vivienne Ming, author of Robot Proof. She’s concerned, though, that this is not how most people interact with this technology.

Her reasoning comes from research she conducted for her book, during which Ming asked a group of students at the University of California, Berkeley to predict real-world outcomes, such as the price of oil. She found that the majority of participants simply asked AI and copied the answer.

She measured their brains’ gamma wave activity – a marker of cognitive effort – and found very little activation. Again, her research is yet to be published, but Ming worries that if her findings are borne out in further studies, it could have long-term implications. Other research, for example, has linked weak gamma wave activity to cognitive decline later in life.

“That’s really worrying,” Ming says. “If that is a natural mode for people to interact with these systems – and these are smart kids – that’s bad.” Deep thinking, she says, is our superpower. “If we don’t use it, the long-term implications for cognitive health are pretty strong.”

That’s because relying on LLMs requires very little cognitive effort, Ming adds, while sustained mental effort is exactly what’s needed for a healthy brain.

A small subset of participants, though – less than 10% – worked differently and used AI as a tool to gather data that they then analysed themselves. These individuals made more accurate predictions than other participants and showed stronger brain activation too.

For long-term brain health we need to continue to challenge ourselves

Almost two decades ago, Ming predicted that within 20 to 30 years we would see a statistically meaningful rise in dementia rates directly related to our overreliance on Google Maps. “I meant it to be provocative,” Ming says. “If you don’t have to think about navigating then there’ll be some detectable effect.”

While we don’t have data on this exact prediction, the increased use of GPS has been linked to worse spatial memory over time, according to one study of 13 people conducted over three years. And poor spatial navigation may be a potential predictor of Alzheimer’s disease, according to another study.

It’s clear that the more active our brain is, the more protected it is from cognitive decline. LLMs then, Ming says, could not only reduce creativity but could harm cognition and potentially increase the risk of dementia.

As AI tool use increases, we need to work with it in a way that benefits us rather than harms us. Ming suggests that ultimately, the goal could be a form of “hybrid intelligence” where humans and machines “do the hard stuff” together. By this she means we need to think first and use tools to challenge us later, rather than simply letting them answer questions for us. Kosmyna agrees and suggests learning subjects without AI tools first to build a foundation and then think about using LLMs.

Ming recommends using what she calls the “nemesis prompt” to challenge your own thinking. It works by prompting an AI to act as a “lifelong enemy” or nemesis, then ask it to explain in detail why your ideas are wrong and how you can fix them, forcing you to defend and refine your arguments rather than simply accepting the answers it provides.

Another technique she suggests is prioritising “productive friction”: asking the AI to provide only context and to ask you questions, rather than supplying answers. When she tested this by fine-tuning an AI bot not to give answers, she found that individuals were more engaged.

Ultimately, we should all be wary of cognitive shortcuts, which is something “our brains love”, Kosmyna says. Clearly, for long-term brain health we need to continue to challenge ourselves. Our minds, creativity and cognitive health will benefit in the process.

[BBC]

Latest News

Heat Index at Caution Level in the Northern, North-Central, North-Western, Western, Sabaragamuwa and Eastern provinces during the daytime

Warm Weather Advisory
Issued by the Natural Hazards Early Warning Centre
Issued at 3.30 p.m. on 24 April 2026, valid for 25 April 2026.

Heat index, the temperature felt by the human body, is likely to increase to ‘Caution’ level in some places in the Northern, North-Central, North-Western, Western, Sabaragamuwa and Eastern provinces during the daytime.

The Heat Index Forecast is calculated from relative humidity and maximum temperature and represents the condition felt by the human body; it is not a forecast of maximum temperature. It is generated by the Department of Meteorology for the following day using global numerical weather prediction model data.
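As an illustration of how such a "felt" temperature can be derived from those two inputs, the sketch below uses the NOAA Rothfusz regression, a widely used formulation of the heat index. The Department of Meteorology's actual model is not specified in the advisory, so this function and its example values are assumptions, not the Department's method:

```python
def heat_index_f(temp_f, rel_humidity):
    """Approximate heat index (deg F) from air temperature (deg F) and
    relative humidity (%), using the NOAA Rothfusz regression.
    Valid roughly for temperatures of 80 deg F (about 27 deg C) and above.
    """
    t, rh = temp_f, rel_humidity
    return (-42.379
            + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh
            - 0.00683783 * t * t - 0.05481717 * rh * rh
            + 0.00122874 * t * t * rh + 0.00085282 * t * rh * rh
            - 0.00000199 * t * t * rh * rh)

# Hypothetical example: an 84 deg F (about 29 deg C) afternoon at 60%
# relative humidity yields a heat index of roughly 87.5 deg F (about 31 deg C),
# i.e. the air feels several degrees hotter than the thermometer reading.
felt = heat_index_f(84, 60)
```

Note that the regression takes and returns Fahrenheit values; converting the result with `(felt - 32) * 5 / 9` gives the Celsius figure more familiar to local readers.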


The effect of the heat index on the human body is given in the above table, which is prepared on the advice of the Ministry of Health and Indigenous Medical Services.

ACTION REQUIRED
Job sites: Stay hydrated and take breaks in the shade as often as possible.
Indoors: Check up on the elderly and the sick.
Vehicles: Never leave children unattended.
Outdoors: Limit strenuous outdoor activities, find shade and stay hydrated.
Dress: Wear lightweight and white or light-colored clothing.

Note:
In addition, please refer to advisories issued by the Disaster Preparedness & Response Division, Ministry of Health in this regard as well. For further clarifications please contact 011-7446491.

News

Opposition holds NPP Cabinet responsible for coal scam, three times bigger than bond fraud

Prof. G. L. Peiris

The Opposition yesterday called for the entire Cabinet of Ministers to accept responsibility for the coal scam. Addressing the media at the Flower Road office of UNP leader Ranil Wickremesinghe, former Foreign Minister Prof. G. L. Peiris emphasised that Energy Minister Kumara Jayakody’s resignation, in the wake of the damning report issued by the National Audit Office (NAO), has now implicated the entire Cabinet of Ministers.

Prof. Peiris asserted that Jayakody, who had been indicted in the Colombo High Court over alleged corruption, during the Yahapalana administration, stepped down after the NPP failed to suppress the truth on the coal scam.

The ex-Minister declared that Jayakody’s resignation, the first since the formation of the new government with a supermajority in Parliament, was a devastating setback for the current dispensation.

The internationally recognised legal scholar said that a future government would move courts against the entire NPP Cabinet. Referring to the NAO report submitted to Parliament, Prof. Peiris emphasised that there was absolutely no ambiguity as regards the allegations directed at the Energy Ministry. The NAO report proved that the Indian company, Trident Champhar, which won the major contract, didn’t even have the required registration.

Prof. Peiris said that the coal scam was three times bigger than the Treasury bond scams, perpetrated during the Yahapalana time (SF)
