
Foreign News

Belgian prince loses bid for benefits on top of £300k royal allowance


Prince Laurent and his British-born wife, Princess Claire of Belgium [BBC]

A Belgian prince’s attempt to claim social security benefits on top of his six-figure royal allowance has been rejected by a court.

Prince Laurent – the younger brother of King Philippe – received €388,000 (£295,850; $376,000) from state funds last year but said that his work entitles him and his family to social security.

He had argued that he was partly self-employed because of the duties he carries out as a royal, as well as running an animal welfare charity for the past decade.

Laurent, 61, said he was acting out of “principle” rather than for money. The court disagreed.

“When a migrant comes here, he registers, he has a right to social security,” he told Belgian broadcaster RTBF.

“I may be a migrant too, but one whose family established the state in place.”

But on Monday a court in Brussels turned down Laurent’s request on the grounds that the prince can be considered neither self-employed nor an employee.

However, according to broadcaster VTM the judge acknowledged that the prince should actually be entitled to a pension – but said gaps in legislation made that impossible and called for the law to be amended.

His lawyer, Olivier Rijckaert, told Belgian newspaper Le Soir that Laurent’s request had not been based on a “whim” and insisted on its symbolism, saying that social security is “granted by Belgian law to all residents, from the most deprived to the richest”.

Mr Rijckaert also said that most of the prince’s allowance is spent on his assistant’s salary and various travel expenses.

This means Laurent is left with about €5,000 (£4,300; $5,500) a month but no social security benefits, such as the right to claim back some medical expenses.

The prince – who has three adult children with his British-born wife, Claire Coombs – has also expressed concern over his family’s wellbeing, since the royal allowance will be cut when he dies.

Laurent took legal action against the Belgian state after his application for social security was refused. A first hearing was held in November 2024.

According to RTBF, the prince and his legal counsel have not yet decided whether to appeal the court’s decision.

Laurent, who is 15th in the Belgian line of succession, is no stranger to controversy and is sometimes termed the prince maudit – the “cursed prince” – in Belgium.

In 2018, the Belgian federal parliament voted to dock his monthly allowance for a year after he attended a Chinese embassy reception without government permission, in full naval uniform.

He has also racked up several speeding fines and has been criticised for attending meetings in Libya when the late Muammar Gaddafi was still in power.

[BBC]





Trump cancels US envoys’ trip to Pakistan for talks on Iran war


President Donald Trump cancelled a planned trip by US officials to Pakistan for talks on the Iran war on Saturday, shortly after Tehran’s delegation had left Islamabad.

The US president said special envoy Steve Witkoff and son-in-law Jared Kushner would be wasting “too much time”, adding that if Iran wanted to talk “all they have to do is call”.

Earlier, Iranian Foreign Minister Abbas Araghchi held talks with mediator Pakistan, saying afterwards he had shared Iran’s position on ending the war but was yet to see whether the US was “truly serious about diplomacy”.

Diplomatic efforts have stalled despite Trump’s extension of a ceasefire that had been due to expire on 22 April to allow talks to continue.

Both sides have been locked in a standoff over the Strait of Hormuz – with Iran restricting passage through the key shipping route after the US and Israel began strikes in February – as well as over Tehran’s nuclear ambitions.

The US has since increased its naval presence in the strait – through which roughly a fifth of the world’s oil supply passes – to block Iranian oil exports.

The White House had said the Iranians “want to talk” when the trip was announced on Friday, but Iran said there were no plans for a direct meeting.

On Saturday, Trump said the ceasefire would hold, despite fading hopes of another round of face-to-face talks.

[BBC]



AI chatbots could be making you stupider


As large language models take over more and more cognitive tasks, researchers are warning this mental outsourcing comes with a cost.

When research scientist Nataliya Kosmyna was looking for interns, she noticed that the cover letters she received were suspiciously similar. They were long and polished, and after the introductions would often jump to an abstract, arbitrary connection to her work.

It was obvious to her that applicants were using large language models (LLMs) – a form of artificial intelligence that powers chatbots such as ChatGPT, Google Gemini and Claude – to write the letters.

At the same time, during lessons on campus at Massachusetts Institute of Technology (MIT), Kosmyna, who studies the interaction between humans and computers, noticed that numerous students were forgetting content more easily compared to a few years ago.

With the increasing reliance on LLMs, she had a hunch that this could be affecting her students’ cognition and sought to understand more.

The concern that researchers like Kosmyna have is that if we become too reliant on AI, it could affect the language we use and even our ability to do basic cognitive tasks. There is now a growing body of research suggesting that this “cognitive offloading” to AI can have a corrosive effect on our mental abilities. The consequences could be alarming and may even contribute to cognitive decline.

“The ChatGPT group showed notably less brain activity – it was reduced by up to 55%”

It’s well known that the tools we use can change how we think. With the advent of the internet, for instance, answers that once required deep research could be found by plugging a simple query into a search box. As the use of search engines increased, research found we became less likely to remember details – something dubbed “the Google effect”. (Some argue, however, that the internet also serves as an external memory system that frees up our brains for other tasks.)

But there is now growing alarm that as we offload even more of our thinking to LLMs and other forms of AI, the effects on our memories and ability to solve problems could get worse. Artificial intelligence tools can write convincing poetry, give financial advice and provide companionship. Students are increasingly outsourcing their own work to AI tools as well.

Studies have already shown that young people might be particularly vulnerable to the negative effects that using AI can have on key cognitive skills like critical thinking. Kosmyna, however, wanted to dig deeper into the potential effects.

Reduced mental effort

She and her colleagues at MIT Media Lab recruited 54 students to write short essays and split them into three groups. One was instructed to use ChatGPT. A second could use Google search, with AI-generated summaries turned off. The third didn’t use technology. Each student’s brainwaves were measured while they worked.

The essay topics were deliberately open-ended, meaning little research was needed for the task, with prompts including questions around loyalty, happiness or our daily life choices.

The results haven’t been published in a scientific journal yet, but they were nonetheless eye-opening, according to Kosmyna. Those who used their own minds had brains that were “on fire”, showing widespread activity across many regions, she says. The search engine-only group still showed strong activity in the visual parts of the brain, but the ChatGPT group showed notably less brain activity – it was reduced by up to 55%.

“The brain didn’t fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information,” says Kosmyna.

ChatGPT also affected people’s memories. After submitting their essays, people in the AI group were unable to quote from their essays, and several felt they had no ownership over the work. Other studies have also shown that people become less able to retain and recall information when they use AI tools such as ChatGPT.

While the findings are still undergoing peer review, they echo those from other studies. One study by researchers at the University of Pennsylvania suggests that some people undergo something they term “cognitive surrender” when using generative AI chatbots. This means they tend to accept what the AI tells them with minimal scrutiny and even allow it to override their own intuition.

Similar effects can be found outside the world of AI chatbots too – even in life-or-death situations. A recent multinational study team found that medical professionals who used an AI tool to screen for colon cancer for three months were subsequently worse at spotting the tumours without it.

Researchers have growing concerns about the harms that rapid adoption of AI might be causing (BBC)

Outsourcing work to AI also risks losing much of the creativity that produces original work, warns Kosmyna. The essays that students in her study wrote with ChatGPT looked very similar and were described by the teachers marking them as “soulless”, lacking originality and depth, Kosmyna says. “One of the teachers asked if students were sitting next to each other because the essays were so similar.”

While studies such as these illustrate the short-term effects LLMs can have on the brain, the long-term impacts are far less clear. The study by Kosmyna and her colleagues provides a glimpse. Four months after the initial study, they asked the students to write another essay, but this time those who had used ChatGPT were told to work without LLM support. The neural connectivity in their brains was lower than in those who switched the opposite way, perhaps indicating that they had not engaged with the topics properly in the first place.

Cognitive decline

LLMs can be a positive tool to aid thinking, says computational neuroscientist Vivienne Ming, author of Robot Proof – but only if we don’t rely on them to the point of outsourcing our mental tasks entirely. She’s concerned, though, that this is not how most people interact with the technology.

Her reasoning comes from research she conducted for her book, during which Ming asked a group of students at the University of California, Berkeley to predict real-world outcomes, such as the price of oil. She found that the majority of participants simply asked AI and copied the answer.

She measured their brains’ gamma wave activity – a marker of cognitive effort – and found very little activation. Again, her research is yet to be published, but Ming worries that if her findings are borne out in further studies, they could have long-term implications. Other research, for example, has linked weak gamma wave activity to cognitive decline later in life.

“That’s really worrying,” Ming says. “If that is a natural mode for people to interact with these systems – and these are smart kids – that’s bad.” Deep thinking, she says, is our superpower. “If we don’t use it, the long-term implications for cognitive health are pretty strong.”

That’s because relying on LLMs requires very little cognitive effort, Ming adds – and sustained cognitive effort is exactly what a healthy brain needs.

A small subset of participants, though – fewer than 10% – worked differently, using AI as a tool to gather data that they then analysed themselves. These individuals made more accurate predictions than other participants and showed stronger brain activation too.

For long-term brain health we need to continue to challenge ourselves

Almost two decades ago, Ming predicted that within 20 to 30 years we would see a statistically meaningful rise in dementia rates directly related to our overreliance on Google Maps. “I meant it to be provocative,” Ming says. “If you don’t have to think about navigating then there’ll be some detectable effect.”

While we don’t have data on this exact prediction, the increased use of GPS has been linked to worse spatial memory over time, according to one study of 13 people conducted over three years. And poor spatial navigation may be a potential predictor of Alzheimer’s disease, according to another study.

It’s clear that the more active our brain is, the more protected it is from cognitive decline. LLMs then, Ming says, could not only reduce creativity but could harm cognition and potentially increase the risk of dementia.

As AI tool use increases, we need to work with it in a way that benefits us rather than harms us. Ming suggests that ultimately, the goal could be a form of “hybrid intelligence” where humans and machines “do the hard stuff” together. By this she means we need to think first and use tools to challenge us later, rather than simply letting them answer questions for us. Kosmyna agrees and suggests learning subjects without AI tools first to build a foundation and then think about using LLMs.

Ming recommends using what she calls the “nemesis prompt” to challenge your own thinking. It works by prompting an AI to act as a “lifelong enemy” or nemesis, then ask it to explain in detail why your ideas are wrong and how you can fix them, forcing you to defend and refine your arguments rather than simply accepting the answers it provides.

Another technique she suggests is prioritising “productive friction”: asking the AI to provide only context and pose questions, rather than supplying answers. When she tested this by fine-tuning an AI bot not to give answers, she found participants were more engaged.

Ultimately, we should all be wary of cognitive shortcuts, which is something “our brains love”, Kosmyna says. Clearly, for long-term brain health we need to continue to challenge ourselves. Our minds, creativity and cognitive health will benefit in the process.

[BBC]



Trump tells BBC that King’s visit could ‘absolutely’ help repair relations with UK


The King and Queen will travel to the US for a four-day visit beginning on Monday (BBC)

US President Donald Trump has said next week’s state visit from King Charles and Queen Camilla could help repair relations with the UK.

When asked in a phone interview with the BBC whether the visit could help repair the relationship, Trump said: “Absolutely. He’s fantastic. He’s a fantastic man. Absolutely the answer is yes.”

“I know him well, I’ve known him for years,” he said. “He’s a brave man, and he’s a great man. They would absolutely be a positive.”

The president also spoke about his relationship with UK Prime Minister Sir Keir Starmer, who he said could only “recover” if he changed course on immigration.

The King and Queen will travel to the US for a four-day visit beginning on Monday, and will meet with Trump at the White House.

The King will have a private meeting with the president and also deliver an address to Congress.

After two days in Washington DC, they will travel to New York, Virginia and Bermuda before returning to the UK.

The Foreign Office said the trip would mark the 250th anniversary of US independence, and would celebrate a partnership of “shared prosperity, security and history”.

In the five-minute interview on Thursday, Trump was also asked about his relationship with Sir Keir.

The two leaders have appeared at odds over the war in Iran, and the prime minister has faced mounting pressure over his decision to appoint Lord Mandelson as UK ambassador to the US.

In a post on Truth Social on Monday, Trump said Lord Mandelson was “a really bad pick” but the prime minister had “plenty of time to recover”.

When asked what he meant by that post, Trump said: “If he opened the North Sea and if his immigration policies became strong, which right now they’re not, he can recover, but if he doesn’t, I don’t think he has a chance.”

Trump has repeatedly called on the UK to increase oil and gas extraction in the North Sea.

“I make my decisions based on what’s in the British national interest and not what other people say or do,” Sir Keir said while talking to broadcasters about the president’s comments on Thursday.

“That is why I took the decision that we would not be dragged into the war in Iran,” he said. “I’m not going to be diverted or deflected from that by what anybody else says.”

[BBC]
