Opinion
Stemming the tide of misinformation
by Ifham Nizam
In an era where misinformation spreads at an unprecedented rate, organisations like DataLEADS are taking proactive steps to address this growing challenge, particularly on social media platforms. Sonia Bhaskar, Programme Head at DataLEADS, an organisation based in India, speaks to The Island about the organisation’s initiatives to strengthen the fight against disinformation and empower communities with accurate information.
“At DataLEADS, we are committed to tackling misinformation and disinformation through a combination of technology, training, and grassroots initiatives,” says Bhaskar. “We believe that authentic information is essential for empowering individuals and protecting the integrity of democratic processes.”
Excerpts of the interview:
Q: At DataLEADS, what are the most effective tools and strategies you employ to tackle the growing issue of misinformation and disinformation, particularly on social media platforms?
A: DataLEADS is a globally recognised, award-winning digital media and tech company leading conversations on the information and AI ecosystem worldwide. At the core of our work lies a profound belief that authentic information is central to human empowerment. We have initiated numerous programmes and key interventions in this direction.
1. Building Fact-Checking Capacities in India
In partnership with the Google News Initiative, we run one of the world’s biggest fact-checking and training networks, the Google News Initiative-India Training Network, which has benefitted hundreds of organisations, local governments, newsrooms, universities and local communities in India. This initiative adopted the training-of-trainers (ToT) model to initially train about 250 journalists, who in turn trained not only journalists in their own newsrooms but also other newsrooms and students of mass communication and journalism across India. So far, over 70,000 journalists and media students at more than 25,000 newsrooms and media schools across 28 states of India have been trained as part of this initiative.
2. Building India’s Largest Media Literacy Network
The problem of misinformation and disinformation is not just a journalism problem; it affects all sections of society and has larger ramifications for democracy and for which sources of information people tap into and trust. This prompted us to create Factshala, a network of trainers from different walks of life, who in turn undertook training in their own networks and communities and reached millions of people across the country, from Tier-2 and Tier-3 cities to villages, building community surveillance and intelligence against misinformation. The initiative has reached more than 66 million people across India in the last five years.
3. Strengthening the fact-checking ecosystem to tackle online election-related misinformation and deepfakes
We are also currently running the Shakti Collective initiative, which has brought fact-checkers and publishers from across India together to address election-related misinformation and deepfakes. It is the biggest collaboration between fact-checkers and newsrooms in India to protect elections from misinformation. Between March and June 2024, this consortium distributed more than 6,600 fact-checks during the world’s biggest election, the General Election in India. This represented a 92% increase in the number of fact-checks published and a 180% increase in regional-language fact-checks, which were amplified in more than 10 languages. The effort also amounted to a fourfold increase in teams actively engaged in countering election-related misinformation.
As part of the Collective, we also had an advisory council for AI and deepfake detection. It brought together some of the best tech minds and academics in the country, a Supreme Court lawyer, and international tech partners with access to tools that facilitate deepfake detection, who also conducted masterclasses and training sessions for Collective members.
Over the years, we have also run specially designed visual workshops and boot camps for media colleagues and newsrooms in India. We are committed to building new competencies, collaborations and networks across the globe to strengthen information resilience and integrity, and to helping communities unleash their creativity at work. Through Asian Dispatch, Global Data Dialogue and the Shakti Collective, we are building new networks and platforms that engage different stakeholders, open new conversations and scale the impact of our work.
Q: AI is often touted as a solution for detecting and combating misinformation. What role do you see AI playing in identifying fake news and deepfakes, and how reliable are these tools in the fight against digital deception?
A: There are no tools, AI-driven or otherwise, into which you can feed information and have them declare it true or false. Tools should be applied to facilitate investigation; fact-checkers and journalists then need to follow due process to verify sources, ask the right questions and, if need be, pick up the phone and make calls. Good old journalism practices are needed more than ever, and the essence of journalism, which is defined by the need to verify everything, must be followed. This holds irrespective of the advent and rise of AI or any other technology in the future.
There are tools being developed for deepfake detection, but they cannot be relied upon completely for accurate results. They have been known to give inaccurate results and can falter when parts of real images are mixed with AI-generated components. The reasons for these errors range from limited datasets and poorly trained data to a lack of diversity in terms of languages, race and ethnicity, or simply inherent biases. These tools are also built, by and large, by tech companies, yet detection tools are playing catch-up with the advances in tools that create AI-generated content, since big tech companies invest more in developing AI tools than in building guardrails and tools to detect their misuse.
Q: What role do you think digital literacy plays in addressing the problem of misinformation? How can organisations, governments, and educational institutions better equip individuals to navigate the digital world responsibly?
A: Misinformation, disinformation, propaganda, false claims and so on cannot be abolished. They have existed in the past and will always be there. What has changed is the ease of creating and disseminating such material, thanks to social media and its ubiquitous presence in everyone’s hands through the proliferation of mobile phones with internet access. So no effort to combat misinformation will succeed without a robust media literacy plan for the masses, one that reaches people of different ages, genders and ethnicities and covers as many languages, regions and socio-economic backgrounds as possible.
The first step in fighting misinformation is to assess the content being consumed, apply critical thinking and verify the information. Given the sheer volume of content being generated online across so many varied platforms, media literacy assumes greater significance: today everyone with a phone is a content creator, and while more content is available than ever, quality checks are missing. The rise of social media has come at a time when traditional sources of credible information are crumbling due to faulty financial models, ownership issues and diminishing press freedom. The erosion of trust in mainstream media is all too real and is increasingly proving problematic in a world where misinformation and disinformation not only spread faster but are also getting easier to produce with AI-powered generation tools. As AI tools evolve, it will become increasingly difficult to distinguish what is real from what is fake.
Raising awareness so that people can not only identify misinformation and disinformation but also verify information and stop its spread will assume greater importance.
Tackling a problem of this magnitude requires a 360-degree approach and effort from all stakeholders, both in developing curricula and in implementing them in a manner that bridges the digital divide to reach everyone, down to the last mile.
Q: Fact-checking has become a vital part of journalism today. What unique challenges do fact-checkers face when dealing with the sheer volume of content online, and how can AI help or hinder their work?
A: Fact-checkers face a problem of reach. They depend for the distribution of their fact-checks on the very platforms that spread misinformation. They also face the issue of scale, and may lack the resources to scale up operations in different languages and establish a presence on the various platforms, past and present. There is also the challenge of making fact-checks available in different formats, from articles to vertical videos such as YouTube Shorts or Instagram Reels.
The other big challenge is the ability to cover all the misinformation in circulation and to prioritise what to fact-check. Most fact-checkers in India, especially independent ones that are not part of a larger newsroom or organisation, struggle to find financial avenues to sustain and grow their operations, and they lack the monetary muscle to invest in R&D, or even AI, to increase their productivity and efficiency and scale up their fact-checking and verification work.
Q: What do you consider the biggest strengths of AI when it comes to improving the efficiency and accuracy of journalism? Many people still fear the potential of AI to replace human jobs or make unethical decisions. What do you think are the biggest misconceptions people have about AI, and how can we educate the public on its potential benefits and risks?
A: In an era of the resource crunch that most newsrooms face, AI can help free up resources by taking over repetitive, mundane tasks that currently require manpower, reducing the time taken to produce news. These could be functions that can be templatised, such as stock market reports, weather reports and game scores.
AI can also facilitate the distribution of news by personalising dissemination based on readers’ preferences (for example, through personalised newsletters) or even maximise ad revenues through contextual ad placement. It can be used to scrape comments and ease the work of sorting and replying to them. It can also facilitate SEO, transcription, subtitling and translation (depending on the tool’s language capabilities).
AI tools that can generate images or videos from text prompts can also be deployed strategically for innovative storytelling. But newsrooms need guidelines specifying the dos and don’ts of ethical and responsible AI use. The most important factor is ensuring that no step in the workflow that involves taking decisions or publishing news to the public domain is left to the machine; the steps where human intervention is crucial need to be well defined, and this is critical for the responsible deployment of AI. So, in that sense, training and upskilling of newsroom staff need to be undertaken to ensure a future-proof newsroom where staff are ready for the new jobs that are created as some of the old functions are taken over by machines.