
Possible role of Artificial Intelligence in elections

Image courtesy the USC Annenberg Center on Communication Leadership and Policy

by Prof. Janendra De Costa, University of Peradeniya (janendrad@gmail.com)

Elections are an essential component of a functioning democracy. It is presidential election time in the US, as it is in Sri Lanka. This week, Donald Trump, the Republican Party candidate and former President, alleged that the campaign of Kamala Harris, the Democratic Party candidate and current Vice-President, had used artificial intelligence (AI) tools to falsely inflate the crowd size (reported to be around 15,000) at Detroit Airport when she arrived there for an election rally in the State of Michigan. However, the media quickly fact-checked the allegation using video footage and on-site reporters and found it to be false: the reported crowd size was accurate. This incident brings into sharp focus the role of AI in contemporary elections, which is raising concerns in many parts of the world, especially in countries due to hold elections in the near future. It is estimated that about four billion people will go to the polls this year.

Like any powerful technology, AI is a tool that can be used for good as well as for ill. Recently, there have been a few articles, and an editorial, in the prestigious science journal Nature on how the use of AI could influence the outcome of elections. For example, the subtle use of AI on images ('softfakes') can alter a candidate's expressions to make him or her more or less likeable to voters. AI can generate images of promised infrastructure projects that appeal to the perceived expectations of the electorate. Less subtle uses of AI ('deepfakes') can generate authentic-looking speeches, videos and quotes attributed to candidates, often distributed widely via social media, to attract voters to, or dissuade them from, a particular candidate. Campaign content disseminated through social media platforms is particularly vulnerable to AI-generated fake news and misinformation, especially when voters are less knowledgeable about such technology. This is particularly true in countries such as Sri Lanka and India, where a substantial percentage of voters are from remote rural areas. While these voters may have access to social media platforms such as WhatsApp, YouTube, TikTok and X (formerly Twitter), they would, in all probability, be unable to distinguish AI-generated fake or altered content from genuine, authentic content.
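
As a purely illustrative aside (not drawn from the Nature articles): one of the simplest first-line checks fact-checkers can run on a suspected altered image is to compare a 'perceptual hash' of the suspect image against that of a verified original; a large bit-difference between the two hashes suggests the images differ substantially in content. The sketch below implements a basic average hash using the Pillow imaging library; the file names are hypothetical.

```python
from PIL import Image  # Pillow: pip install Pillow


def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple perceptual (average) hash of an image.

    The image is shrunk to hash_size x hash_size grayscale pixels;
    each bit of the hash records whether a pixel is brighter than
    the mean. Visually similar images yield similar hashes.
    """
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits on which two hashes disagree."""
    return bin(h1 ^ h2).count("1")


# Hypothetical files: a verified original photo and a suspect copy
# circulating on social media.
original = average_hash("rally_photo_original.jpg")
suspect = average_hash("rally_photo_suspect.jpg")

# As a rough rule of thumb, a distance below ~10 (of 64 bits) suggests
# the same underlying scene; a larger distance flags the suspect image
# for closer manual inspection.
print("Hamming distance:", hamming_distance(original, suspect))
```

Such a comparison can only detect divergence from a known original; wholly AI-generated images, with no original to compare against, require other forensic methods.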

Fact-checking of speeches made during election rallies, debates and discussions is not as strong in countries such as Sri Lanka as it is in the US. When the mechanisms for independent verification are weak, the tendency for politicians to make false claims, either about their own achievements or about the blunders and shortcomings of their opponents, increases. This means that a strong, free and independent media is an essential requirement for a democracy to function effectively. A mature democracy requires that the politicians who seek the people's mandate give them correct facts and figures, so that the people can make informed decisions. Even with a strong media, voters in past US elections have been swayed, and outcomes decided, by not-so-accurate statements from candidates and their campaigns about their rivals. What chance would voters have in a country like Sri Lanka, where the media is not so strong and fact-checking of politicians' statements and campaign content is weak to non-existent, of distinguishing AI-generated misinformation and fakes from authentic content? Even when misinformation and fake news are recognised for what they are, the intended damage may already have been done. Recognising AI-generated fakes often requires technical know-how that may be lacking in a large section of the media. When such content is spread through social media, there are very few barriers, checks and balances in the system to prevent the damage. Apart from resorting to costly and lengthy litigation, there are few accountability mechanisms to punish the wrongdoers.

One of the few scientific studies to quantify the prevalence of AI misinformation in the recently held elections in India was published a few weeks ago in Nature and can be accessed at https://www.nature.com/articles/d41586-024-01588-2.

When trying to obtain a scientifically valid assessment of the extent of AI-generated misinformation and its possible impact, researchers face three major challenges. First and foremost, the technical know-how to identify AI-generated or altered content is still in its infancy. Currently, researchers must manually look for the known signs and discrepancies that are characteristic of AI intervention. Secondly, quantifying the percentage of fake content in the total disseminated content requires the examination of messages that are protected by privacy laws and restrictions; researchers must therefore depend on users who volunteer to donate their social media content as data for research purposes. The third challenge is assessing the extent to which fake content has swayed the opinions and perceptions of its recipients.

The study published in Nature on 06 June 2024 examined 1,858 viral WhatsApp group messages (i.e. those that had been forwarded more than five times) received by a representative (in terms of age, religion and caste) sample of 500 volunteer users in the state of Uttar Pradesh during the three-month period prior to the 2023 state elections, and found that only about 1% of the examined messages contained clear evidence of AI intervention. The proportion remained around the same during the subsequent general election as well. While acknowledging that the problem may not be as widespread as previously thought, the researchers caution that theirs is a very limited study, which may have underestimated the extent of the issue. They further caution that AI technology continues to evolve so rapidly that detecting its intervention may become increasingly difficult, and they advocate increased and continued vigilance while developing countermeasures. These include creating public awareness of, and the capability to detect, AI-generated content, especially among inexperienced users; strategies to watermark such content; and, most importantly, investment in research and development so that technological countermeasures keep pace with the rapidly evolving AI technologies that can be used to spread seemingly authentic fake content and misinformation. Apart from technological solutions and public institutional countermeasures, the media (both print and electronic) has a vital role to play in exposing fake content and misinformation, generated using AI as well as by other methods, within the shortest possible time after its dissemination.
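
To make the study's headline figure concrete, the back-of-the-envelope sketch below (not the researchers' own code) shows how such a prevalence estimate and its statistical uncertainty might be computed from the reported counts, using a standard Wilson score interval. The count of 19 AI-flagged messages is an assumption (roughly 1% of 1,858).

```python
import math


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half_width, centre + half_width


# Counts reported in the Nature study: 1,858 viral WhatsApp messages,
# of which about 1% showed clear evidence of AI intervention.
n_messages = 1858
n_ai_flagged = 19  # assumed: ~1% of 1,858, rounded

low, high = wilson_interval(n_ai_flagged, n_messages)
print(f"Estimated prevalence: {n_ai_flagged / n_messages:.2%} "
      f"(95% CI: {low:.2%} to {high:.2%})")
```

Even this simple calculation illustrates the researchers' caveat: with a sample of this size, a headline figure of 'about 1%' is compatible with a true prevalence anywhere between roughly 0.7% and 1.6%, and a voluntary, single-state sample may not generalise to the country as a whole.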

Suggested Reading:

Editorial (2024) What we do — and don’t — know about how misinformation spreads online. Nature 630, 7-8. https://www.nature.com/articles/d41586-024-01618-z.

Chowdhury R (2024) AI-fuelled election campaigns are here — where are the rules? Nature 628, 237. https://www.nature.com/articles/d41586-024-00995-9.

Ecker U et al. (2024) Misinformation poses a bigger threat to democracy than you might think. Nature 630, 29-32. https://www.nature.com/articles/d41586-024-01587-3.

Garimella K, Chauchard S (2024) How prevalent is AI misinformation? What our studies in India show so far. Nature 630, 32-34. https://www.nature.com/articles/d41586-024-01588-2.

(The author is currently based at the University of Florida, Gainesville, USA, and is a Fellow of the National Academy of Sciences of Sri Lanka.)
