Midweek Review
Artificial Intelligence: Are we getting into it with our eyes open?
by Prof. Janendra De Costa
Senior Professor and Chair of Crop Science, Faculty of Agriculture, University of Peradeniya
Artificial intelligence (AI) seems to be the ‘in-thing’ these days, especially for the President of Sri Lanka, who keeps mentioning it in his speeches as a key ingredient for Sri Lanka to achieve prosperity, both economic and otherwise. Taking the President’s cue, the Minister of Education and Higher Education recently went on record saying that AI will be taught in schools from the lower secondary grades upwards in the near future. The potential of AI for improving the efficiency and effectiveness of activities in a wide range of areas that contribute to overall national development, prosperity and well-being is undisputed. However, treating AI as a ‘silver bullet’ that would cure Sri Lanka of all the complex issues in which it is mired and propel its economy towards development and prosperity is a fallacy that we would do well to avoid. I make no claim to being an expert in AI, and I welcome its introduction to our curricula, at both secondary and tertiary levels, just as I would welcome any other modern advance in Science and Technology (S & T). Nevertheless, the purpose of this article is to focus the readers’ attention on concerns raised by experts about the limitations and pitfalls of adopting AI without being fully aware of its inherent constraints and potential threats. This is especially relevant for Sri Lanka, which has a history of adopting new technologies rather ‘blindly’, without developing a strong foundation to sustain them, and failing with them. In writing this cautionary note, I have drawn heavily from a recent editorial in the prestigious science journal Nature and from recently published papers, views and opinions in highly recognized S & T research journals, which indicate that this is a global issue, likely to affect both the developed and the developing world.
What is artificial intelligence?
In its simplest sense, artificial intelligence employs a computer, or a robot, fed with a series of instructions to carry out tasks that are normally performed by humans. These tasks can range from simple ones, such as writing a letter, to complex functions, such as designing proteins, pharmaceutical drugs or whole experiments, and running laboratories. The capability of AI tools and methodologies to process a quantity of information substantially larger than what an individual human brain (or mind) can handle, and to find the best solution in a given situation (called ‘optimization’ in AI terminology), is claimed as a major advantage of AI. AI tools run on algorithms (series of specific instructions) designed to make decisions and carry out functions as done by humans, but with substantially greater effectiveness and efficiency because of their capacity to overcome the limitations of an individual human brain (e.g. by analysing the outcomes of a wider range of possible scenarios). To enable them to do this, AI tools and their algorithms are ‘trained’ on a sufficiently large set of data (often called ‘big data’), supposedly representing all possible scenarios. For example, by being trained on past data on auction sales of tea in global markets, AI could be used to predict future market trends for Ceylon Tea. This is an example of what is called ‘Predictive AI’. While a competent economist or statistician could do the same task using a reasonably large data set, the argument for using an AI tool is that it can process a much larger, more varied and more complex data set and come up with more precise predictions for a wider range of future scenarios. Recently, a final-year undergraduate of my faculty, under the supervision of one of my colleagues, developed an AI tool that grades big onions into categories with greater precision and efficiency than is currently achieved by traders. The tool was trained on a wide range of images of onions linked to their physical characteristics, such as size, shape and surface properties.
Perhaps the best illustration of the power of AI is the computer that, trained on a multitude of chess games, beat the world chess champion.
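For readers curious about what ‘training on past data’ looks like in its very simplest form, the following sketch fits a straight-line trend to twelve months of hypothetical tea auction prices and extrapolates one month ahead. All the figures are invented for illustration, and a real predictive AI tool would use far larger data sets and far more sophisticated models than this:

```python
# A minimal sketch of "prediction from past data" (all figures hypothetical).
# Past average auction prices (USD/kg) for twelve consecutive months.
prices = [4.10, 4.15, 4.08, 4.20, 4.25, 4.22, 4.30, 4.35, 4.33, 4.40, 4.46, 4.50]
months = list(range(len(prices)))

# Ordinary least-squares fit of a straight line: price = a + b * month.
n = len(prices)
mean_m = sum(months) / n
mean_p = sum(prices) / n
b = sum((m - mean_m) * (p - mean_p) for m, p in zip(months, prices)) / \
    sum((m - mean_m) ** 2 for m in months)
a = mean_p - b * mean_m

# "Prediction" for month 12, i.e. the month after the observed data.
next_price = a + b * 12
print(f"Predicted next-month price: {next_price:.2f} USD/kg")
```

The point of the sketch is only to make the mechanism concrete: the ‘model’ (here, two numbers, a and b) is entirely determined by the past data it was fitted to, which is exactly why the quality and coverage of the training data matter so much.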
Potential for AI applications in Sri Lanka
As identified by the President, there is potential for the application of AI to improve the efficiency of many activities in a range of sectors in Sri Lanka. Decision-making has been a particularly weak link in the administrative structure of Sri Lanka at all levels, from the President, the Cabinet and Ministers down to the lowest levels of governance in almost all institutions across all sectors. Key decisions on policy and action are often taken without proper consideration and analysis of relevant facts and figures, with personal bias entering decision-making most of the time. Even when so-called experts are employed as advisors, their capacity to analyse all relevant information and provide unbiased advice and guidance in decision-making has been questionable at best, and woefully inadequate at worst. The decision to convert Sri Lankan agriculture to 100% organic overnight is a clear recent case in point, which illustrates the inherent weaknesses in the decision-making process at the highest level of governance in Sri Lanka. Apart from its capacity to process a large amount of varied information, a perceived advantage of AI is its impartiality, and hence the avoidance of the personal bias inherent in human decision-making. In a future ideal Sri Lanka where AI tools abound in all important sectors, perhaps the people in key governance positions (if they ever become sufficiently mature and S & T savvy) could rely on AI to provide sound, evidence-based, unbiased advice during decision-making on key policies and actions.
Similarly, one can ask whether AI can provide solutions to some of the critical issues and improve efficiency in key areas related to economic development. Collection of taxes and government revenue; identification of effective measures of poverty alleviation; land use planning; agriculture and natural resource management; medical supplies and health care; policy and planning on education reforms and management of educational resources; innovation in developing globally competitive products, goods and services; and research in all key sectors related to national development are just some of the areas (by no means an exhaustive list) where efficiency currently appears limited when handled by humans, and where appropriate AI tools and technologies could make a significant positive impact on the national economy. Furthermore, AI tools should ideally be able to make more accurate predictions than are currently available for short-term weather, long-term future climate and the occurrence of extreme climatic events such as floods, landslides, droughts and heatwaves. National issues of equal significance, such as predicting outbreaks of climate-related diseases such as dengue, could also benefit from the greater predictive power offered by AI tools.
Potential pitfalls and inherent limitations of AI
International research literature abounds with recent advances in the development and application of AI in a wide range of disciplines and activities, almost all demonstrating greater competence and efficiency than existing technologies and practices. However, a smaller number of papers focus on the inherent limitations of AI and the potential risks of its increased adoption. A few of the key issues are outlined below.
Fundamentally, an AI tool is dependent on its algorithm and the set of source data on which the algorithm is ‘trained’. The absence of adequate amounts of sufficiently comprehensive source data is likely to be a major drawback when developing AI tools to improve the efficiency of any given sector in Sri Lanka. Here, the natural tendency and the path of least resistance, especially for Sri Lankan officials and experts, would be to use AI tools developed in, and trained on source data from, other countries. While it could be argued that such AI tools are ‘trained’ on source data that are sufficiently extensive, there will always be the question of whether the source data adequately capture the whole gamut of conditions that may be specific, and in some cases unique, to Sri Lanka. Consequently, an AI tool trained on inadequate or poorly representative source data, when used without adequate knowledge and understanding of the underlying mechanisms and processes on which the tool is built, could provide solutions that are not the best (or optimum) despite conveying the illusory promise of being so. As a solution to the inadequacy of source data on which to train AI tools, AI itself can expand its source database by identifying the underlying patterns and distribution of the existing data and subsequently generating new data. This is part of ‘Generative AI’, which has developed to such an extent that AI can generate ‘respondents’ for (socioeconomic) surveys who would answer questionnaires in the same way that human respondents would. Nevertheless, the fundamental limitation of inadequate source data is likely to remain in many key sectors in Sri Lanka, because successive Sri Lankan governments have never invested enough in gathering the sufficient and comprehensive information and quantitative data on which to base their policy formulation and decision-making.
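The idea of generatively expanding a limited data set can be sketched in its crudest possible form: estimate the distribution of the existing data and sample new synthetic values from it. Real generative AI models learn far richer patterns than a single fitted distribution; the survey figures below are invented purely for illustration:

```python
import random
import statistics

# A small hypothetical sample: monthly household incomes (thousands of
# rupees) from a survey with too few respondents.
observed = [52.0, 61.5, 48.2, 70.3, 55.1, 66.8, 58.9, 49.5]

# Crudely estimate the underlying distribution as a normal distribution
# described by the sample mean and standard deviation.
mu = statistics.mean(observed)
sigma = statistics.stdev(observed)

# Generate 100 synthetic "respondents" by sampling from that distribution.
random.seed(42)  # fixed seed so the sketch is reproducible
synthetic = [random.gauss(mu, sigma) for _ in range(100)]

print(f"Observed mean: {mu:.1f}, synthetic mean: {statistics.mean(synthetic):.1f}")
```

Note what the sketch also exposes: the synthetic respondents can only ever echo the patterns already present in the original eight observations, which is why generated data cannot truly cure the problem of an inadequate or unrepresentative source data set.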
A key advantage of the use of AI in decision-making is its perceived absence of personal bias. However, it has been observed that this perceived absence of bias does not always hold when AI is applied. When developing AI algorithms and training them on source data, the developer makes a number of decisions and choices which inevitably introduce personal bias into the AI tool. When such tools are used by end-users who are not familiar with the process through which the model was developed (which is highly likely to be the case in Sri Lanka), the bias inherent in the model leads to outcomes and decisions which favour some views, groups and outcomes while marginalising alternative, sometimes more valid and inclusive, views and outcomes.
The greater computational power of an AI model trained on ‘big data’, and its capacity to provide an output more comprehensive than a human-generated one, could create the illusion that AI offers a solution with a superior understanding of the whole scope of the problem. However, the decisions and choices made during algorithm development impose a limit on the scope of understanding of the AI tool and of the solutions it provides.
Generative AI tools using Large Language Models (LLMs), such as GPTs, have already become common among Sri Lankan university students, who use them for writing tasks ranging from an email to a report submitted for evaluation. This has created a dilemma in academia over how to evaluate the true competence and learning outcomes of a student. The capacity of students to synthesise by integrating information from different sources, a key competence that we as academics try to inculcate in our students, is taken away when they take the easy route of using a generative AI tool such as ChatGPT. In an ongoing curriculum revision in my faculty, there are colleagues who argue that subject content that can be learnt via generative chatbots such as ChatGPT need not be included in the curriculum. This is a clear example of the illusion of complete understanding created by AI tools, which engenders complete trust in and reliance on them. LLMs are trained on ever-larger sets of words and expressions and are increasing their capacity to capture human capabilities. However, even though the creators of AI tools may argue to the contrary, it is doubtful whether generative AI tools, however advanced, could replicate the creativity of the human mind. On the other hand, students hooked on generative AI tools could produce a future generation, and a nation, with diminished creativity, which would be counterproductive to the very objective of introducing AI to bring about national development and prosperity. There is evidence that students in Sri Lankan universities, both state and non-state, are already hooked on these generative AI tools for producing their take-home assignments and reports. It can be argued that such AI tools ‘level the playing field’ for those students who are disadvantaged, when they enter a higher education institution, by lower competence in the English language. However, an equally valid counter-argument is that the availability of AI tools is likely to hinder the development of their English-language skills.
On the global stage, risks posed by some of the latest developments in AI have been recognized and articulated. For example, the threat to biosecurity posed by AI-designed proteins and drugs, which could be used to create more potent disease agents, has been recognized. There is also the possibility of algorithms developed initially for a legitimate purpose being adapted (‘repurposed’) for alternative, not-so-legitimate purposes. Newly developed text-to-video AI tools can create fake videos, which can be put to many harmful uses; for example, fake videos of key public figures could shift public opinion during crucial events such as elections. A recent research study has shown that LLM-based chatbots exhibit clear inherent racial bias, because of the way the algorithm has been trained to recognize words, phrases and dialects used by specific ethnic or demographic groups and to link them to a range of characteristics of those groups, as perceived by the developers of the AI tool.
An important social issue that arises when AI gains recognition and trust as a superior partner in generating solutions is the creation of a favoured group of professionals and scientists, especially when it comes to the allocation of limited state resources such as funding for Research and Development (R & D). The creation of such favoured ‘monocultures’ of professionals was evident in Sri Lanka during periods when specific disciplines were earmarked by those who were in power and had the authority to decide who gets resources on a priority basis. Clear cases in point were the scientists engaged in nanotechnology, and to a lesser extent biotechnology, in the 2000s, and the so-called experts in organic agriculture in the recent past. The creation of such favoured monocultures has adverse long-term consequences for national development, as it leads, inevitably, to the marginalisation and detriment of R & D in other disciplines and the demotivation of their practitioners. Looking at what happened in the past, there is a clear and present danger of this history repeating itself in the next few years as AI comes to be viewed as the ticket to economic development and prosperity. The multi-faceted and holistic nature of the development of any nation, irrespective of its present economic status, requires a reasonably adequate allocation of its limited resources across all disciplines of S & T, even when a greater proportion of those resources is allocated to a few favoured disciplines perceived as having greater potential to contribute to national development. (To be continued)
Additional Reading
1. Why scientists trust AI too much – and what to do about it (Editorial). Nature, 627: 243. 14 March 2024.
2. Alvarado, R. (2023). What kind of trust does AI deserve, if any? AI and Ethics, 3(4): 1169–1183. https://doi.org/10.1007/s43681-022-00224-x.
3. Carroll, J. M. (2022). Why should humans trust AI? Interactions, 29(4): 73–77. https://doi.org/10.1145/3538392.
4. Krenn, M. et al. (2022). On scientific understanding with artificial intelligence. Nature Reviews Physics, 4(12): 761-769. https://doi.org/10.1038/s42254-022-00518-3.
5. Messeri, L. & Crockett, M.J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627: 49-58. https://doi.org/10.1038/s41586-024-07146-0.
6. von Eschenbach, W.J. (2021). Transparency and the Black Box problem: Why we do not trust AI. Philosophy & Technology, 34: 1607–1622. https://doi.org/10.1007/s13347-021-00477-0.
7. Wang, H. et al. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620: 47-60. https://doi.org/10.1038/s41586-023-06221-2.
The writer is a Fellow of the National Academy of Sciences of Sri Lanka and has been an academic and a research scientist in Agriculture and Natural Sciences for over three decades while being based in Sri Lanka.