Features
Restaging Z-Score Debacle
The release of the Z scores for university admission by the University Grants Commission (UGC) has brought thousands of complaints and issues from students, parents, teachers and other stakeholders into public discussion. Some have vowed to take the issue to the courts, expecting a fair and just solution.
Most of the complaints relate to the two different cut-off Z scores issued for the new and old syllabi and the comparisons being drawn between them. It is understood that any solution proposed to resolve such a crisis must not only be fair and legal but also be seen to be reasonable by the stakeholders. Over the last couple of weeks the print media have published several opinions on this issue, and the use of the Z score for university admission is often blamed as the reason for these discrepancies. It is time we had a closer look at the problem and at the baseless allegations against the use of the Z score for university admission. The use of the Z score in place of the aggregated marks of the three subjects is, by all means, a better method; this has been amply demonstrated with data and examples and subjected to discussion since its introduction for university admission in 2003.
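For readers unfamiliar with the method, the Z score simply expresses each candidate's mark relative to the mean and standard deviation of all candidates who sat that particular subject, so that papers marked to harsher or more generous scales become comparable before they are averaged. A minimal sketch of that standardisation, using invented marks purely for illustration, is shown below.

```python
import statistics

# Invented raw marks for one subject; the Z score expresses each mark
# relative to the cohort that sat that particular paper, rather than
# relying on the raw aggregate of three subjects.
marks = [45, 52, 61, 38, 70, 55, 49, 66]

mean = statistics.mean(marks)
stdev = statistics.stdev(marks)

# Standardise: z = (mark - mean) / standard deviation.
z_scores = [(x - mean) / stdev for x in marks]

# A candidate's ranking score is then the average of the Z scores of
# the three subjects, not the sum of the raw marks.
print([round(z, 2) for z in z_scores])
```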
Today, the issue is somewhat different, and it is not possible to assume that the students who sat two different papers in the same subject have a more or less equal level of competence, or that they were randomly drawn from the same larger population. The first group sat the examination for the first time, while the other group has attempted to pass it on one or more previous occasions. This is evident from the numbers of first-, second- and third-attempt students in the different disciplines. In exploring a solution to the present problem, the underlying assumptions made in calculating the Z-score rankings need to be re-examined. One of those assumptions is that the knowledge and skills of the student populations opting for different subjects or subject streams are not significantly different. This assumption must now be validated, because almost all the students who sat the examination under the old syllabus, except those with valid medical or other reasons, had already made an unsuccessful attempt to enter university.
Those who qualify under the new syllabus are students who took the examination for the first time. Assuming equal knowledge, a similar level of competency and the same skills background for these two diverse groups introduces an error into a fair selection process, and it is not possible to issue a single series of Z scores for the two groups. Before processing marks, the hypothesis that there is no statistically significant difference between the two student groups needs to be validated with past data covering at least five years. If this hypothesis cannot be supported, the degree of dissimilarity needs to be assessed in order to determine a fair ratio for university admission from the two groups and to design a quota for each. This is what the UGC attempted to employ in the selection process; however, the large differences in the cut-off Z scores caused by the quotas applied to the two groups have led to serious doubts in the minds of those without an in-depth knowledge of the method.
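One way such a hypothesis could be examined, shown here purely as an illustration and assuming past mark distributions for the two groups were available, is a standard two-sample comparison of means; the figures below are invented for the sketch.

```python
from scipy import stats

# Invented illustrative marks: first-attempt (new syllabus) versus
# repeat-attempt (old syllabus) candidates in the same subject.
new_syllabus = [62, 55, 71, 48, 66, 59, 73, 50, 64, 58]
old_syllabus = [54, 47, 60, 42, 51, 57, 45, 63, 49, 53]

# Welch's t-test: does the evidence support treating the two groups as
# drawn from populations with the same mean level of performance?
t_stat, p_value = stats.ttest_ind(new_syllabus, old_syllabus, equal_var=False)

if p_value < 0.05:
    print("Significant difference: a single Z-score series is questionable")
else:
    print("No significant difference detected: combining may be defensible")
```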
The second issue is whether the relative difficulty of the two question papers set under the old and new syllabi is significantly different. Expert opinion is the only basis we have for this decision. If the experts believe that the two sets of question papers are of more or less the same standard, then the two sets of results could have been combined into a single data series with only one set of Z scores and cut-off marks. Unfortunately, there is no evidence to show that this option was explored by the UGC. If the question papers are of significantly different standards, then it is not possible to combine the two result series. In that case the Department of Examinations must be asked why the standards were allowed to differ, and a strong justification for such a decision must be given. We always advise our students to settle on a valid statistical methodology before they collect their research data, to ensure that the data are compatible with the statistical techniques to be employed in the analysis. Unfortunately, the Department of Examinations did not follow this advice and sought a statistically valid solution only after the examination data had been collected.
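If the two papers were indeed judged to be of equal standard, pooling the two mark series and standardising them once is straightforward; the following sketch, with invented marks and no claim about the actual papers, shows the idea.

```python
import statistics

# Invented marks from the two papers, assumed here to be of equal standard.
old_paper_marks = [48, 57, 63, 41, 55]
new_paper_marks = [52, 60, 44, 58, 66]

# Pool the two groups and standardise once, giving a single Z-score
# series and therefore a single cut-off for all candidates.
combined = old_paper_marks + new_paper_marks
mean = statistics.mean(combined)
stdev = statistics.stdev(combined)

combined_z = [(x - mean) / stdev for x in combined]
print([round(z, 2) for z in combined_z])
```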
Without understanding the pertinent facts, some argue that the mistakes made in the 2011/12 university admission have been repeated in releasing the Z scores for the 2019/20 admission. A similar problem did indeed arise in the 2011/12 admission, owing to the release of Z scores that combined the results of students who sat the GCE Advanced Level under the old and new syllabi. However, in the 2011/12 admission the major issue was the errors made by the Department of Examinations in calculating the Z scores. That mistake was further compounded when the UGC released a single Z-score series treating the two distinctly different student populations of the old and new syllabi as one group. A presidential expert committee was appointed to look into the problem, and the calculation error was quickly identified and corrected. In addition, it was proposed that the two student populations of the old and new syllabi be considered separately for university admission, and this was later upheld by a Supreme Court judgment in a case filed by a group of aggrieved students.
There are a few solutions that could have been adopted to avoid the complexity of the problem and the misunderstanding among students, parents and teachers:
a) The easiest approach would have been to design one question paper for both groups of students, with options to select questions from the areas that had been revised or amended in the new curriculum. It is observed that no major deviations, only minor changes, have been made in Physics and Chemistry. The relative difficulty level would need to be maintained across all optional questions. The result would then be a single question paper and a single Z-score series generated from the results. This should have been arranged at the point of setting the question papers, with clear instructions to the examiners.
b) If the two question papers, although structurally different, are of the same level of difficulty, the results of the two examinations could have been combined and the Z scores computed as a single series. It could have been assumed that, although the student populations were different, tests of the same standard had been administered to both groups and that the results could therefore be treated as one series. This solution could have been explained to the Supreme Court and concurrence on the approach sought.
c) The most reasonable solution to the problem is to determine, separately for old- and new-syllabus students, the ratios of students admitted to universities for each subject stream and each degree programme, and then, for each case, to use the five-year maximum proportion to admit students. The total would then exceed one hundred per cent, since the sum of the maxima of the two ratios exceeds 100, and a small additional proportion of students would need to be admitted to each degree programme to avoid any obvious injustice. The median value can also be used, unless there is a positive or negative trend in the ratios: if the five-year values are stationary and show low variability between years, this becomes a fair solution, whereas if they show high variability and an obvious trend, the most recent value should be the choice, as sketched below. This median-value method was adopted this year, and because the quota differs significantly between disciplines and degree programmes, the cut-off Z scores show many differences between new- and old-syllabus students. The cut-off Z scores are also significantly higher than the figures of previous years because of the application of the quota, which is sometimes only 25% of the original 100. This is the reason for the complaints, although the method used for selection is in keeping with the ruling of the Supreme Court in 2012.
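As a rough illustration of option c), and assuming hypothetical five-year admission proportions for a single degree programme, the quota for each group could be derived along the following lines; the programme name, intake figure and proportions are invented for the sketch.

```python
import statistics

# Hypothetical proportion of old-syllabus (repeat) students admitted to
# one degree programme in each of the past five years.
old_syllabus_share = [0.18, 0.21, 0.19, 0.22, 0.20]

# With stationary, low-variability values the median is a fair choice;
# with a clear trend the most recent year would be preferred instead.
quota_old = statistics.median(old_syllabus_share)
quota_new = 1.0 - quota_old

places = 200  # hypothetical intake for the programme
print(f"Old-syllabus places: {round(places * quota_old)}")
print(f"New-syllabus places: {round(places * quota_new)}")
```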
However, this problem could have been avoided if the two series of Z scores had been adjusted according to the quota granted to each series. A simple mathematical computation could have brought the two distributions onto a comparable scale. The adjusted Z scores would be comparable with the figures of previous years and would not produce cut-off Z scores that differ greatly between the two groups of students. The confusion created by releasing Z scores that suggest different levels of access to university entrance for the two groups could have been avoided with such an approach.
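The article does not spell out the form of that adjustment. One plausible computation, offered purely as an assumption and not as the method intended by the writer or the UGC, is to express each candidate's position as a within-group percentile rank, so that the two series can be read on a common scale and a cut-off applied under different quotas no longer looks like a different standard being applied to the two groups.

```python
def percentile_rank(scores):
    """Within-group percentile rank (0-100) for each score, in input order."""
    ordered = sorted(scores)
    n = len(scores)
    return [100.0 * (ordered.index(s) + 1) / n for s in scores]

# Invented illustrative Z scores for the two groups.
new_syllabus_z = [1.9, 1.4, 2.1, 1.1, 1.7]
old_syllabus_z = [1.6, 1.0, 1.3, 0.8, 1.5]

# On the common percentile scale the two series are directly comparable,
# whatever quota is later applied to each group.
print(percentile_rank(new_syllabus_z))
print(percentile_rank(old_syllabus_z))
```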
The writer is a former Vice-Chancellor of the Uva Wellassa University, Chairman of the UGC University Admission Committee in 2013/14 and a member of the Presidential Committee on the Z score in 2012.