Updated on: Monday, March 08, 2010
The exam was marred by glitches, and there has been little transparency on the marking procedure
It has been a week since the results of the Common Admission Test, the qualifying examination for admissions to the Indian Institutes of Management, were declared. However, online forums and websites are still abuzz with discussions on CAT scores, the even-handedness (or lack of it) of the testing process and the methodologies used by the IIMs in shortlisting candidates.
To begin with, the testing process was marred by technical glitches. However, even after nearly 10,000 of the 2.47 lakh applicants were given a second chance in January, ambiguity prevails. Even as the IIM shortlists were published over the week, several students raised questions about the validity of the exam through the media, and many have expressed a desire to file requests under the Right to Information Act, or even seek legal recourse.
Meanwhile, Prometric, the American firm that conducted the exam on behalf of the IIMs, has been non-committal on the issue. Media queries too have only resulted in vague statements, and a general announcement on the CAT website. According to an announcement on the CAT official website, Prometric maintains that the test development process was conducted in alignment with the Standards for Educational and Psychological Testing.
The content, developed by experts in the domain, retained the three sections that have traditionally been part of the IIMs' testing pattern — Verbal Ability, Quantitative Ability and Data Interpretation/Logic. Students too, after taking the CAT, maintained that there were no changes in the questions, difficulty level or pattern of testing.
Shedding little light on the highly ambiguous post-test administration process, Prometric states that “credentialed psychometricians” have analysed the process to confirm the validity of the examination scores. On scoring, it reiterated that it has an industry-standard, psychometrically-sound approach to the scoring process for all IIM candidates.
Three-step process
The marking process has been described as a three-step process. First, raw scores are calculated based on the number of questions answered correctly, answered incorrectly or omitted (+3 for each correct answer, -1 for each incorrect one; omitted questions score zero). Previous CAT exams have also had negative marking. Second, the raw score is “equated” and “scaled.”
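The first, raw-scoring step is straightforward arithmetic, and can be sketched as below. The function name and the sample numbers are illustrative only; they are not part of Prometric's published procedure.

```python
def raw_score(correct: int, incorrect: int, omitted: int = 0) -> int:
    """Raw CAT score: +3 per correct answer, -1 per incorrect, 0 for omitted."""
    return 3 * correct - incorrect

# A hypothetical candidate with 40 correct, 15 incorrect and 5 omitted answers:
print(raw_score(40, 15, 5))  # 3*40 - 15 = 105
```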
Herein lies the ambiguity, and the brief note has not been able to clarify how exactly this equating is done. Till last year, the CAT marking was simple: a candidate got a score based on the number of correct and wrong answers, and that score was then normalised. The final percentile was therefore a simple comparative measure of one's performance against that of all candidates who appeared. This time, the exam being conducted first over a window of 10 days in November-December, and then over two days in late January, complicates this equation. Students testified to having received different sets of questions on each day. Ankur Chattopadhyay, a student from Bangalore, feels that those who took the exam during the weekends may have been at a disadvantage compared to those who wrote during the weekdays, when fewer people had opted for testing. “We assumed that there will be a scientific process that ensures parity. Even if we ignore the fact that the second testing window (in January) got a good one-and-a-half month's extra time, that they have not been upfront about how they calculated our percentiles leaves room for a lot of speculation.”
According to the Prometric statement, an “equated score” is arrived at through a statistical process that adjusts scores on two or more alternate forms of an assessment so that the scores may be used interchangeably. This score is then scaled to a common metric using a linear transformation, also in accordance with industry-standard practice. An overall scaled score and three separate scaled scores for each section are used. Percentile rankings are provided for each individual section as well as for the overall exam score.
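The scaling and percentile steps described above can be sketched roughly as follows. This is a simplified illustration only: the linear-transformation parameters (slope and intercept) and the sample cohort are invented for the example, since Prometric has not disclosed the actual equating constants it derives for each test form.

```python
from bisect import bisect_left

def scale(raw: int, slope: float = 1.5, intercept: float = 50.0) -> float:
    """Linear transformation of a raw score onto a common scale.

    The slope and intercept here are placeholders; in practice they would
    be derived per test form by the equating process."""
    return slope * raw + intercept

def percentile(score: float, all_scores: list[float]) -> float:
    """Percentage of candidates scoring strictly below the given score."""
    ranked = sorted(all_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

# An invented cohort of ten candidates' raw scores, put on the common scale:
cohort = [scale(r) for r in [10, 25, 40, 55, 70, 85, 100, 115, 130, 145]]
print(round(percentile(scale(100), cohort), 1))  # 60.0
```

Because the transformation is linear (and so order-preserving), a candidate's rank within one cohort is unchanged by scaling; the contested question is how scores from *different* test forms are equated before this step.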
So what does this mean for the future of online testing? Does the use of ambiguous jargon like “psychometric analysis” and “equated score” mean that online testing requires tolerating such vagaries? Ajay Arora of TIME, a leading coaching centre chain, says these things should have been sorted out by exhaustive testing. For instance, what will the IIMs do when a student says, “I answered 15 questions, but my score accounts for only six”? “Even applying for RTI cannot help such a situation. What we do in our internal tests is provide students with a summary at the end of the test. We keep a back-end recording of how students mark their answers. If students have a problem with the summary they receive, invigilators can make a manual record of this back-end summary. This provides transparency and reassures students that they are in safe hands.”
The CAT fiasco, and the continuing discontent among candidates, underscores the need for more accountability. A new level of transparency and, more importantly, contingency planning are required to make online testing credible.
The IIMs have two options: revert to the paper-and-pencil format, which was more credible but is certainly a step backward in this digital era, or ensure that the necessary checks and balances are in place.