This week, I had an interesting discussion with a few colleagues about the effectiveness of interviews as opposed to marks in an objective-type test. The context was MTech admission. The CSE department at IIT Kanpur admits a few top GATE rankers directly to the MTech program, and invites several more for another test/interview. It was felt that the students we admit through the interview process generally perform better in the program than those admitted directly on the basis of GATE score, even though the former have lower GATE scores.
My explanation for this apparent anomaly is this. In an interview, we value approximate answers. In GATE, we strongly discourage approximate answers. Let me give a hypothetical example. Suppose the question is: what is the minimum size of the Internet Protocol header? The right answer is 20 bytes. If this question is ever asked in GATE, the likely choices are 16, 20, 24, and 28. GATE wants to make sure that guesswork is strongly punished, and hence all the options are close enough to the right answer. Either you know the answer, or you don't. But if the same question is asked in an interview, and the candidate does not know the exact answer and says that it is one of 16, 20, and 24, we will give him a chance to explain. Suppose he tells us that the header size is a multiple of 32 bits, so that the header can be processed fast by CPUs with a 32-bit architecture, and therefore the header size is a multiple of 4 bytes. Then he explains that an IP address is 32 bits, and the header has both source and destination IP addresses, hence 8 bytes of addresses. There are other fields that he does not remember, but certainly all the remaining fields cannot fit in 4 bytes, so the minimum size cannot be 12 bytes. If someone knows so much about IP, we would say that we don't care whether he knows the exact number or not. So, in our interview process, we are trying to find students who know a lot and can reason about it, but don't remember enough specific details to be a topper in GATE.
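For the curious, the candidate's back-of-the-envelope reasoning can be checked against the actual mandatory IPv4 header fields (as defined in RFC 791). This is just an illustrative sketch, not part of the interview scenario above:

```python
# Mandatory IPv4 header fields and their widths in bits (per RFC 791).
fields_bits = {
    "version": 4, "ihl": 4, "type_of_service": 8,
    "total_length": 16, "identification": 16,
    "flags_and_fragment_offset": 16, "ttl": 8,
    "protocol": 8, "header_checksum": 16,
    "source_address": 32, "destination_address": 32,
}

total_bits = sum(fields_bits.values())
print(total_bits // 8)       # 20 -- minimum header size in bytes
print(total_bits % 32 == 0)  # True -- a whole number of 32-bit words
```

Note that the 8 bytes of addresses plus the remaining fields (12 more bytes) give exactly the 20-byte minimum the candidate was circling around.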
So, if we value guesswork in an interview, what can be done in an objective test to allow similar guesswork? Going back to the same question about the IP header size, if I were to set this question in GATE, the choices I would give are: 12, 20, 30, and 40. Now one can argue that 30 is not a multiple of 4, that 12 is too small, and that 40 is too large, and hence "guess" the answer to be 20. If someone can reasonably eliminate the other options, it is a sign that the student knows something about the topic. But we frown upon guesswork in objective tests.
If you look at JEE, it is argued that the only thing coaching classes do is train students in how to guess or how to eliminate options. That is why the paper setters keep coming up with questions and potential answers where, without solving the question fully, it is very difficult to mark the right answer. The JEE papers are now so difficult that it is not possible for a single Mathematics faculty member of an IIT to completely solve the paper in the stipulated time. But should we get into this race at all? My own view is that the ability to guess is often based on a decent understanding of the subject, and therefore it is alright to prepare an objective-type test which allows guesswork.
An extension of guesswork is approximate answers. Subjective tests were great because one could assign partial credit to someone who had partial knowledge, who could do some steps but not solve the problem fully. Can we incorporate that property in an objective-type test? Prof. Rajeev Kumar of IIT Kharagpur sent me a writeup explaining how this could be done. I am paraphrasing what I could understand from his scheme. The method is actually quite simple. When we have 4 choices in an objective test, we should select these choices in a way that one is absolutely correct, another is wrong but plausible, and the remaining two are completely wrong. Now, instead of assigning +1 for the right answer and -0.33 for the wrong answers, we should assign +1 for the right answer, +0.5 for the plausible answer, and 0 for the remaining two answers. There is no need to penalize guesswork through negative marks.
This will encourage people to intelligently remove options, think of ways to quickly get approximate answers, and even do the guesswork amongst the remaining options.
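To see why, here is a small sketch (my own illustration, not from Prof. Kumar's writeup) comparing the expected marks for a student who has correctly eliminated the two clearly-wrong options and now guesses between the correct choice and the plausible one:

```python
def expected_score(probabilities, marks):
    """Expected marks, given the probability of picking each option."""
    return sum(p * m for p, m in zip(probabilities, marks))

# Option order: correct, plausible, wrong, wrong.
# The student guesses 50-50 between the two surviving options.
guess_between_two = [0.5, 0.5, 0.0, 0.0]

traditional = [1.0, -0.33, -0.33, -0.33]  # current negative-marking scheme
partial     = [1.0, 0.5, 0.0, 0.0]        # proposed partial-credit scheme

print(expected_score(guess_between_two, traditional))  # 0.335
print(expected_score(guess_between_two, partial))      # 0.75
```

Under the proposed scheme, the informed elimination is worth substantially more, so the student is rewarded for partial knowledge rather than gambling against a penalty.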
It is rather interesting that most faculty members I talk to admit that interviews are great because we encourage approximate answers and guesswork, and that subjective tests are great because the student gets to write whatever he feels like and the examiner may give partial credit if the student is partially correct (and of course, there is no negative marking in a subjective test). But when it comes to objective tests, all our energies are spent in finding ways to stop guesswork and partial/approximate answers.