My previous post on the ranking framework has disappointed several of my friends whose views I take seriously. Their common argument is that this framework is far superior to anything we have, and yet I have chosen to criticize it, which is not quite right on my part. Fair enough. I do accept that the ranking framework is far superior to what various business houses have been doing for many years now.
However, we started down this path by saying that QS/THE do not understand our universities, and that we need to somehow showcase the quality of our top institutions, which we believe is significantly better than the ranks they are being given in those rankings. I have never understood how an India-specific ranking will showcase that quality to the rest of the world, or how we can claim that we deserve to be in the top 100 by having an India ranking. Yes, I have seen the argument: a good Indian ranking will spur the competition to get better. We will no longer be able to get a higher rank by giving false data, bribing the reporter, or buying ads in the private ranking. And this improvement will help us get into the top 100. But notice that this argument actually admits that we need to be in the top 100 based on whatever QS and THE decide as ranking parameters, and if that is the goal, then there are better ways to spur that competition than a government ranking.
Rankings are useful information for stakeholders, and I have written many blog articles in the past recommending that our institutions take rankings seriously. However, I am scared of a situation where a ranking is the only information that a stakeholder uses for making important decisions. There are serious limitations to rankings (as to most measurements of quality), and it is difficult to assume that the common man would understand those limitations. So far, the common man has been taking multiple inputs, not because he understood the limitations of rankings, but because he had an inherent mistrust of a ranking by a private business house. But if we now have a government ranking, and granted that it will give inherently better information than the private-sector rankings, the common man is likely to use it as the only or primary information, since he does not understand the limitations of the ranking. And while the current decision making has serious flaws, the new decision making can have even bigger flaws.
The framework talks about rankings being available before April 2016 so that students and parents can use them to make better admission decisions. So the most important stakeholders for these rankings are prospective undergraduate students. Let us ask what the ideal decision for such a prospective student would be, and see how close the ranking's advice comes to that ideal.
Let us assume that the student is interested in getting an education that will provide for a good career and a resultant happy life. The admission decision is then: among all the options for higher studies available to me, which one should I choose as the most likely to give me a good career and a happy life?
First of all, asking that question at the end of 12th class is quite ridiculous. If the same question had been asked (or rather, had been allowed to be asked) a few years earlier, the decision might have been completely different. Note that a ranking can only tell you whose graduates are having a good career (even that, as we will see below, is not what the ranking says, but let us not get ahead of ourselves). It cannot say what would be good for an individual student. Success in a career does not depend solely on the alma mater. It also depends on, to give just one example, whether you have an interest and passion for the kind of job you are working in. By suggesting that X is the number one engineering college, you are really suggesting that it is a better college for all disciplines, and that all students, irrespective of their passions, should prefer it. This is clearly nonsensical. No institution can be the best at everything. Of course, the framework says that it will be used to rank not just colleges but also individual disciplines.
But even discipline-based ranking does not help beyond a point. A student who is deeply passionate about research and a student who is deeply passionate about entrepreneurship in largely the same discipline should probably go to different institutes. A student who is studying CS as a tool to be applied eventually to another discipline would probably need a different kind of program from someone who is studying CS to get a technical job in the CS area, who in turn would probably need a different kind of program from someone who is studying CS only because it hones one's skills of abstraction, analysis, etc., which he wants to apply in management, finance, and other "non-core" areas. Some locations may be more conducive for some students and less for others, based on factors such as the language of discourse in the hostels. Discipline-based ranking will not help here. One has to look into the programs more deeply. What courses are on offer? What flexibility does the program offer? What kind of culture and environment is there?
We already have a problem at hand. A very large number of students and parents look only at "placement statistics" to decide, and then suffer. But there are many who still look at other options and ask questions. That number will dwindle further if there is a government ranking out there.
Many readers will argue that most of what I have written above is not relevant, because a 12th class student is not going to do research on colleges, does not know his passions, does not know whether he would want to become a manager or a scientist, and is only looking for information on where a typical student would be more likely to succeed. So individual differences are not important. And those few who know their passions well are also aware of the limitations of the rankings and will do their own research.
Fair enough. But does the ranking even give that statistical information? First of all, we would need to define "success in career and happiness in life" to say which institute really causes more of its graduates to get there. The problem is not just in the definition of success, but in how we get the data, and whether the data is relevant for this batch. Assuming ideal information of all kinds, one could possibly look at alumni who graduated 15-30 years ago and have some way to figure out what percentage of them are successful. (Can we really have a binary decision here?) Assume you can have this information. But is that information relevant today? A college may have been doing some magic 20 years ago which caused great success for its alumni in their careers, but may have gone downhill since then. There may be new institutes which turn out to be better in this regard 20 years from now.
And, therefore, we do not look at how successful the alumni have been, but assume that certain parameters either correlate very highly with that success or cause that success, in both cases without any scientific study. So prospective students and parents assume that last year's placement statistics are the best indicator of future career success for a student who is joining this year and will graduate after 4 years. And a few lone voices like mine claim that quality of education causes that success, and that, therefore, a prospective student should look not at the placement data but research the things at various colleges that affect the quality of education.
And, of course, I would claim in support of my view that even if the placements of 2014 were a good predictor of success in 2050, most colleges would give out wrong information, and most students and parents would look at the wrong data (like the top and average placement rather than the percentage of students placed and the median placement), and these two wrongs combined would ensure that you are really buying a lottery ticket.
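To make the point concrete, here is a minimal sketch (with made-up salary figures for a hypothetical batch of 10 students) of how the headline "top" and "average" numbers can look rosy while the median outcome and the fraction of students actually placed tell a very different story:

```python
# Hypothetical placement data for a batch of 10 students (figures in
# lakhs per annum, purely illustrative). Unplaced students are simply
# missing from the offers list -- just as they are from glossy brochures.
offers = [60, 12, 8, 6, 5, 4]   # only 6 of the 10 students got offers
batch_size = 10

top_offer = max(offers)
average_offer = sum(offers) / len(offers)  # averaged over placed students only

placed_fraction = len(offers) / batch_size
# Median over the whole batch, counting unplaced students as 0:
all_outcomes = sorted(offers + [0] * (batch_size - len(offers)))
mid = batch_size // 2
median_outcome = (all_outcomes[mid - 1] + all_outcomes[mid]) / 2

print(f"Top offer:      {top_offer} LPA")          # 60 -- the headline number
print(f"Average offer:  {average_offer:.1f} LPA")  # ~15.8, inflated by one outlier
print(f"Placed:         {placed_fraction:.0%}")    # 60% -- rarely advertised
print(f"Median outcome: {median_outcome} LPA")     # 4.5 -- the typical student
```

One outlier offer and a conveniently omitted denominator are enough to make the advertised numbers almost meaningless for the typical student.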
Since the government ranking is being created by professors, you would of course see a bias, and their views are closer to mine than to those of students/parents: quality of education is important, but of course we will also give some decent weight to post-program outcomes (including placement, though as professors we would also like to see how many graduates do well in exams like GATE and go on to higher education). So placement per se will be a small factor.
The problem then is how to judge the quality of education. Again, we don't know how to define quality of education, so we assume that certain proxies will somehow be good predictors. So, the faculty-student ratio is taken as a great proxy for quality. The number of faculty members with PhDs is taken as a great proxy for quality. Why is the faculty-student ratio a great proxy? Well, it is likely to result in smaller class sizes, and it is assumed that smaller class sizes result in better delivery of education. Then why not just look at class sizes? Is a system (like at IIT Kanpur) where one faculty member teaches 400+ students while a large number of faculty members teach fewer than 10 students each a better model than a system where everyone teaches 50-60 students? Is a system where the faculty-student ratio is 1:15 but students do 6 courses a semester better than a system where the ratio is 1:16 but students do only 5 courses a semester, or a system where the ratio is 1:17 but students do only 4 courses a semester? A quick calculation below shows why the ratio alone can mislead.
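Here is a back-of-the-envelope sketch of that last comparison. The assumption that every faculty member teaches 2 course sections per semester is mine, purely for illustration; the framework specifies no such number.

```python
# Average class size from the faculty-student ratio and the course load.
# Total enrollments per semester = students x courses_per_student;
# total sections taught = faculty x sections_per_faculty;
# average class size is the quotient of the two.
def avg_class_size(students_per_faculty: float,
                   courses_per_student: int,
                   sections_per_faculty: int = 2) -> float:
    # sections_per_faculty is a hypothetical teaching load, assumed
    # equal across the three systems being compared.
    return students_per_faculty * courses_per_student / sections_per_faculty

# The three systems from the text:
print(avg_class_size(15, 6))  # 45.0 -- "best" ratio, biggest classes
print(avg_class_size(16, 5))  # 40.0
print(avg_class_size(17, 4))  # 34.0 -- "worst" ratio, smallest classes
```

Under this (assumed) equal teaching load, the system with the worst faculty-student ratio actually has the smallest classes, because its students take fewer courses.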
Is it really true that, in the Indian context, PhD faculty teach better than non-PhD faculty on average? Remember, this ranking is supposed to reflect Indian realities. The Indian reality is that the quality of PhDs sucks big time. Only a few top institutes are able to find PhD faculty from good places. Others are hiring PhD faculty who know much less than BTechs from good places.
Is it really true that the citation index is a good predictor of quality of education? Is it even true that a good researcher will statistically be a better teacher? Maybe that happens in IITs, but it does not seem to be happening across the country. And I hope this ranking, though created by IIT professors, is not meant for IITs alone.
How does inclusiveness improve the quality of education? Inclusivity is a great social and national goal, and I must applaud all those who care for it, but not all national goals imply an improvement in the quality of education. My fear is that this is the beginning of the politicization of rankings, even before they start. The same argument can now be extended to cover other national and social goals as well. Are you actively participating in Swachh Bharat Abhiyan? Nobody can deny that improving cleanliness should be applauded.
In general, education experts tell us that having diversity inside a class improves the quality of education, and it is good that the framework looks at diversity. It considers three kinds of diversity: in-state versus out-of-state/international students, gender diversity, and having people from economically and socially disadvantaged classes. But let us look into the details. You get the maximum marks for geographical diversity when you take 100% of your students from outside the state: none from in-state and none from a foreign country. Should diversity mean not having any student from the society which is hosting and nurturing you as an institution? (A toy version of such a scoring rule is sketched below.) If geographical diversity is a great thing (and I believe that it is), would the Government free NITs of the in-state quota? Let them decide how they want to compete in this race, instead of tying their hands behind their backs. If the government does not do this, then it is forcing NITs to get a poor rank. Is this fair to NITs?
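The framework's exact formula is not reproduced here, but any rule whose marks grow monotonically with the out-of-state fraction has this perverse property. A minimal hypothetical sketch:

```python
# A hypothetical linear scoring rule consistent with the behaviour
# described above: marks grow with the out-of-state fraction, so full
# marks are earned only at 100% out-of-state (i.e., zero local) intake.
def geo_diversity_marks(out_of_state_fraction: float,
                        max_marks: float = 10.0) -> float:
    return max_marks * out_of_state_fraction

print(geo_diversity_marks(0.5))  # 5.0  -- a balanced intake gets half marks
print(geo_diversity_marks(1.0))  # 10.0 -- full marks with zero local students
```

An institute that admits nobody from its own state is scored as the most "diverse", which is exactly the oddity pointed out above.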
Also, should diversity be counted only in terms of in-state versus out-of-state? Should there be some credit for the number of different states represented on campus? After all, an institute in Delhi with students from Gurgaon and Noida would meet the diversity requirement, but is that really helping the quality of education? And given the political nature of these factors, I am sure that one day someone will say that the presence of North-East students must explicitly be part of the ranking (which, by the way, would actually improve diversity on most campuses).
In the case of gender diversity, is 50-50 the ideal for improving the quality of education? It is probably a great social goal, but I would guess that from the quality-of-education perspective, a substantial presence of both genders is desirable, though not necessarily 50-50. So maybe some mismatch should be acceptable, say 40-60 or 30-70, with either more men or more women. Again, the question is: if this is the goal of the society and the government, would they allow IITs to do something (anything) to improve this ratio?
And having 50% of students from economically and socially backward backgrounds: again, is it furthering the goal of diversity and quality of education, or a social goal? Note that there is no definition of economic and social backwardness. This is going to be a political hot potato. Can I count only SC/ST/OBC (non-creamy layer), or can I also count Muslims and anyone else whose income is less than 6 lakhs? If you allow all those who are non-creamy layer, irrespective of caste and religion, then every single institute in this country, including some of the expensive private colleges, would have the desired 50% or more people from this category. So why have this parameter at all?
On the other hand, religious diversity is important for improving the quality of education, yet it is not mentioned, clearly because it has political overtones. Another kind of diversity which is extremely important for the quality of education is having students studying different subjects. So a university with many more departments should get some credit compared to narrowly focused universities. But do you really think that the IITs were going to include that parameter in the ranking?
At one curriculum workshop, I heard a very famous computer scientist say that the number of courses in the curriculum is one of the strongest predictors of quality of education: the lower the number, the better the quality. If students are being asked to learn 6-7 courses in a semester, the outcomes will be poor. And he showed, for a large number of quality CS departments, that the top departments typically have 4.5 courses per semester, good departments have 5 courses per semester, and it goes downhill from there. We could use that parameter (a lower course load not only gives students time to learn each subject, it also reduces costs and class sizes, allows more assignments to be graded, and so on). We don't seem to have such a simple predictor in our ranking.
The point of all this is not that the ranking framework is poor. Of course, it is much better than asking people in diverse fields to name the top X colleges; the perception of non-experts seems to dictate the current rankings in India and even abroad. The point is to understand that rankings are based on proxy variables and not on a direct measurement of quality (since there is no direct measure). Those proxy variables are disputed and have their own limitations, and hence rankings have limitations. This is what all stakeholders need to understand. Rankings are just one more input: they can be used to shortlist your potential places to study, but then you must think of your own interests, preferences, and personality, and do your own research on those colleges.
My concern with this ranking, as I said at the beginning of this article, is that people will have so much trust in this linear ordering that they will not do even the limited research that happens today. I am assuming that a poor-quality private ranking supplemented by whatever little research goes on today is better than a better-quality government ranking with no research at all.
The biggest advantage of this ranking process will be that reliable data will be available at a common portal for most good institutions (hoping that most of them will participate in the ranking). Not only will that data be available, there would also be a system to challenge any information, and hence colleges would hopefully provide honest data. And hopefully, the systems would be strong enough to ensure compliance; that is, if it is found that wrong data has been given, the college could be barred from the ranking. And hopefully, there will be an interface where I can search and order colleges based on my own queries.
What is even more disappointing is that the behavior that this framework and the government hope to encourage through competition for better ranks could have been achieved otherwise. First of all, just the publication of this report will encourage the private players to modify their rankings in the right direction. Second, a simple way to do this would be to have NAAC require colleges to keep updated data on the NAAC portal on pain of losing accreditation, and to allow challenges to that data, similar to what the ranking framework is proposing. Allow people to search and order accredited colleges, and so on. NAAC could even allow colleges whose formal accreditation has not yet been done to upload their data. The key is to have good-quality data available which can be easily searched, so that colleges can be ordered on a multitude of queries. You really don't need a single government-approved formal linear ordering to help prospective students and parents. After all, the data for ranking and the data for NAAC overlap hugely. So avoid duplication of effort. Avoid linear ordering. Avoid government approval of that linear order. And yet, give all that quality information to those who need it, and let them use it in interesting ways. In fact, I can see many people doing research on that data, coming up with a multitude of lists, and sharing them on the Internet, as sketched below. That would be great, since students and parents would then understand that a ranking depends on one's perspective, and it would encourage them to do more research.
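As a small illustration of that "multitude of lists" from the same data, here is a sketch in which the college names, attribute scores, and weights are all made up; the point is only that different preference profiles produce different orderings:

```python
# Hypothetical data: three colleges with two made-up attributes,
# each normalised to a 0-100 scale.
colleges = {
    "College A": {"teaching": 80, "research": 40},
    "College B": {"teaching": 60, "research": 90},
    "College C": {"teaching": 70, "research": 60},
}

def rank(weights: dict) -> list:
    """Order colleges by a weighted sum of their attribute scores."""
    score = lambda attrs: sum(w * attrs[k] for k, w in weights.items())
    return sorted(colleges, key=lambda c: score(colleges[c]), reverse=True)

# A teaching-focused student and a research-focused one get different lists:
print(rank({"teaching": 0.8, "research": 0.2}))  # ['College A', 'College C', 'College B']
print(rank({"teaching": 0.2, "research": 0.8}))  # ['College B', 'College C', 'College A']
```

No single linear order is "the" right one; the ordering falls out of the weights, which is precisely why an open data portal beats one official list.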
Frankly, the only reason not to use the data with NAAC and NBA can be that the IITs don't want to deal with those agencies. This could have been a great opportunity to overhaul accreditation, but the IITs' strong resistance to being compared with other institutes has done a great disservice to accreditation in this country, and by extension to the whole higher technical education sector. And this ranking framework is another outcome of that attitude of the IITs.
The government could have done other things as well. Whatever it believes the parameters of good quality to be, it can incentivize colleges and universities to improve on those parameters. The government has all the power in the world to align incentives with desired outcomes without forcing those outcomes on the institutes: you will get more grants if you do this or that; you will not get large projects unless all your data is on the central portal; and so on. But this would amount to giving autonomy to colleges and letting them decide which goals are important to them vis-a-vis the support they can get for those goals. Governments do not work that way. They dictate the goals and then have a complicated process to judge whether those goals are being achieved.
And finally, is there anything positive in this? Of course there is, and perhaps I should support this framework for that reason alone. It will allow lazy HR folks to make better decisions. Currently, many HR folks decide which colleges to visit for campus placement (assuming no corruption) roughly as follows: if we have to go to 50 places, let us go to the IITs, NITs, IIITs, and BITS; if we have to go to 10 places, then the old IITs and a couple of places with whom we have friendly relations. It does not matter that a new NIT may provide far poorer quality of education than some of the private colleges. It is simply a way to avoid doing any research on whether the education in those institutes aligns with the requirements of the company. With this ranking in place, hopefully, a lazy HR manager will be able to say: let us go to the top 50 places. (He will still not do research of his own.) And these top 50 would be a better list than the set of IITs, NITs, IIITs, etc. It will, therefore, give private colleges a chance to prove that they too are providing quality education. They already appear in private rankings, but those rankings are not trusted. But what will happen if a deemed university not liked by the Tandon Committee appears in the top 100?
To summarize: the ranking framework will certainly be a better predictor of quality than the current private rankings have been. But it does nothing to help our universities appear in the top 100 of international rankings. Moreover, a government-backed linear ordering of colleges will command so much trust among stakeholders that they will not appreciate the limitations of the rankings, and that will not be good for decision making. We need rankings, but in the private sector. We also need to do things to improve our ranks in international rankings. And most importantly, we need to do all this while fully recognizing the limitations of rankings.
5 comments:
One more inherent problem with rankings that you have alluded to is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure: https://en.wikipedia.org/wiki/Goodhart%27s_law
In other words, the tendency to game the system. This is seen everywhere, from the US News rankings (http://www.nytimes.com/2012/02/01/education/gaming-the-college-rankings.html) to software benchmarks (http://www.anandtech.com/show/7384/state-of-cheating-in-android-benchmarks) to the latest VW scandal.
Our PM likes to talk about 'minimum government', but it seems that for him it means not changing the appallingly low number of policemen, judges, bureaucrats, teachers, and health workers we have per capita, and continuing to let Indians suffer from the absence of public goods.
Instead, we have more flights from Air India, more government schemes, and more government interference in education.
The GoI, by setting rigid parameters for 'ranking', is discouraging creativity and flexibility among higher education institutions. Is there any incentive for a state government to support a university that offers unique and innovative programs, now that it knows this will hurt what the government 'officially' thinks of it?
Given that rankings are here to stay, I think it is a good idea to have an India-based ranking. Unlike the private rankings, which are geared towards UG students, these rankings will hopefully be more "rounded" and help sponsors and prospective faculty better understand where institutions stand. I agree that a linear order is difficult to achieve, and is unfair to institutes that are close together, but, as I said, rankings are here to stay. If nothing else, it will put pressure on the centrally funded technical institutes (CFTIs) to perform (the proposed rankings are only for technical institutions, I understand; other rankings will follow). Global ranking systems do not reveal enough information on how an institution has performed, and they are not responsive to correction of errors. Based on how the global rankings are done, I feel our rankings must take into account the age of institutions, especially as there are so many "new" CFTIs. For an institute which started in, say, 2008, there will be a big difference between a two-year average and a five-year average (for citations).
Dr. Barua, why would an accreditation system based on certain expected standards not be preferable? The great philosophical problem with 'rankings' is the hierarchy implicit in them. These hierarchies become embedded in a society's psyche and keep resurfacing throughout its life.
And as I mentioned earlier, such rankings and hierarchies greatly discourage experimentation and innovation, other than whatever boosts points on the parameters established by the rankings.
Prof GB says "...they are not responsive to correction of errors". It is not that bad. We have been seeing how Panjab University gets high rankings, mostly because of the high citations received by papers published by large collaborations like those at CERN. I complained about this when Nature put Panjab University at No. 1 in India (http://www.nature.com/news/india-by-the-numbers-1.17519; see my comment there). Nature invited me to formally submit a letter, which was published in June (http://www.nature.com/nature/journal/v522/n7557/full/522419b.html, Nature 522, p. 419). I don't know if this is the reason, but this year two of the rankings have introduced a correction: in its citation metric, QS has removed papers that have more than 10 institutions in the author affiliations, and THES (which used to rank Panjab University as No. 1 in India) has removed papers with more than 1000 authors from its metrics. There may be more such corrections possible.