
Sunday, April 9, 2017

NIRF 2017

So, the second round of rankings is out. And every single problem that I could have imagined with a ranking system, I find in these rankings. I had written two blog articles when the NIRF framework was announced in September 2015. They are here and here.

Let me summarize my most important objection to NIRF. A lot of stakeholders use rankings to take important decisions, and therefore it is important for universities and colleges to participate in ranking games; but there are several inherent problems with ranking games. The biggest problem is that a single linear rank gives a hugely distorted view of the academic landscape. And even to get that, the data is simply not available in a reliable fashion. So a lot of fudging takes place.

Having lots of rankings in the private sector is still OK, since most stakeholders will take each of them with a pinch of salt, and hopefully do a bit more homework about the universities they are interested in. While stakeholders may not understand the exact nature of the problems with rankings, they are skeptical of any private ranking, whose aim typically is to make money out of this information. Also, having multiple rankings means that to some extent stakeholders realize that either the rankings are inconsistent or there are indeed multiple ways to look at universities. But when the government creates a ranking, stakeholders may use it as the primary information, and that would be disastrous.

The second major problem with a government ranking is that it leads to inconsistency in public policy. On one hand, having more out-of-state students is considered positive for educational outcomes; on the other hand, we could have 50% reservation for in-state students in institutes managed by the same government. The opposite holds for women's participation: it is said that having a 50-50 population is good for educational outcomes, but nothing will be done to attract a greater number of women students to IITs.

And the third major problem with a government ranking is that the government is further reducing its already low credibility. There is no way one can verify this huge amount of data. So there will be huge errors in the data, leading to some strange rankings. Who should be held responsible if a potential student trusts these rankings, gets admission, and then finds out the truth? What if this happens at a government institute? Will some heads roll, or will we just say, "you were stupid to trust the government, and don't deserve anything better"?

Last year, when the NIRF ranking came out, there were lots of questions, and answers were rarely provided; really, the only answer was, "We did not have enough time. So some things may have been overlooked. We will do a better job next year." I personally was very upset at the lack of transparency. We had been told that data consistency would be maintained by way of complete transparency: we would all know what had been submitted by various institutions, and that would keep a check on each institution lying. When I approached some of the folks in NIRF, I was told that due to the very short period within which they had to come up with the ranking, they had taken some ad hoc decisions which may be difficult to explain to the public; hence, for that year only, the transparency was missing, and the 2017 ranking would be completely transparent.

Well, I don't see the submissions of various institutions on the NIRF website this year either. I am told that this year it was compulsory for all participating institutes to keep a copy of their NIRF submissions on their own websites, and that any institute which did not keep that information would be removed from the ranking. I just checked several universities, and there is at least one in the top 10 which does not have NIRF data on its website (at least, I searched on the main page, used their site search, and used a Google search with "NIRF " as the query). One other university has only partial information on its website, with some crucial pieces of information missing. Others too may remove it in due course. To be transparent, the easiest thing would have been to put up all the submissions on the NIRF website (maybe they could have asked the institutes to submit sensitive data such as placement details separately, and the rest could have been published by them). I don't know why they couldn't do this.

Also, transparency is not just about making the submitted data public. It is also about how NIRF has interpreted it, and how that information has been converted to marks. Yes, they do give scores in the five categories, and this is appreciated, but they should give out the scores for each component of the major factors, and also tell us the various parameters that they have used in the ranking.

In the small bit of research that I did, reading several submitted reports, this is what I found:

One of the top 5 business schools is claiming that in the past 3 years, 100% of their graduates have got a job through campus placement. Frankly, it is unbelievable. Not a single student opting out of placement to start a company, for example? I see this happening in all IITs, NITs, etc., but not at the top business school of the country? Not even one graduate deciding that maybe a PhD is a good idea? Not even one graduate going back to the company s/he came from 2 years ago, and not participating in the campus placement? (I thought even the Government of India sponsors a few candidates for MBA at such places, and they are expected to join back, and not seek campus placement.) Also, every faculty member except one has an experience of 5 years or more. Have they not hired any faculty member in the last 5 years, or has everyone hired in the last 5 years left them, or do they only hire faculty with 5 years of experience? All this could be true, but it is a bit unbelievable. And at least as far as the student placement is concerned, if the data is correct, then it is quite sad.

In one university, the submitted data for the consultancy amount is X, while the summary sheet for that university put out by NIRF shows more than 10X. I could understand exactly 10X: someone typing the information could have made a mistake in placing the decimal point. But having an arbitrary number which is unbelievably large is rather strange.

In one university, I found the number of faculty members in the NIRF data sheet to be unexpectedly large. I went to the university website. The NIRF submitted data is not on their website. So I went to the website of each department and counted the faculty members shown there. The number is less than half.

The collection of information on papers has been outsourced by NIRF. I am told that in one university, the numbers shown in the data sheet are much lower than what the university is claiming. The university folks tell me that NIRF sent them an email several months ago telling them the number of papers that they had found. The university wrote back giving its own numbers, which were much higher. Now, if both of them are looking at the same database, the only reason for this difference could be that the search queries are different. In particular, faculty members use different names of the university (like IIT Kanpur, I.I.T.K., Indian Institute of Technology Kanpur, and so on), and it is possible that they have searched for some names and not others. It would have been absolutely trivial for them to share their search query, or ask for our search query. But there was no communication from them, and at the end, when the rankings came out, they had simply used their own data. To me, this is the height of callousness and incompetence. And we are going to use such data to take important decisions on funding and autonomy.
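
To make the name-variant problem concrete, here is a minimal hypothetical sketch. The variant list, the query format, and the sample records are all illustrative; Scopus and Web of Science each have their own affiliation fields and query syntax:

```python
# Hypothetical sketch of the name-variant problem: papers are only
# counted if the affiliation query covers every variant the faculty
# actually used on their papers.
variants = [
    "IIT Kanpur",
    "I.I.T.K.",
    "Indian Institute of Technology Kanpur",
    "Indian Institute of Technology, Kanpur",
]

# A generic boolean affiliation query covering all variants; real
# databases each have their own field tags and syntax.
query = " OR ".join(f'"{v}"' for v in variants)
print(query)

# Searching for only one variant silently undercounts:
papers = [
    {"title": "Paper 1", "affiliation": "IIT Kanpur"},
    {"title": "Paper 2", "affiliation": "Indian Institute of Technology Kanpur"},
]
print(sum(p["affiliation"] == "IIT Kanpur" for p in papers))  # 1
print(sum(p["affiliation"] in variants for p in papers))      # 2
```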

I see in the data sheets for engineering colleges that they asked for the median salary, while from management schools they asked for the average salary. Why this difference? I thought the median salary captured the performance of placement much better, and should be used for everyone. But anyway, I checked the numbers. While many institutes have given numbers which are believable as median salaries, there are many where the numbers are simply not believable. My guess is that they have given average salaries rather than median salaries. (Averages are invariably much higher than medians for most colleges.) In fact, for about two institutes, I am sure this has happened. I guess in some places this may have happened inadvertently, but in some places it may be a deliberate error.
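
A toy calculation makes the average-versus-median gap concrete. The salary figures below are entirely made up, chosen only to mimic a typical skewed placement distribution where a few outlier offers inflate the average:

```python
# Illustrative only: hypothetical salary figures (lakhs per annum),
# not actual placement data from any institute.
salaries = sorted([6, 7, 7, 8, 8, 9, 10, 12, 40, 60])

n = len(salaries)
if n % 2 == 0:
    median = (salaries[n // 2 - 1] + salaries[n // 2]) / 2
else:
    median = salaries[n // 2]
average = sum(salaries) / n

print(f"median  = {median} LPA")   # 8.5
print(f"average = {average} LPA")  # 16.7, pulled up by two outlier offers
```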

Then there is the data about capital expenses (not counting building construction). Can this number vary drastically from year to year? What if it is shown as a small fraction of last year's expense? Shouldn't that raise a suspicion that perhaps last year they had some construction going on and this year there is none? Shouldn't they seek some verification of the data, at least in cases of suspicion?

And note that I am only talking about universities ranked in the top few in some category or the other. I understand that it is not easy to verify data for all colleges and universities, but they can always have a 2-stage or 3-stage process. That is, get data from all colleges, unverified. The only "stick" is that all this data will be posted on the NIRF website, and if any questions are raised, they will be investigated; if a serious error is found, the information will be given out to the press and the college will be barred from the ranking for some time. With this, you finalize the top 120 or so universities in each category, and for these universities, some level of checking can be done. Maybe some proofs can be asked for. Maybe someone can check for information on the university website, and so on. If any glaring errors are found, they are out of the ranking. This way, there will be better trust in the top 100. And finally, for those whom you are going to declare in the top 20-25, there should be yet another level of data verification. Maybe an agency can be hired to actually visit the university and verify everything. So the top 20 or so ranks would be based on very high quality data.

(Of course, the problem of a linear ranking not reflecting the diverse strengths of universities will always remain.)

Notice that I have not discussed the parameters per se, only the poor quality of data that they have against those parameters.

The parameters are also a problem. They are strongly in favor of bigger institutes. I wonder why. I can see that a bigger class leads, to some extent, to peer-to-peer learning. But beyond a point it does not help. What helps then is having a variety of disciplines and a variety of courses available on the campus; just having a larger number of students does not guarantee that variety. There are many other serious problems with the parameters, but maybe that is for another blog. I do wonder if they did any research to establish a correlation between those factors and better teaching/learning or better research. I also wonder if they did any sensitivity analysis with their parameters. (If I change the parameters slightly, does it result in major changes in the top order? See the sketch below.)
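
For what it's worth, such a sensitivity check takes only a few lines to prototype. The sketch below uses entirely made-up institute scores and hypothetical category weights (nothing here comes from the actual NIRF data); it perturbs each weight by up to 10% and reports whenever the top order changes:

```python
# Toy sensitivity analysis: made-up scores for four institutes across
# five categories, and hypothetical category weights.
import random

scores = {
    "A": [80, 70, 90, 60, 75],
    "B": [78, 72, 85, 65, 70],
    "C": [85, 60, 80, 70, 65],
    "D": [70, 80, 75, 75, 80],
}
weights = [0.30, 0.30, 0.20, 0.10, 0.10]  # hypothetical weights

def rank(w):
    # weighted sum per institute, sorted best-first
    totals = {name: sum(wi * si for wi, si in zip(w, s))
              for name, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

baseline = rank(weights)
print("baseline order:", baseline)

for trial in range(20):
    # perturb each weight by up to +/-10%, then renormalise to sum to 1
    p = [w * random.uniform(0.9, 1.1) for w in weights]
    total = sum(p)
    p = [w / total for w in p]
    order = rank(p)
    if order != baseline:
        print(f"trial {trial}: order changed to {order}")
```

If small perturbations reorder the top ranks, the published order is telling us as much about the choice of weights as about the institutions.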

I did not write last year after the NIRF rankings were published, since I wanted to see what they would do once they had enough time to do things right. But unfortunately, they have not utilized the time effectively.

Of course, the good thing is that most students/parents who have approached me with admission-related queries do not seem to bother about the NIRF ranking. I hope it stays that way.

Added on 9th April:

More interesting stuff.

Jawaharlal Nehru Centre for Advanced Scientific Research, which has no teaching program and is considered a tiny research centre, is ranked the third best teaching place in the country, overall. Such is the stupidity of this ranking that a place without a teaching program has been ranked the 3rd best place in India for teaching.

Homi Bhabha National Institute shows its annual budget as 473 crores, and has 973 faculty members. Basically, this is the entire budget of the DAE institutions. Are they really educational institutions? That they have been declared a deemed university, to encourage their scientists to get PhDs in-house, now means that they can declare the entire budget as the university budget, and all scientists as faculty. And NIRF 2017 accepts that. Further, with 973 faculty members, they have 252 publications listed in Web of Science, easily the worst ratio of all research places in India. But guess what: they are in the overall list because of a very strange rule. The number of faculty members is deemed to be 10% of the number of students, irrespective of the actual number of faculty. So in the case of HBNI, it will be assumed that these 252 papers have been written not by 973 faculty members, but by 310 faculty members. Absolutely crazy stuff.
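
To spell out the arithmetic (with one caveat: the student count below is back-calculated from the 310 deemed-faculty figure, so it is an inferred number, not one taken from the data sheet):

```python
# Faculty and paper counts as quoted from the HBNI data sheet at NIRF;
# the student count is an assumption, back-calculated from 310 = 10% of students.
actual_faculty = 973
papers = 252
students = 3100                     # assumed
deemed_faculty = 0.10 * students    # the NIRF rule: faculty deemed to be 10% of students

print(round(papers / actual_faculty, 2))   # 0.26 papers per actual faculty member
print(round(papers / deemed_faculty, 2))   # 0.81 papers per "deemed" faculty member
```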

This article in wired points out that there are drastic changes between the 2016 ranks and the 2017 ranks. Can the quality of universities change this drastically from year to year? It specifically mentions a university having jumped from rank #83 to #12. It means that either the ranking last year was arbitrary or the ranking this year is arbitrary. How do we know that it is not arbitrary this year?

If you look at the overall scores in teaching/learning, you would find that most of our agriculture universities and veterinary colleges have far superior teaching programs compared to the IITs. Maybe we should hand over the IITs to the Ministry of Agriculture; they seem to be running far better academic institutions.

Very sensible advice from Prof. Ashish Nanda, Director, IIM Ahmedabad: "Rate, don't rank, academic institutions" in Hindu Business Line.

How do you explain the huge difference between the rating by NAAC and the ranking by NIRF?

Added on 11th April, 2017:

A blog by Dhruv Baldawa, "Why we should not be ranking our educational institutions"

Added on 23rd April, 2017:

IIT BHU raises objections to NIRF ranking 2017, says the list is based on 'incomplete data'. Here is the news report.

25 comments:

Siddharth Jain said...

A ranking coming from the government is believed to be true by a layman. But as far as what I know about the colleges mentioned, based on feedback from my students who studied or are studying there, this ranking seems more fiction than reality. Also, Sir, I believe government institutes have also not bothered to submit the data properly; for example, in the case of my college MNIT, the ranking says no student opted for higher studies, which I think is impossible (also, I believe the college actually never collected such data).

sumunthra said...

Excellent and great thought. Also, single or focused branch colleges like IIITs must not be graded along with multi-branch universities. The multi-branch universities which offer courses in civil, mining, chemical, etc. obviously have more publications, a larger count of faculty, funded projects, and lab resources, which the IIITs and DA-IICT can't have.

Pallab said...

NIRF is a great start.

Is it perfect? --> No
Are there anomalies? --> Perhaps, Yes
Can it be perfect and make everyone happy? --> No
Can it be improved over time? --> Yes.

I don't see any need to be overly cynical. It should be a parameter used by interested students to select their institutions, as much as a means for the government to decide on funding.

Sumitava Mukherjee, PhD said...

Great post, Prof. Sanghi. I think this needs to find a place in the public media to deter prospective students and their parents from taking all this seriously. However, just as with private rankings, the list is close to what we might get by a simple rank ordering of public perception, so in a way it's not too absurd. The worry is in the data being used and given, and the processing, as you have so wonderfully pointed out.

Unknown said...

As far as I know, placement data counts only those who opt for placement. Those who come from the government to do an M.Tech/MBA, or who surely plan for a PhD, generally don't opt for placement.

New India said...

Prof. Sanghi, how different is the methodology from the one in 2016?
In that perspective, some IITs have slipped enormously.
Will MHRD take note of this?
Will there be a review of the Directors in charge (who are stifling growth by their dictatorship)?

Dheeraj Sanghi said...

@Manish, we are not looking at how the placement office of IITB provides data to the Director, IITB, but at how IITB provides placement data to NIRF. And in the NIRF form, there is only one question about how many people have graduated, and another about how many were campus placed and how many went for higher education. If you reduce the number of graduates compared to the number of students who were admitted, then you get fewer marks for timely graduation.

Dheeraj Sanghi said...

@New India, the ranking criteria and process are so arbitrary that reading any meaning into the ranks is perhaps not desirable. MHRD should surely get a review done of the institutions that it finances. They did such a review about 4 years ago.

iitmsriram said...

Dheeraj, I agree broadly with several of your comments but I think you are mistaken in some. Pl. put up with my longish rant :-)

For example, from all the data available, NIRF has asked only for median salaries this year, while they asked for average salaries last year. Is there some documentation that shows they asked management schools for average salaries this year? Even the "published" data available at NIRF and many of the institutional sites shows only median salary. Maybe there was a typo in a data entry form? The instructions were quite clear. While on the topic, I notice that the median salary entered by IITs for UGs is around 10-12 lakhs, while IITK has entered 14.7 lakhs. Maybe this is the average salary and not the median salary?

More importantly, I think it is not fair to comment on the methodology now, after the rankings have come out. The draft methodology document was circulated after the results were published last year, and the "final" methodology document was posted at the NIRF site last September. If we felt the methodology had errors, the time to comment would have been then, not now, I feel.

I agree with you about data authentication. It looks like NIRF tried some sort of crowd-sourcing, by asking institutions to publish the data and making it open to anyone to comment or challenge, but this seems to have failed. Part of the failure, I believe, traces back to us. The deadline for NIRF data submission was Nov 30, but very few institutions submitted data at that time. Most institutions, IITs included, submitted data during the "correction window" in December, and long after that too, all the way into February. As the results date had been announced, there was not much time left for the publish-challenge-correct cycle (or even otherwise to process the data) as envisioned.

About the data: a good deal of it is available (and has been since the day the results came out) at the NIRF site, enough to verify what effects are at work. I agree with you that having the full raw data published at the NIRF site would be better than asking institutions to host it. We had suggested that NIRF at least host links, so that dead links would also show where institutions had not submitted data.

Dheeraj, I don't believe your assertion about institutions with multiple identities. Both Scopus and WoS have done quite a good job of identifying institutions and it is very very unlikely that there are errors on this front.

I agree with you on distortions due to Institutions like HBNI. The methodology obviously has problems dealing with research institutions that also give out degrees. Incidentally, I believe this is the same factor at work in the Agri and Vet universities - most states (and the central government departments also) fund these as research institutions to support agriculture within the states, but these also enrol students. So, they end up having much better than 1:10 faculty ratios and their expenditure per student will beat IITs, so they score better than IITs in teaching / learning.

I have a suggestion for you to think about. Since this is supposed to be a framework, we should ask NIRF to host a "my ranking" front end, where we can enter whatever weights we want and get our own ranks. For example, someone looking to compare institutions for UG admissions, could consider giving zero weightage to research publications, even blank out the PhD enrolment portion of the teaching learning metric, give high weightage for placement and graduation fractions. The ministry might want to give weightage to diversity (reserved category admissions) as being socially desirable from the government viewpoint, but prospective students could choose to do in other ways - maybe assign high weightage to female enrolment fraction ;-)

I am right now putting together a simple version of this for engineering institutions alone and only up to the first level (the broad metrics; anyway, this is all we will be able to do, as NIRF has not provided detailed scores on all items). Let us see how this works.

Dheeraj Sanghi said...

@Sriram, in all engineering colleges, it is written as median salary, and in all management schools that I checked, it was written as average salary. (Of course, I have only checked about 5 management schools.) I am quite certain that IITK has given the average salary. I actually sent an email to all faculty members at IITK saying that we seem to have given wrong information, and the faculty member who is responsible for the ranking data did not respond. So I would assume that they do realize that they have given wrong information. (Will NIRF now take action against them? Not a chance. NIRF is too IIT-centric to do that.)

In terms of Scopus and WoS, I was referring to IIIT-D. We have once again checked our publications today for the last three years, and they are what we claimed in November, or whenever the email from NIRF had come. Maybe we are doing something wrong. But when an institute challenges your data, and it is your first year (I discount the 2016 ranking as just some random numbers; maybe I should have discounted this year too, and not written this blog), would you not engage with one of the top research places in the country to see if the agency to whom you have outsourced this particular data is doing a good job? If you are interested in establishing your credibility, would you not engage with a university which is claiming different data, data which happens to be easily verifiable? It means that the organization does not care for credibility; like most government regulators, they are a law unto themselves.

If someone did not submit data till February, shouldn't NIRF take them out of the ranking, rather than later claim that they did not have any time to do any checking, and display some random numbers which have the potential to create havoc in the market? In fact, by agreeing to accept data till late, they have discriminated against many other colleges. I am sure there were many colleges who somehow could not submit by November 30th, and later felt that they could no longer do so. They have been unfairly left out. Indeed, some colleges in DU have claimed this. I am sure NIRF did not send any emails to colleges saying that the deadline was being extended.

The suggestion of a "My Ranking" was given by me to Prof. Surendra Prasad last year. But maybe you can ask NIRF again to consider this.

I disagree that it is too late to criticize the methodology. Of course, most of my criticism has been about the implementation, which has been pathetic, but I don't see why the methodology, once frozen, cannot be criticized. Did they take into account all the suggestions that came? Of course not, simply because they would have received contradictory suggestions. So who decided the final framework? They did, not me. Also, when the final framework was published, this thing about dividing papers by a deemed faculty of 10% of the students was something that I did not understand. In fact, it was only when I was talking to IIITH for this blog that I found this out. They should have understood what the impact of that deemed faculty would be; I am not supposed to understand each line of that document.

One more thing. When they changed the weights and methodology this year, did they simulate the ranking using last year's data? Did they find anything unusual? Knowing NIRF a little bit, I would bet that they didn't.

Pallab said...

A ranking methodology can always be improved, and the same goes for NIRF as well. That said, it is very convenient to say that institutions should not be ranked. What gets measured gets done. Indian institutions (barring a few) are not known for quality, because they usually have students lining up to join them. Institutions do not have to compete for good students, as our numbers are so skewed in favour of institutions. Therefore, ranking should be done without fail, every year, year on year.

Any institution that does not score well, instead of analyzing where it went wrong, starts questioning the methodology and the need to score / evaluate institutions. If I get a low CGPA, should I start questioning the faculty, her teaching style and the exam approach?

Unknown said...

Dear Prof. Sanghi,

This is to correct you on below mentioned points with reference to data shared in your blog post on JNCASR:

1) JNCASR admits students with BSc, BTech, MSc, MBBS, and MTech degrees in a wide range of subjects, and offers very stimulating courses designed innovatively for interdisciplinary research.

2) JNCASR faculty have been actively offering these courses for the last 20 years to students in the full-time PhD, Integrated PhD, and MS degree programmes, who are required to complete courses with well-defined credits (12 to 64, depending on the programme) and score a minimum CGPA of 5.5 to embark on their thesis work.

3) The course work at JNCASR played an important role in its recent NAAC accreditation, where it got a score of 3.76/4.0 (A++).

4) Presently, JNCASR is ranked 4th amongst all the Universities in the country.

We hope the factual details are verified in future before posting online.

Sincerely,

Nabonita Guha
Sr. Library cum Information Officer,
JNCASR, Bangalore

Dheeraj Sanghi said...

Dear Ms. Guha, I have gone through the submission of JNCASR. It is mentioned that there are exactly 17 Masters students in the whole university. I would guess some of them would be in the MS by Research, some in the Integrated PhD, and some doing the MS (Engg). As per your website, the university has graduated exactly 55 MS (Engg) students since 2002.

I am sure you would appreciate that a teaching program is one in which most of the graduation outcomes are delivered through a teaching process. Almost all programs at JNCASR are research programs. They may have a course requirement, but that does not make them teaching programs. Nowhere in the world is a PhD program or an MS by Research program considered a teaching program. MS (Engg) is sometimes considered a teaching program (if it is course based) and sometimes a research program (if it has a substantial thesis part). I couldn't find the detailed graduation requirements of the MS (Engg), so I cannot say whether it is a teaching program. However, I would not consider a university which graduates 4-5 MS (Engg) students per year to be running a teaching program.

Please note that the blog is not questioning the output of JNCASR. They have a great research output, in fact, the highest per faculty research papers of all the universities that have been ranked in 2017. And many of us in academia are proud of JNCASR's output. This blog is questioning the methodology of NIRF which declares a research institute as one of the best teaching places.

iitmsriram said...

Dear Ms. Nabonita Guha, the factual details very much confirm what Prof. Sanghi is stating. As per the NIRF data, JNCASR has 17 PG students, about 300 PhD scholars, and fewer than 40 faculty members. I believe this perfectly fits the description of "... no teaching program and is considered a tiny research centre ...". Thanks, but please don't try to teach us what a teaching program is. It is really a stretch of the imagination to call such an institution a university.

Venkat said...

Dear Prof. Sanghi,

I understand some of your concerns, but would slightly differ with some regarding HBNI. Basically, HBNI includes several institutions not directly linked to DAE, like: Tata Memorial Centre, Mumbai (cancer research); IMSc, Chennai; Institute of Physics, Bhubaneswar; HRI, Allahabad; Saha Institute of Nuclear Physics; NISER, Bhubaneswar; etc.

Most of the above institutions admit MS/PhD/postdoc students who are paid a "fellowship" and not a DAE scientist's salary (sadly!). Their research output is also significant. One more point I would like to mention is that NISER, Bhubaneswar takes in a large number of students after +2 for integrated MSc courses as well.

DAE is also one of the significant funding sources for several mathematics and science education related activities in India.

Therefore I think that it deserves to be classified as such.

Regards,

Venkat.

Dheeraj Sanghi said...

@Venkat, I am only quoting numbers from the HBNI data sheet at NIRF. If your data is different from what NIRF is saying, let us join hands and ask NIRF to correct all the data. As of now, 973 faculty members publishing about 80 papers per year makes it one of the worst publication records among the top 100 ranked institutions. (I have been separately told that this is indeed erroneous data, and that many papers continue to carry the names of the constituent institutions and not HBNI. I think that is a good reason for you to complain to NIRF.)

vaibhav said...

Sir,
I am a Delhi student. I can get CSE in both IIIT-D and IIIT-H. What should I choose? Yes, I have a slight interest in CSE (slight because I haven't studied anything of CS).

Dheeraj Sanghi said...

@Vaibhav, this is the wrong forum. Wait for some time. We will have our students and alums responding to any such question on a variety of social media platforms.

iitmsriram said...

Dheeraj, I have created a google sheet that can do "My NIRF ranking" on a limited scale. It has partial data for the top 25 engineering institutions (full data for top 10 and only first level data after that), one can enter own weights and see what ranks come out. I have shared this with NIRF also. All are welcome to try.

https://docs.google.com/spreadsheets/d/1002cjQiX6Zd9KbxilaKM-BeoW-1cDrNki6ZIZuxy2ck/edit#gid=1181070635

Unknown said...

What is your opinion about the department-wise QS World University Rankings that they publish every year? Looking at their methodology, it is evident that they use quality of research, academic reputation, citation index, and employer reputation as important ingredients for ranking, and give a lot of weightage to them. Can it serve as a model for NIRF, as NIRF was instituted to address the issue of Indian universities not performing well in world rankings such as QS? Also, I am surprised that neither IIIT Delhi nor IIIT Hyderabad, and no NIT, is in the list of QS rankings. Please give an unbiased response to this, as students have come to believe that IIITD and IIITH are the only two IIITs worthy of being looked upon with respect as far as research and faculty are concerned; all the other IIITs, including Allahabad, Gwalior, Kancheepuram, and Jabalpur, are just laggards. Surprisingly, one of these (Allahabad) has made it to the list for two successive years, ranking 11th in India, which is difficult to swallow, as we believe that IIITD and IIITH are much better at research than any of the centrally funded IIITs.

https://www.topuniversities.com/subject-rankings/methodology

Dheeraj Sanghi said...

@Vaibhav, NIRF has multiple problems. One is the methodology, which will improve over a period of time; whether it becomes closer to the QS department-wise ranking or should be something else is a matter of debate. (That the current methodology is very poor is a problem, since UGC has already started linking goodies to NIRF ranks.) Second is the implementation, and here NIRF can certainly learn from QS, but I don't expect it to. As I have said in one of the comments, NIRF has this attitude that "we are the government and hence we are always right." Also, it is too much to expect the government to ever do anything which would put government institutes (particularly the IITs) in a poor light. The third problem is really my fundamental objection to NIRF: that ranking should not be done by the government even in the best of scenarios. The government can insist on accreditation (hopefully by any of multiple private accreditation agencies). The government can let private sector rankings exist. But a ranking done by the government essentially gives rankings a halo which they should not get.

@kumar said...

Sir,
If we make a ranking of Indian universities from the QS world rankings, we can clearly see that the top 10 institutions given by NIRF can be placed more or less somewhere in the top 10. But do you think the two universities (Anna and Jadavpur) are worthy of being placed where they are? Because if we see the current trend, students prefer NITs like Trichy, Warangal, etc. instead of them. Even BITS Pilani has a world ranking of probably around 800.
Can the QS world rankings be trusted? How are these universities gaining a rank nearly equal to IIT Guwahati, if we see the recent trend in the QS world rankings? Further, why are the IITs declining in the QS world rankings? Why is more educational grant being given to the top 7 IITs instead of the two universities which I have stated? Even on Quora, I found that people prefer the top NITs, BITS Pilani, etc. above these two universities. Where is the fallacy?

Dheeraj Sanghi said...

@Kumar, each ranking decides on some parameters, gives some weight to those parameters, and then has some process to compute each parameter. Now, a ranking could be irrelevant for several reasons. One is that the process of collecting data is faulty and there are errors or randomness in it; in that case, the ranking can be considered untrustworthy. But the reason why the preference of students differs from a ranking (whether NIRF or QS) is not simply that the ranking is untrustworthy (in the sense of the data being erroneous), but also that what it is evaluating is different from what a student is looking for. Of course, it is also because students and parents believe that quality has only one parameter, placement, and that the information on Quora about placement can be perfectly trusted.

Hari said...

It is clear that the NIRF methodology and implementation are both flawed. But what surprised me is the lack of official response from the participating institutes. Those who got good ranks are very happy. Those who did not do well are not speaking, except Prof. Sangal of IIT BHU and maybe a few more (http://indianexpress.com/article/education/iit-bhu-raises-objection-on-nirf-ranking-2017-says-list-based-on-incomplete-data/). For example, IIT Gn got the 8th rank in 2016 and happily celebrated! They did badly in 2017 and did not refute the data of NIRF. Data on the NIRF site shows that the research output in terms of publications, citations, and patents of IITGn is low (https://nirfcdn.azureedge.net/rankingpdf2017/IR17-ENGG-1-18628.pdf). Is this data correct? We do not know the version of the institute (except that the director said the methodology did not suit them)! More voices from the participating institutions will help, I guess. A lot of blame goes to the participating institutes as well! --Hari