Very few computer scientists (or academics in general) think highly of any ranking system. The main reason is that rankings are pretty arbitrary, highly biased by personal opinion, and not very informative. There's minimal feedback pressure on rankers to get it "really right," and no particular reason to think that they do.
The methodologies vary widely, and small methodological differences can result in wildly different rankings because the gradations between schools are usually not very big. In the US News and World Report ranking, five schools all got a perfect score. USNWR's methodology is to solicit ratings from researchers on a scale of 1 to 5 and then average them, so those are five schools to which every respondent gave top marks. But there were also 17 schools that received higher than a 4.0 without receiving a 5.0. Since the individual ratings are whole numbers from 1 to 5, an average of 4.1 requires that at least 10% of respondents gave the school a 5/5, and an average of 4.5 requires that at least half did. All of those schools are outstanding by any reasonable reading of the data, yet it is very unclear what a 4.1 versus a 4.5 actually corresponds to.
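To make that arithmetic concrete, here is a minimal Python sketch of the bound. Nothing here comes from USNWR's actual data; it just solves for the smallest share of top marks consistent with a given average, assuming every other respondent gave at most a 4:

```python
def min_share_of_top_marks(average, top=5, next_best=4):
    """Smallest fraction p of respondents who must have given the top score,
    assuming everyone else gave at most next_best:
        top * p + next_best * (1 - p) >= average
        =>  p >= (average - next_best) / (top - next_best)
    """
    return max(0.0, (average - next_best) / (top - next_best))

for avg in (4.1, 4.5, 4.9):
    print(f"average {avg}: at least {min_share_of_top_marks(avg):.0%} gave a 5/5")
# average 4.1: at least 10% gave a 5/5
# average 4.5: at least 50% gave a 5/5
# average 4.9: at least 90% gave a 5/5
```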
This also shows how much the presentation of results matters. The following four statements are all true of the USNWR ranking:
- UIUC is ranked 4 slots behind CMU.
- UIUC's ranking is five times that of CMU.
- UIUC's raw score is 11% lower than that of CMU.
- UIUC is a top 3.3% school and CMU is a top 2.8% school.
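All of these framings follow from the same underlying data. As a minimal sketch, suppose CMU is ranked #1 with a perfect raw score of 5.0 and UIUC is ranked #5 with a raw score of 4.45; those figures are hypothetical, chosen only to be consistent with the gaps above (the top-X% framing additionally depends on how many schools you count in the pool, so it is omitted):

```python
# Hypothetical inputs chosen to match the gaps described above,
# not USNWR's published data.
cmu_rank, uiuc_rank = 1, 5
cmu_score, uiuc_score = 5.00, 4.45

print(f"UIUC is ranked {uiuc_rank - cmu_rank} slots behind CMU.")
print(f"UIUC's ranking is {uiuc_rank / cmu_rank:g} times that of CMU.")
print(f"UIUC's raw score is {(cmu_score - uiuc_score) / cmu_score:.0%} lower than CMU's.")
# Three very different-sounding framings of exactly the same two data points.
```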
Other sites use a weighted system that scores universities on a variety of factors and then takes the weighted average. The issue with this methodology is that the results are highly sensitive to the weights, and there doesn't seem to be any principled way to decide whether "mean impact factor of faculty" should be weighted 0.3 or 0.1. Even choosing to measure "mean impact factor of faculty" could be disputed: one could use the median instead of the mean, or look only at the 5 most active professors. There are arguments for and against myriad tweaks like this, and again there seems to be no principled way to decide which is best. This wouldn't be very concerning if it mattered little, but it matters a lot: look at 5 random rankings and you'll see very high variance in a given university's position from site to site. The issue is exacerbated by the fact that universities try to game ranking systems.
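To see how sensitive a weighted average can be, here is a minimal sketch with made-up schools, factor scores, and weights (all purely illustrative). Two equally defensible-looking weightings produce different orderings:

```python
# Columns: faculty impact, research funding, graduate placement.
# All scores and weights below are invented for illustration.
scores = {
    "School A": [0.9, 0.5, 0.7],
    "School B": [0.6, 0.9, 0.8],
    "School C": [0.8, 0.8, 0.4],
}

def rank(weights):
    """Order schools by their weighted-average score, best first."""
    total = lambda vals: sum(w * v for w, v in zip(weights, vals))
    return sorted(scores, key=lambda school: total(scores[school]), reverse=True)

print(rank([0.5, 0.3, 0.2]))  # emphasize faculty impact    -> ['School A', 'School B', 'School C']
print(rank([0.2, 0.4, 0.4]))  # emphasize funding/placement -> ['School B', 'School A', 'School C']
```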
How your school is ranked can also vary massively with the discipline or sub-discipline. My alma mater had a fabulous CS theory program, a less-reputed systems program, and (at the time) a non-existent AI program. Someone who says "I studied theoretical computer science at Stella's alma mater" gets a very different response than someone who says "I studied programming language theory."
Not only does your discipline matter a lot; who your adviser is matters as well. Sometimes the world experts on a field or subfield work at universities that are, in general, not thought as highly of. But if you can go study the problem you're interested in with the world expert on it, you should jump at that chance, even if it means going to some podunk school you'd never have heard of if your adviser weren't located there. These kinds of concerns are entirely ignored by rankings. Some rankings have breakdowns by sub-discipline, but there's no real way to make them nuanced enough to be particularly meaningful.
University rankings are also comically America-centric, and actual perception of the quality of universities varies widely by region, both within the US and across the world.
Finally, university ranking doesn't matter that much compared to other factors.