
Since Google Scholar is built on a search engine, it is useful for ranking researchers (if only by the h-index measure) within a topic or institution. I am conducting scientometric research and have found three key problems.

  1. Not all researchers (even top ones) are on Google Scholar, as it requires email verification before a profile can be built.
  2. The search engine adds the works of similarly named authors to the same profile.
  3. Some researchers deliberately add the work of others to improve the overall appearance of their profile (I came across cases that could not have been machine errors).

My questions are:

  • Is there any estimate of the participation rate of academics (at least at research universities) on Google Scholar? Is it 10% or 90%? Just an estimate.
  • Problems 2 and 3 above inflate the standing of some researchers, which disadvantages others. Is there any trick to obtain more accurate data?
Wrzlprmft
user82226
    Since your assessment is based on the h-index, I would add a fourth key problem: what constitutes a good h-index differs quite substantially between fields. – lighthouse keeper Oct 31 '17 at 12:30
    Google Scholar is generally terrible at curating citations and works, as it sometimes lumps together peer-reviewed and non-peer-reviewed publications, conferences, workshops, presentations, and abstracts, all in the same place. Maybe you should look into more carefully curated databases? Some fields, such as CS and physics, have built very well-curated databases. –  Nov 01 '17 at 00:23
    @glauc Said differently: Google Scholar is much better than other services in curating citations/work, because it does not distinguish peer reviewed journal papers from other types of publications, such as peer-reviewed conference papers, arXiv preprints, workshop abstracts, presentations, and lecture notes, all of which can be and are cited, and therefore have impact. More carefully curated databases in CS are significantly less complete: for example, DBLP doesn't track citations, and ACM DL only tracks a subset of publishers (notably excluding both LIPIcs and arXiv). – JeffE Nov 01 '17 at 00:53
    There is a prior question: how useful are bibliometrics in judging a researcher? There is substantial evidence that the answer is "not very". – David Ketcheson Nov 01 '17 at 07:23

1 Answer


You should probably consider Scopus. It has author profiles for most published academics, and many universities subscribe to the database (check your university library). It also provides greater curation, both of what gets listed on an author's profile and of the publications that can give rise to citations. With a single click you get various publication and citation metrics (e.g., total publications, total citations, h-index, and citation indices excluding self-citations).
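Whichever database you use, the h-index itself is simple to recompute from an exported list of per-paper citation counts, which lets you sanity-check what a profile reports (useful given problems 2 and 3 above). A minimal sketch; the citation counts below are invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still "supports" an h of its rank
        else:
            break  # counts are sorted, so no later paper can either
    return h

# Hypothetical citation counts for one author's seven papers:
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # → 4
```

Dropping suspicious entries from the list before recomputing shows how much a few wrongly attributed papers move the index.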

You occasionally have to be careful with academics whose names vary (e.g., married names, or with and without middle initials, combined with changes of institution). If they have not merged their records, an academic's output can be split across two or more author profiles.

So, if you want, for example, to rank all the academics in a given department, Scopus will be easier to use.

If you do want to use Google Scholar, you could use Publish or Perish, a free program that interfaces with Google Scholar. It can be used for authors who do not have a profile, although if their name is common, this can be fiddly.

Jeromy Anglim