J Family Med Prim Care, v.12(11), 2023 Nov, PMC10771139

The h-Index: Understanding its predictors, significance, and criticism

Himel Mondal

1 Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India

Kishore Kumar Deepak

2 Centre for Biomedical Engineering, Indian Institute of Technology, New Delhi, India

Manisha Gupta

3 Department of Physiology, Santosh Medical College, Santosh University, Ghaziabad, Uttar Pradesh, India

Raman Kumar

4 National President and Founder, Academy of Family Physicians of India, India

The h-index is an author-level scientometric index used to gauge the significance of a researcher's work. The index is determined from the number of publications and the number of times these publications have been cited by others. Although it is widely used in academia, many authors find its calculation confusing. Websites such as Google Scholar, Scopus, Web of Science (WOS), and Vidwan provide the h-index of an author. As this metric is frequently used by recruiting agencies and grant-approving authorities to assess the output of researchers, authors need an in-depth understanding of it. In this article, we describe both the manual calculation method of the h-index and the details of websites that provide an automated calculation. We discuss the advantages and disadvantages of the h-index and the factors that determine the h-index of an author. Overall, this article serves as a comprehensive guide for novice authors seeking to understand the h-index and its significance in academia.

Introduction

The h-index is a commonly used metric to measure the productivity and impact of academic researchers. It was first introduced in 2005, and since then, the h-index has become an important tool for evaluating researchers, departments, and institutions.[ 1 ] The calculation of the h-index is relatively simple, yet it confuses novice authors. There are several websites where researchers can find their h-index autocalculated. While the h-index has several advantages, such as providing a simple and objective measure of a researcher's impact, there are also some limitations to its use. For example, the h-index does not take into account the quality of the publications or the context in which they were cited.[ 2 ]

In this study, we will explore the calculation of the h-index, the websites where it is available, the advantages and disadvantages of using this metric, and the predictors that can increase the h-index of an author. By examining the strengths and weaknesses of the h-index, we hope to provide a comprehensive understanding of this important tool for evaluating scientific impact.

Calculation Method

The h-index is defined as the “highest number h, such that the individual has published h papers that have each been cited at least h times.”[ 3 ] For example, if an author has 10 papers and seven of those have been cited at least seven times each, then the h-index for that individual is 7. To make it easier, we present an example of how an author can calculate the h-index manually [ Table 1 ]. To calculate the h-index, we first sort the papers in descending order based on their citation counts. Then, we count the number of papers that have at least as many citations as their position in the list. The table footnote describes situations in which the h-index would become 8 or 9 in the future.

Table 1: h-index of an author who has 10 papers

Before arrangement          After arrangement
Paper      Citations        Paper number   Citations
Paper A    10               Paper 1        34
Paper B    20               Paper 2        20
Paper C    12               Paper 3        18
Paper D    18               Paper 4        15
Paper E    2                Paper 5        12
Paper F    15               Paper 6        10
Paper G    5                Paper 7*       8
Paper H    8                Paper 8        5
Paper I    34               Paper 9        4
Paper J    4                Paper 10       2

The h-index of the author is 7: seven of the papers have received at least seven citations each, while eight of the papers have not received at least eight citations each. The author's h-index will become 8 when “Paper 8” receives three or more additional citations. It will become 9 when “Paper 7” receives one or more, “Paper 8” four or more, and “Paper 9” five or more additional citations.
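The manual procedure described above (sort the citation counts in descending order, then find the largest rank whose citation count is at least the rank) can be sketched in a few lines of Python. The function name is ours for illustration; the citation counts are those of Table 1.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # descending citation counts
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still counts toward the index
        else:
            break     # all later papers have even fewer citations
    return h

# Citation counts of Papers A-J from Table 1 (before arrangement)
print(h_index([10, 20, 12, 18, 2, 15, 5, 8, 34, 4]))  # 7
```

Note that the order in which the counts are supplied does not matter, since the function sorts them first, mirroring the "after arrangement" column of Table 1.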

Where to Get the h-Index?

There are databases that provide h-index information for authors free of charge. Some of the most commonly used websites that calculate the h-index of an author are listed below, and the services that are freely available on each are shown in Table 2 .

Table 2: Websites that calculate the h-index of an author

Title             Information freely available
Google Scholar    Total articles, total citations, h-index, i10-index
Scopus            Total articles, total citations, year-wise publications and citations, h-index
Web of Science    Total articles, WOS-indexed articles, total citations, number of citing articles, h-index
ResearchGate      Total articles, total citations, year-wise citations, number of citing articles, h-index
Vidwan            Total articles, year-wise articles, type of publication, total citations, citations available from Crossref, number of coauthors, coauthor network, Altmetric scores, h-index

WOS: Web of Science

Google Scholar

It is perhaps the website most commonly used by scholars around the world. Google Scholar provides h-index information for authors based on the citations of their papers as indexed by Google Scholar.[ 4 ] It is a free service, and any researcher can open an account. If the researcher has an institutional email address, the account can be made public after verifying the email. Authors can observe the year-wise citation count for a quick idea of the trend of citations over the years. An example is shown in Figure 1 a.

Figure 1: Examples of the h-index of an author found in (a) Google Scholar, (b) Scopus, (c) Web of Science, and (d) ResearchGate, showing discordance in the h-index

Scopus

This database, provided by Elsevier, is another popular citation database that provides h-index information and other metrics, such as total citations and year-wise citations.[ 5 ] Researchers can search for any name from the home page by clicking on “View your author profile” and searching by surname and name. However, we suggest creating a free account to track your own articles and citations. An example is shown in Figure 1 b. From the same homepage, authors can also check the published articles and citation count of any journal by clicking on “View journal rankings.”

Web of Science

This database is maintained by Clarivate and is one of the most widely used citation databases. Previously, the Researcher ID was provided by Thomson Reuters.[ 6 ] Now, it is provided by Web of Science (WOS), which is maintained by the parent company Clarivate. Creating an account in WOS is free. After creating the account, an author can view their own details and also search for other researchers in the database. In the profile, WOS provides h-index information and other metrics, such as total citations, number of WOS-indexed articles, and number of citing articles. An example is shown in Figure 1 c.

ResearchGate

This social networking site for researchers provides h-index information and other metrics, such as total citations and year-wise citations. To get the h-index in ResearchGate, one needs to create an account.[ 7 ] Only published authors or invitees can create an account. Although ResearchGate suggests using an institutional email address, authors can open an account without one; in that case, they need to send proof of publication during account creation. In addition, those who are already on ResearchGate can send invitations to others to open an account. After logging in, the h-index is shown along with other metrics, as shown in Figure 1 d.

Vidwan

The Vidwan Expert Database and National Researcher's Network is a comprehensive platform designed to connect and showcase the expertise of scholars and researchers across various fields. It is a service provided under the information and communications technology initiatives of the Ministry of Education, India. The database is developed and maintained by the Information and Library Network Centre (INFLIBNET). This service is not open to all authors: any recipient of a national or international award, any postgraduate with 10 years of professional experience, a postdoctoral fellow, research scholar, professor (full, associate, or assistant), senior scientist, or anyone holding an equivalent teaching or research post can open an account. The website shows the h-index along with total articles, year-wise articles, type of publication, total citations, citations available from Crossref ( https://www.crossref.org ), number of coauthors, coauthor network, and Altmetric ( https://www.altmetric.com ) scores. A part of a Vidwan profile with the h-index of a researcher is shown in Figure 2 .

Figure 2: A part of a Vidwan profile showing the h-index and other metrics of the second author

Why Does the h-Index Differ?

The h-index can differ between sites. For example, one may see a higher h-index in Google Scholar than in Scopus or WOS.[ 8 ] Several factors explain this.

Different databases may have different coverage and indexing policies. Some databases may include more or fewer journals, conference proceedings, or other sources of academic literature. This can affect the number of citations that are included in the h-index calculation.

Different databases may have different time lags in their citation data, meaning that citations may not be indexed at the same time or may be indexed differently based on the date of publication. This can affect the h-index calculation for a temporary period, especially if a researcher has recently published a highly cited paper that has not yet been indexed by a particular database.

In addition to the above factors, there may be errors or inconsistencies in the citation data used to calculate the h-index, which can lead to differences in the resulting h-index across different databases.

Therefore, it is important to use multiple sources of h-index information and to be aware of the potential differences between sites. Google Scholar draws on the widest range of sources to calculate the h-index; hence, the h-index in Google Scholar may be the highest among those provided by the databases. One question remains: which value should be taken as the final h-index of an author? Although there is no simple answer, Google Scholar may be considered the provider of the most comprehensive h-index, as the impact of research is no longer limited to citations in journal articles indexed by a single bibliographic database.

Advantages of h-Index

The h-index has several advantages as a measure of research productivity and impact. It takes into account both the number of publications and the number of citations those publications have received, balancing quantity (the number of publications) against quality (the number of citations received) in the researcher's overall output. The h-index can be easily calculated using citation databases such as Google Scholar; because the service is free, any author can have the h-index calculated automatically there, and Scopus and WOS likewise provide the h-index free of charge. The h-index can be used to compare the productivity and impact of researchers across different disciplines. It is also less affected by outliers: because it considers the total number of papers that have been cited a certain number of times, it is less sensitive to individual highly or lowly cited papers. Finally, it provides a long-term measure of research impact, as it takes into account the entire career of the researcher rather than just a single paper or a recent burst of activity.[ 9 ]

Limitations of h-Index

Despite these advantages, the h-index is not without limitations. The h-index is criticized for favoring researchers who have been in the field for a longer period of time, as they have had more time to publish and accumulate citations. This can disadvantage early-career researchers. The h-index does not account for differences in citation practices between different fields or subfields, which can lead to unfair comparisons between researchers in different areas. The h-index relies on citation databases, which may not include all relevant citations. This can result in an inaccurate representation of a researcher's impact. However, this is common for all online calculated indices. The h-index includes citations to a researcher's own work, which can inflate the researcher's impact and may not accurately reflect their influence on the field. The h-index can be manipulated by self-citing excessively to increase the number of citations. The h-index does not take into account other important factors, such as the quality of publications, the impact of a researcher's work beyond citations, or their contributions to teaching and service.[ 10 ]

Hence, the h-index should be used in conjunction with other metrics and qualitative evaluations to get a comprehensive assessment of a researcher's productivity and impact.

Usage of h-Index in Academia

There is no rule of thumb for the level of h-index required for hiring professionals or promoting faculty. However, this index can be used by universities to compare impact among candidates for hiring or promotion. In addition, universities are commonly interested in recruiting researchers with higher publication impact, as that impact is a feather in the university's cap. A study by Wang et al .[ 11 ] in the Department of Surgery, University of Alabama at Birmingham, Birmingham, Alabama, United States, found that faculty had a median h-index of 6 at hiring, 11 at promotion from assistant to associate professor, and 17 at promotion from associate to full professor. In addition, Schreiber and Giustini studied 14 disciplines in North American medical schools and found that assistant professors have an h-index of 2 to 5, associate professors 6 to 10, and full professors 12 to 24.[ 12 ] A study by Kaur from India showed that the top publishing authors in the medical field from the All India Institute of Medical Sciences, Delhi, and the Postgraduate Institute of Medical Education and Research, Chandigarh, have h-indices of 15 and 21, respectively.[ 13 ] Nowak et al .[ 14 ] analyzed 13 medical specialties and found that the median h-index was 19.5. Further research and reviews are needed for a generalizable result; until then, the rule is “the more the merrier!”

Other Numbers and Indices Used in Academia

There are other author-level metrics that are used by various universities to evaluate research productivity and impact.

Some universities still use the total number of publications as a criterion for promotion. In addition, the total number of citations is also considered an indicator of research impact. This metric counts the total number of times an author's papers have been cited, regardless of the number of papers they have published. Furthermore, the average number of citations per paper for an author, which can provide insight into the overall quality and impact of their work, is sometimes considered. Table 3 shows the various other calculations and indices that are used.

Table 3: Other calculations and indices (calculated from data in Table 1 )

Metric             Value
Total articles     10
Total citations    128
Average citations  12.8
i10-index          6
g-index            10
m-value            1.4

The i10-index is another simple measure that indicates the number of papers that have received at least 10 citations each. It is shown in a Google Scholar profile along with the h-index of an author [ Figure 1 a].

The g-index is another metric that is not readily calculated on the database websites above, but one can calculate it manually. It gives more weight to highly cited papers. It is calculated by finding the largest number g such that the top g papers have received a total of at least g² citations. For example, in Table 1 , the author has a g-index of 10, as the cumulative citations at the 10 th paper exceed 10² [ Table 3 ]. If the author had an 11 th paper with even 0 citations, the g-index would be 11 (as the cumulative citations exceed 11²). However, if the author also had a 12 th paper with 0 citations, the g-index would remain 11, as the cumulative citations are below 12².[ 3 ]

The m-index is a metric that takes into account the h-index and the years of activity of an author.[ 15 ] Its calculation is simple: the h-index is divided by the number of years of activity. For example, if the author has been publishing the papers shown in Table 1 for the last 5 years, the m-value or m-index would be 1.4 (7/5) [ Table 3 ].
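Under the definitions above, the g-index, i10-index, and m-value for the Table 1 data can be computed in the same spirit as the h-index. A minimal sketch (the function name is ours; 5 years of activity is assumed, as in the text):

```python
from itertools import accumulate

def g_index(citations):
    """Largest g such that the top g papers have at least g*g citations in total."""
    ranked = sorted(citations, reverse=True)
    g = 0
    for rank, total in enumerate(accumulate(ranked), start=1):
        if total >= rank * rank:
            g = rank  # cumulative citations still reach the rank squared
    return g

citations = [34, 20, 18, 15, 12, 10, 8, 5, 4, 2]  # Table 1, after arrangement

i10 = sum(c >= 10 for c in citations)  # papers with at least 10 citations
m_value = 7 / 5                        # h-index divided by years of activity

print(g_index(citations), i10, m_value)  # 10 6 1.4
```

Appending one or two zero-citation papers to the list reproduces the behavior described in the text: with an 11th paper the g-index rises to 11, and with a 12th it stays at 11.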

It is important to note that no single metric can provide a comprehensive evaluation of a researcher's productivity and impact; these metrics should be used in combination with other qualitative evaluations. Furthermore, no index in academia is yet capable of judging the quality of a research paper.

Factors that Influence h-Index

Achieving a high h-index can be a long-term process that requires sustained research productivity and impact.[ 16 ] Here are some factors that have the potential to influence the h-index.

Publish in high-impact journals

Publishing in high-impact journals can help to increase the visibility and impact of one's research, leading to more citations and a higher h-index. High-impact journals are typically those with a large readership and reputation for publishing groundbreaking research. Articles published in these journals tend to be highly cited and can have a significant impact on their respective fields.[ 17 ]

Make research openly accessible

Making research freely and openly accessible can increase the visibility and impact of one's work, leading to more citations and a higher h-index. Open-access articles can reach a wider audience and potential readership, including researchers who might not have access to the article through traditional subscription-based methods. Additionally, open-access articles can be easily shared on social media platforms, blogs, and other online forums, which can increase their reach and promote their visibility.[ 18 ]

Collaborate with other researchers

Collaborating with other researchers can lead to more publications and citations, as well as exposure to new research ideas and methods. Collaboration can bring together researchers with different areas of expertise and skill sets, resulting in more comprehensive and impactful research. Collaborating with other researchers can increase the visibility of the research. Collaborators are likely to share the research with their networks, potentially increasing the readership and citations of the work.[ 19 ]

Balance quality and quantity

While the quantity of publications is important, it is more important to focus on producing high-quality research that is impactful and well regarded in the field. Higher-quality articles are more likely to be cited by other researchers, which can further increase their impact and visibility.[ 20 ] However, the number of papers also matters: for example, if an author has only five papers, even with 50,000 citations in total, the h-index would be only 5.

Stay active in the field

Attending conferences can provide opportunities to meet other researchers and learn about new research in the field. By presenting one's own research at a conference, researchers can receive feedback and ideas from other scholars, which can lead to new collaborations and research opportunities. Attending conferences also provides opportunities to network with other researchers. Delivering talks or lectures can also increase visibility and impact. Participating in scholarly discussions, such as by commenting on blogs or participating in online forums, can also increase visibility, which increases the chances of higher citations.[ 21 ]

Promote your research

Promoting research can be an effective strategy for increasing citations. There are several ways to promote research, including sharing it on social media, collaborating with other researchers, and seeking media coverage. Sharing research on social media can be an effective way to increase visibility and reach a wider audience. Researchers can share their work on their personal or professional social media accounts or on specialized platforms, such as ResearchGate or Academia.edu.[ 22 ] Seeking media coverage can also be an effective way to promote research and increase citations. Media coverage can increase the visibility of the research and attract the attention of other researchers who may be interested in citing the work. Researchers can also promote articles on their own websites for a higher reach in the field, which can lead to more citations and a higher h-index.[ 21 ]

Conduct timely research

By working on influential research and trending topics, researchers can increase the likelihood that their work will be cited by other researchers in the field. To conduct timely research, researchers need to stay up-to-date on the latest developments and emerging trends in their field. This may involve reading relevant literature, attending conferences, and collaborating with other researchers. By staying current with the latest research, researchers can identify gaps in the field and opportunities for making meaningful contributions.[ 23 ]

It is important to note that these strategies should not be used to game the system or artificially inflate one's h-index, but rather as ways to increase the impact and visibility of one's research in a genuine and sustainable way.

Institutional Level Data

The institutional h-index is not readily available in Google Scholar. However, one can manually gather the total publications from the institution and the citations to each published article from the institutional repository (if available) to calculate the h-index of the institution; the calculation method remains the same. Institutions that do not have their own repository can collect publication and citation data from Google Scholar. If the institution provides email addresses to its employees, and the teachers or researchers verify those addresses, the data can be collected from Google Scholar as follows: opening the URL https://scholar.google.com/citations?mauthors=aiimsdeoghar.edu.in&hl=en&view_op=search_authors (for an institution whose domain is aiimsdeoghar.edu.in) lists all the authors who have verified their accounts, with their papers and citations.[ 24 ] These data can be used to calculate the central tendencies of the h-index of the authors in that institution. A similar method can be used to extract data from other databases, such as Scopus, to compute the institution-level h-index.[ 25 ] Institutions may also open a user account as a researcher in Google Scholar, as shown in Figure 3 , and add published articles via “Add article manually” (after clicking the “+” button) to get an institution-level h-index.

Figure 3: A profile of an institution in Google Scholar
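Once publication and citation counts have been pooled across an institution's verified authors, the computation is the same as for a single author. A minimal sketch with hypothetical data (the author names and citation counts below are illustrative only, not taken from any real profile):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank),
               default=0)

# Hypothetical per-author citation counts collected from a database
institution = {
    "Author 1": [12, 7, 3],
    "Author 2": [25, 9],
    "Author 3": [5, 4, 4, 1],
}

# Pool every paper's citation count, then apply the usual definition
all_papers = [c for papers in institution.values() for c in papers]
print(h_index(all_papers))  # 5
```

In practice the per-author lists would come from an exported database query rather than being typed in; the aggregation step is the only part that differs from the author-level calculation.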

The h-index of global institutions can also be found at https://exaly.com/institutions/citations , which hosts data on 53,307 institutions along with their h-indices. Exaly is a nonprofit initiative that aims to fill the gap left by the lack of an inclusive and accessible collection of academic papers and scientometric information; it is referred to as a project rather than an organization to ensure independence from commercial motives. Indian regional data are available on a website, https://www.indianscience.net/list_inst.php , that provides data up to 2019. This website extracted its data from Dimensions ( https://www.dimensions.ai ) and Altmetric ( https://www.altmetric.com ).[ 26 ]

In conclusion, the h-index is a widely used metric for measuring the productivity and impact of researchers. While it has limitations, such as its inability to capture the quality of publications and its potential for manipulation, the h-index remains a useful tool for evaluating the performance of individual authors and for comparing researchers and institutions. Hence, we have discussed the potential predictors of the index along with its calculation methods, and highlighted the use of the h-index in conjunction with other metrics and factors for evaluating research productivity and impact.

Self-Assessment Multiple-Choice Questions

Five questions are available in Table 4 for self-assessment of your learning from this article.

Self-assessment multiple-choice questions

Q1. Where can an author get the h-index?

A) Google Scholar

B) Scopus

C) Web of Science

D) All of the above

Q2. What do we need to calculate the h-index?

A) Published articles

B) Citations to the published articles

C) Arrangement of articles according to citations from higher to lower

D) All of the above

Q3. An author has published five papers that have been cited as follows: 21, 12, 4, 2, and 1. What will be the h-index of the author?

A) 5

B) 8

C) 3

D) 2

Q4. An author has published 10 papers that have been cited as follows: 9, 7, 3, 11, 4, 8, 2, 12, 6, and 1. What will be the h-index of the author?

A) 5

B) 6

C) 2

D) 4

Q5. What is false about the h-index?

A) Different databases may show different h-index of an author

B) ResearchGate profile shows the h-index of an author

C) Self-citations can increase the h-index

D) h-index calculation needs the years of activity of an author

Q1: The correct answer is D. Google Scholar, Scopus, and Web of Science all show the h-index of an author.

Q2: The correct answer is D. We need the total papers and their citations, arranged from higher to lower, for ease of identifying the h-index.

Q3: The correct answer is C. Three papers of the author have received at least three citations each.

Q4: The correct answer is B. Six papers of the author have received at least six citations each.

Q5: The correct answer is D. The h-index takes into account only papers and their citations; it is the m-value that considers the years of activity of an author.
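The answers to Q3 and Q4 can be checked with the same sorting rule described under “Calculation Method” (the function name is ours for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank),
               default=0)

print(h_index([21, 12, 4, 2, 1]))                 # Q3: 3
print(h_index([9, 7, 3, 11, 4, 8, 2, 12, 6, 1]))  # Q4: 6
```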

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

Katie Schulman

Improve your h-index with these 10 practical strategies.

I recently came across an interesting discussion on ResearchGate 1 about how to get cited more. Since it’s an important measure for academics – often in the form of the so-called h-index – I looked at a few sites to learn about practical strategies.

Being cited is one way of showing the impact of your research. It’s also not easy, as an enormous number of papers is published each year. Moreover, there are language issues. For instance, if I write in German, I make it easier for my German-speaking students and colleagues to read and cite my paper (not all of them might be comfortable in English). On the other hand, publishing in English means people from many countries can read – and cite – my work.

Getting cited is a slow process. Publications in peer-reviewed papers can take a long time, which means it takes a while to get citations.

The h-index

One popular measure for citations is the h-index. The h-index combines how many papers you publish and how often these papers got cited into a single metric 2 . The h-index is available for instance on Google Scholar. There are also apps, like ‘Publish or Perish’.

Currently, Google Scholar shows my h-index as 6, with 139 citations.

What is a good h-index? That seems to vary widely by field 3 . For instance, Holosko & Barner 4 showed that assistant professors in social work had an average h-index of 5, associate professors of 8 and full professors of 16. In psychology, these numbers were 6, 12 and 23. Albion 5 showed that in education, associate professors had an average h-index of 6.2 and professors of 10.6. Researchers at the London School of Economics and Political Science (LSE) 6 found that lecturers in geography had an average h-index of 3.73, senior lecturers of 5.75 and professors of 6.50. So, mine isn’t that bad : ) .

Ways for improving the h-index

Still, there’s room for improvement. On the web, there’s a lot of advice on how to improve your h-index.

Make your future publications citation-friendly

A lot of the strategies seem to focus on things that you can do before submitting the paper for publication.

  • Choose the right venue : The LSE paper 6 also found that it matters in what type of venue you publish. For geography, academic articles got way more cites than conference papers. Others suggest that accessibility plays a role. Online, free-of-charge papers get more citations than a book that needs to be ordered from the library 9 , 13 , 14 . Peer-reviewed papers (especially in high-impact journals) are not just better regarded, but also might help boost your h-index 13 . Publishing in venues of different disciplines reaches a larger audience 13 . However, if there’s a venue most people in your field read, try to get in there 14 .
  • Check the attribution: Use ORCID or a similar service to make sure all the papers get attributed to you 7 , 13 , 14 . Some say you should have a consistent version of your name for all publications 13 , 14 . However, that doesn’t work if you choose to take your partner’s name upon marriage, for instance – something I’m looking forward to.
  • Consider adding citation-friendly types of publications to your list: Publishing the results of your projects is important, of course. However, review papers and how-to (methodology) papers get cited more widely than research articles 9 , 1 , 13, 14 . So in-between those results-focused papers, consider doing a review paper. It’s especially beneficial to be the first to do a review in a particular area 14 .
  • Make your paper useful: The more useful to others, the more likely they are to cite it 9 . This can also mean framing your research in relation to what’s “hot” at the moment in your field 13.
  • Make your paper SEO friendly: This can include making sure keywords (that people actually search for) are in the title, body and abstract; having proper meta-data and including the article in places Google crawls and accepts as authoritative 13 ,14  such as ResearchGate, Academia and your institution’s repository.
  • Citing yourself is obvious – a new paper can be one of the ways to get the word out about your previous research and earn self-cites. But please, don’t overdo it by including citations that are not relevant – despite what this Indonesian example of extreme self-citation on Academia claims 15 .
  • Ebrahim, Gholizadeh and Lugmayr 13 summarize research that shows “[…] a ridiculously strong relationship between the number of citations a paper receives and the number of its references […]”. So make sure to include a strong literature review.
  • According to a study by Hudson 8 , titles should be short. However, Jamali & Nikzad found no correlation, and Habibzadeh & Yadollahie showed more favorable results for longer titles 12 . Ebrahim, Gholizadeh & Lugmayr conclude that optimal title length depends on the field 13 . In a study by ResearchTrends, “titles between 31 to 40 characters were cited the most” 12 .
  • Hudson also found that you shouldn’t use questions 8. A study by Tse & Zhou suggests you should not use hyphens 8. Jamali & Nikzad found colons and question marks not to work well 12, 13, 14. Paiva et al. say no colon, hyphen or question mark 13. In contrast, Griffith says that colons are good 14. The ResearchTrends study found that “the ten most cited papers […] did not contain any punctuation at all” 12.
  • In another study, a team from Italy found that you shouldn’t put “country names in the title, abstract or keywords” 8. Paiva et al. also advise against country names in the title 13.
  • There’s no need to try to be funny or play on words. A study by Sagi & Yechiam found that “articles with highly amusing titles […] received fewer citations” 12.
  • Think about the authors: In many situations, you don’t have a choice about who the authors are. For instance, if you’re working on a certain project together, then that’s most likely who the authors will be. However, Hudson found that having too many authors isn’t good for citations 8, while Wuchty et al. found “that team-authored articles typically produce more frequently cited research than individuals” 13. Research shows that having co-authors from different countries is good for attracting citations 13, 14. Some especially suggest that super-well-known (and cited) co-authors can boost the citations for your paper 1, 14 – so if you can get a Nobel laureate to be co-author on your paper, go for it 13…

Get more citations for already published work

But what if you have, as I have, worked for a couple of years and already have a list of publications? Fewer strategies seem to deal with how to increase citations for past work. Two ways I found are:

  • Get known: Since well-known co-authors can boost your h-index, one could argue that becoming more well-known yourself would also boost your h-index. This can include e.g. conference presentations (and schmoozing with colleagues), social media, blogging and ‘branding yourself’ 7, 13.
  • Get your work out there: This can mean spreading the word about your work e.g. on blogs and your website; social media such as ResearchGate, Academia and LinkedIn; or even mass media 1, 13, 14. You can even send out information about your work to some key people 13, 14. Learn about marketing and apply that to your work 13. Others would likely disagree about the social media aspect. For instance, Cal Newport often argues against using social media 10, and instead advocates for doing ‘deep work’ which leads to better quality work 11. What research does see as important is making the papers available: several studies have found a positive impact of self-archiving 13. So keep your lists up to date and easy to find online 13. This can even mean including your work in relevant Wikipedia articles – or writing a Wikipedia article on your topic yourself 13, 14.

Reviewing my existing strategy

Looking through my publication list, one of the things I noticed is the titles. I do have a book chapter on GIS education that has Germany in the title. There are two papers with question marks: one I published with some of my students on how much of geography education research actually ends up in classroom practice, and my article on Public Judaism. Several titles have other punctuation: “.”, “–” and “:”. In general, many of the titles are quite long.

According to Google Scholar, my best-performing paper is a review paper on GIS education that I’ve authored with international, well-known colleagues.

In terms of ‘getting the word out there’, I have written on Wikipedia before, but haven’t used that systematically. I also notice that while I keep the publication list on my website always up to date, there’s often a lag till I’ve updated my profiles on ResearchGate, Academia, Google Scholar etc.

Planning to improve my h-index

I have several publications to write in the upcoming semester – among other things about the #TCDTE project and teacher conceptions. Based on the 10 strategies, I might have to consider writing a review-style paper too.

I’m curious to see how much applying the 10 strategies will improve my h-index in the coming months.

Related Posts

Lessons learned from the online presence of 16 geography education professors

About the author


Katie Viehrig


Thanks for this very interesting review. I’ll paste it to my networks, especially directed to young researchers, but not only… Gil Mahé from IRD Montpellier France


Thanks so much! Glad you find it useful : )



What is a good h-index? [with examples]

  • What is an h-index?
  • How to calculate your h-index
  • Now let’s talk numbers: what h-index is considered good?
  • What is a good h-index for a PhD student, postdoc, assistant professor, associate professor, or full professor?
  • Frequently asked questions about the h-index
  • Related articles

An h-index is a rough summary measure of a researcher’s productivity and impact. Productivity is quantified by the number of papers, and impact by the number of citations the researchers' publications have received. It can be useful for identifying the centrality of certain researchers as researchers with a higher h-index will, in general, have produced more work that is considered important by their peers.

➡️ You can learn all about the h-index, why it is important, and how to calculate it, in this guide: The ultimate how-to-guide on the h-index

As Jorge E. Hirsch, the creator of the h-index, describes it, the index h is “the number of papers with citation number ≥ h.” While this formula might not explain much on its own, it makes clear that any researcher can calculate their own h-index. Below are some guides that will help you find or learn how to calculate your h-index:
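Hirsch’s definition translates directly into a few lines of code. Here is a minimal Python sketch (the function name and the sample citation counts are my own illustration, not from Hirsch’s paper):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still satisfies "citations >= rank"
        else:
            break      # sorted descending, so no later paper can qualify
    return h

# A researcher with 7 papers cited 10, 8, 5, 4, 3, 0 and 0 times:
print(h_index([10, 8, 5, 4, 3, 0, 0]))  # 4
```

Only four of those papers have four or more citations each, so the h-index is 4 regardless of the 10-citation outlier.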

➡️  Learn how to calculate your h-index on Google Scholar

➡️  Learn how to calculate your h-index using Scopus [3 steps]

➡️  Learn how to calculate your h-index using Web of Science

➡️  The ultimate how-to-guide on the h-index (to calculate it yourself)

According to Hirsch, a person with 20 years of research experience with an h-index of 20 is considered good, 40 is great, and 60 is remarkable.

But let's go into more detail and have a look at what a good h-index means in terms of your field of research and stage of career.

It is very common for supervisors to expect up to three publications from PhD students. Given the lengthy process of publication and the fact that once the papers are out, they also need to be cited, having an h-index of 1 or 2 at the end of your PhD is a big achievement.

Given that there is no defined time for how long postdoctoral training can go on, let's assume that an average postdoc is able to publish one paper a year. Building on the papers already published during their PhD studies, there is a good chance that after two years of postdoctoral training they will have a total of 5 papers. If each of these 5 papers has been cited 5 times, that makes an h-index of 5.

Below you will find a sample of assistant professors and their h-index ratings:

Name                    University   Research area                 h-index
Yuan Lu                 Yale         Cardiovascular epidemiology   30
Mohammad Alizadeh       MIT          Computer networks             44
Manuel A. Rivas         Stanford     Human genetics                39
Mark L. Hatzenbuehler   Columbia     LGBT health disparities       66
Martin J. Aryee         Harvard      Statistics                    49

Below you will find a sample of associate professors and their h-index ratings:

Name               h-index   University   Research area
Ivan P. Gorlov     27        Dartmouth    Bioinformatics
Arvind Narayanan   41        Princeton    Information privacy
Yajaira Suarez     44        Yale         MicroRNAs
Richa Saxena       63        Harvard      Genetics
Alon Keinan        45        Cornell      Computational genomics

Below you will find a sample of full professors and their h-index ratings:

Name                       h-index   University                   Research area
James E. Hansen            100       Columbia                     Climate change
Olivia S. Mitchell         78        University of Pennsylvania   Economics
Fredo Durand               88        MIT                          Computer graphics
Li-Jia Li                  35        Stanford                     Machine learning
Enrique Rodriguez Boulan   84        Cornell                      Cell polarity

These numbers shouldn’t be taken as the yardstick of comparison, as every researcher has different experiences, and the h-index is not the only measure that defines them. Hirsch states that “obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual.”

In conclusion, having a good h-index is great, but every researcher's case is multifaceted. There are plenty of other aspects to consider while evaluating a researcher.

An h-index is a rough summary measure of a researcher’s productivity and impact . Productivity is quantified by the number of papers, and impact by the number of citations the researchers' publications have received.

Google Scholar can automatically calculate your h-index; read our guide How to calculate your h-index on Google Scholar for further instructions.

Even though Scopus needs to crunch millions of citations to find the h-index, the look-up is pretty fast. Read our guide How to calculate your h-index using Scopus for further instructions.

Web of Science is a database that has compiled millions of articles and citations. This data can be used to calculate all sorts of bibliographic metrics including an h-index. Read our guide How to use Web of Science to calculate your h-index for further instructions.

Jorge E. Hirsch created the h-index in 2005. Here is the paper published in PNAS in which he outlines the h-index in detail.



University of Missouri Libraries


Maximizing your research identity and impact


  • h-index for journals
  • h-index for institutions
  • Computing your own h-index
  • Ways to increase your h-index
  • Limitations of the h-index
  • Variations of the h-index


h-index for researchers: definition

  • The h-index is a measure used to indicate the impact and productivity of a researcher based on how often his/her publications have been cited.
  • The physicist Jorge E. Hirsch provides the following definition for the h-index: A scientist has index h if h of his/her N_p papers have at least h citations each, and the other (N_p − h) papers have no more than h citations each. (Hirsch, JE (15 November 2005) PNAS 102(46): 16569–16572)
  • The h -index is based on the highest number of papers written by the author that have had at least the same number of citations.
  • A researcher with an h-index of 6 has published six papers that have been cited at least six times by other scholars.  This researcher may have published more than six papers, but only six of them have been cited six or more times. 

Whether an h-index is considered strong, weak or average depends on the researcher's field of study and how long they have been active. The h-index of an individual should be considered in the context of the h-indices of equivalent researchers in the same field of study.

Definition: The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a journal with an h-index of 20 has published 20 articles that have been cited 20 or more times.

Available from:

  • SJR (Scimago Journal & Country Rank)

Whether an h-index is considered strong, weak or average depends on the discipline the journal covers and how long it has published. The h-index of a journal should be considered in the context of the h-indices of other journals in similar disciplines.

Definition: The h-index of an institution is the largest number h such that at least h articles published by researchers at the institution were cited at least h times each. For example, if an institution has an h-index of 200, its researchers have published 200 articles that have been cited 200 or more times.

Available from: exaly

In a spreadsheet, list the number of times each of your publications has been cited by other scholars.

Sort the spreadsheet in descending order by the number of times each publication is cited. Then count down the list until an article's rank exceeds its citation count; your h-index is the last rank at which the rank is still less than or equal to the times cited.

Article   Times Cited
1         50
2         15
3         12
4         10
5         8
6         7    ← h-index is 6
7         5
8         1
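The same sort-and-count procedure is easy to automate. A short Python sketch using the citation counts from the example table above (the script is my own illustration, not part of the original guide):

```python
# Citation counts from the example table above
times_cited = [50, 15, 12, 10, 8, 7, 5, 1]

# Sort descending, then count the ranks where rank <= times cited
ranked = sorted(times_cited, reverse=True)
h = sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)
print(h)  # 6, matching the worked example
```

At rank 6 the paper has 7 citations (6 ≤ 7), while at rank 7 it has only 5 (7 > 5), so the count stops at 6.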

How to Successfully Boost Your h-index (Enago Academy, 2019)

Glänzel, Wolfgang. On the Opportunities and Limitations of the h-index, 2006.

  • h-index based upon data from the last 5 years
  • i10-index: the number of articles by an author that have at least ten citations. The i10-index was created by Google Scholar.
  • m-index: used to compare researchers with different lengths of publication history. It is defined as:

m-index = h-index / (number of years since the author's first publication)
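As a worked example of the m-index formula above, here is a minimal Python sketch (the function name and the example numbers are hypothetical):

```python
def m_index(h, years_since_first_publication):
    """m-index = h-index divided by years since the author's first publication."""
    return h / years_since_first_publication

# A hypothetical researcher with an h-index of 12 whose first paper
# appeared 10 years ago:
print(m_index(12, 10))  # 1.2
```

Because the denominator grows every year, the m-index rewards researchers whose h-index has grown quickly relative to their career length.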

Using Scopus to find a researcher's h-index

Additional resources for finding a researcher's h-index

Web of Science Core Collection or Web of Science All Databases

  • Perform an author search
  • Create a citation report for that author.
  • The h-index will be listed in the report.

Set up your author profile in the following three resources.  Each resource will compute your h-index.  Your h-index may vary since each of these sites collects data from different resources.

  • Google Scholar Citations: Computes h-index based on publications and cited references in Google Scholar.
  • Researcher ID: Computes h-index based on publications and cited references in the last 20 years of Web of Science.
  • Last Updated: Jul 8, 2024 3:20 PM
  • URL: https://libraryguides.missouri.edu/researchidentity



What is a Good H-index?


You have finally overcome the exhausting process of a successful paper publication and are thinking that it’s time to relax for a while. Maybe you are right to do so, but don’t take very long… you see, just like the research process itself, pursuing a career as an author of published works is also about expecting results. In other words, today there are tools that can tell you whether your publications are reaching as many people as you believed they would. One of the most common tools researchers use is the H-index score.

Knowing how impactful your publications are among your audience is key to defining your individual performance as a researcher and author. This helps the scientific community compare professionals in the same research field (and career length). Although scoring intellectual activities is often an issue of debate, it also brings its own benefits:

  • Inside the scientific community: A standardization of researchers’ performances can be useful for comparison between them, within their field of research. For example, H-index scores are commonly used in the recruitment processes for academic positions and taken into consideration when applying for academic or research grants. At the end of the day, the H-index is used as a sign of self-worth for scholars in almost every field of research.
  • From an individual point of view: Knowing the impact of your work among the target audience is especially important in the academic world. With careful analysis and the right amount of reflection, the H-index can give you clues and ideas on how to design and implement future projects. If your paper is not being cited as much as you expected, try to find out what the problem might have been. For example, was the research content irrelevant to the audience? Was the selected journal wrong for your paper? Was the text poorly written? For the latter, consider Elsevier’s text editing and translation services in order to improve your chances of being cited by other authors and improving your H-index.

What is my H-index?

Basically, the H-index score is a standard scholarly metric that relates the number of papers an author has published to the number of times those papers have been cited. The index is the number of papers (H) that have each been cited at least H times, setting aside those that have not been cited as often. See the table below for a practical example:

Paper   Citations
1       79
2       71
3       45
4       36
5       10
6       7    ← H-index = 6
7       6
8       3
9       1

In this case, the researcher scored an H-index of 6, since he has 6 publications that have been cited at least 6 times. The remaining articles, or those that have not yet reached 6 citations, are left aside.

A good H-index score depends not only on a prolific output but also on a large number of citations by other authors. It is important, therefore, that your research reaches a wide audience, preferably one to whom your topic is particularly interesting or relevant, in a clear, high-quality text. Young researchers and inexperienced scholars often look for articles that offer academic security by leaving no room for doubts or misinterpretations.

What is a good H-Index score journal?

Journals also have their own H-Index scores. Publishing in a high H-index journal maximizes your chances of being cited by other authors and, consequently, may improve your own personal H-index score. Some of the “giants” in the highest H-index scores are journals from top universities, like Oxford University, with the highest score being 146, according to Google Scholar.

Knowing the H-index score of journals of interest is useful when searching for the right one to publish your next paper. Even if you are just starting as an author, and you still don’t have your own H-index score, you may want to start in the right place to skyrocket your self-worth.

See below some of the most commonly used databases that help authors find their H-index values:

  • Elsevier’s Scopus : Includes Citation Tracker, a feature that shows how often an author has been cited. To this day, it is the largest abstract and citation database of peer-reviewed literature.
  • Clarivate Analytics Web of Science : a digital platform that provides the H-index through its Citation Reports feature.
  • Google Scholar : a growing database that calculates H-index scores for those who have a profile.

Maximize the impact of your research by publishing high-quality articles. A richly edited text with flawless grammar may be all you need to capture the eye of other authors and researchers in your field. With Elsevier, you have the guarantee of excellent output, no matter the topic or your target journal.



My h-index is very low and I want to increase it

Nowadays universities judge faculty based on their h-index; to be promoted from assistant to associate professor, or to be hired as an associate at some universities, you need an h-index of at least 10. I am struggling to increase mine. I have tried all the tips I found online and shared my papers on social media such as ResearchGate and LinkedIn.

How can I increase my h-index otherwise?

  • bibliometrics
  • self-promotion


  • Answers in comments and side discussions have been moved to chat . Please read this FAQ before posting another comment. –  Wrzlprmft ♦ Commented Jan 16, 2020 at 9:15
  • 1 Hint: Check your h-index with and without self-citations. Some hiring committees are smart enough to know the difference, and many of your possible competitors might not look as good as you thought they do. –  Karl Commented Jan 16, 2020 at 9:22
  • 2 It's very similar to gaining reputation on SE. Looks like you have started! ;-) –  Peter - Reinstate Monica Commented Jan 17, 2020 at 12:53

7 Answers

Write papers that people will want to cite. In particular:

When you come up with a new concept/technique, write a good explanatory section, so that people will refer to your paper for an in-depth explanation.

Make something useful, like a piece of software or a benchmark that people working in your field can use. Write a paper that people using your work can cite. For example, the introductory page of PonyGE2 contains a "how to cite PonyGE2" text.

When you make something people can cite, make it easy to cite. Include snippets for BibTeX and other citation systems that people can easily copy-paste. (BibTeX in particular is unpleasant to do entirely by hand; so take that work off your readers' hands.)

Use good titles for your paper, write good abstracts and pay attention to the keywords. These things matter a lot for whether people will find and decide to read your paper, and that's necessary to get them to cite it.

Collaborate with a lot of different people in your field. If you write something good with X, chances are X and X's colleagues are going to be citing that paper later on. Also, people looking at X's papers get to see your paper.

Collaborate with famous people in your field. They probably got famous by being good at it (so learn from the best!), and they get published in the more prestigious journals. That's good for increasing your citation odds.

Write about interesting things that other people will want to follow up on.

Supervise good students, teach them well, and co-author their publications. Sometimes the student exceeds the master, but the master also gets a boost from the student's success.


  • 10 "BibTex in particular is unpleasant to do entirely by hand" - just use doi2bib.org –  yar Commented Jan 15, 2020 at 23:00
  • 4 @yar Or Google Scholar. –  nick012000 Commented Jan 16, 2020 at 7:20
  • 1 Writing surveys is an option, if there aren't already dozens on the topic. –  Marc Glisse Commented Jan 16, 2020 at 8:49
  • 9 @yar or Zotero or Mendeley. They can all export citations. I don't think "making it easy to cite" is gonna make a difference. The collaborate suggestions are the best ones. H-index doesn't care about author contribution. If you are on the paper it counts for your index! SO just get on lots of papers! –  jerlich Commented Jan 16, 2020 at 8:57

In addition to writing papers that others will want to cite, another important factor is simply the number of publications that you have – if you look at the profiles of researchers with h-indices of 30 or more, they typically have total publication counts of 100 or more, with a highly skewed distribution of the number of citations.

Another issue is that citation counts build up over time. In some disciplines, papers published 20 or more years ago are still being heavily cited, while in other disciplines papers are typically only cited for a couple of years before they become outdated. If you're in a discipline where the citations come in over a very long time, then it can take decades for your h-index to build up.


  • 3 This is a known weakness of H-index: what constitutes a good score differs noticeably by (sub)field. –  ObscureOwl Commented Jan 16, 2020 at 9:07

The most sustainable and rewarding "tip" is to do good work which is interesting to your peers, and present it well. All other approaches are merely tactics that will only get you so far. I still include some of them in this answer, since they might be useful to increase your h-index to 10 in a given timeframe.

Self-cite. While a citation record that mainly consists of self-citations might raise some questions, it's an accepted way to get started with building up your record.

Cite other people. Cite active researchers in your field broadly, so they notice you and cite you back. Don't shy away from including multiple references to the same group of authors, so they notice you even more.

Find a "gold-mine" topic. Some topics are more amenable to extensions and follow-up papers than others. Once you have such a topic, each new paper allows you to ethically cite and discuss all previous papers in the same line of research.

Spin-off publications. In some fields, it's OK to apply a tactic which is known as "salami publication" in other fields: publish separate papers which are closely related to another work, for example, a tool or a dataset developed in the context of the work.

An important point is not to overdo these tactics. For example, in a book recently published in my field, each chapter contains a reference list with a significant (n>10) number of self-citations. At a certain institute, each PhD thesis contains a separate "Further reading" bibliography with dozens of references to the institute's papers. I would surely bring such cases up if I were involved in a relevant hiring committee and the topic of research metrics came up.


  • 5 H index is often (also) calculated excluding self-citations. –  user9482 Commented Jan 15, 2020 at 11:30
  • 34 I find many of your recommendations border-line bad practice and they would reflect negatively on a candidate in my eyes. –  user9482 Commented Jan 15, 2020 at 11:32
  • 1 @Roland Indeed, h-index without self-citations is even mandatory to specify for some funding programs (like Marie Curie). However, for the vast majority of job applications, it's not, and there's no convenient tool that calculates this number for you like Google Scholar does. I don't remember big discussions about self-citations in hiring committees I was involved in. –  lighthouse keeper Commented Jan 15, 2020 at 11:34
  • 3 @Kimball The application process for Marie Skłodowska-Curie Individual Fellowships (one of the most prestigious grants for young researchers in Europe) requires the applicant to specify their h-index excluding self-citations. I have seen the h-index being considered for hiring decisions in computer science. –  lighthouse keeper Commented Jan 15, 2020 at 21:02
  • 3 @Roland: Those are (common) bad practices indeed, but they're simply symptoms of "publish or perish". :-/ –  Eric Duminil Commented Jan 16, 2020 at 9:59

Obsession with the h-index, in particular among young researchers, is an extremely unfortunate and destructive aspect of the current environment in academia. My suggestion: Don't look it up for the next five years.

Look at the positives: You may end up writing interesting, novel papers that get you recognized, hired and/or promoted.

The down-side: Maybe you neglect to write large numbers of papers on fashionable subjects that can drive up your h-index, some idiotic hiring or promotion committee will punish you for it, and you miss out on that great job.

But then again: Maybe you do write large numbers of papers on fashionable subjects, but they fail to drive up your h, or that hiring committee actually has some sense, considers your work to be boring me-too work, and you still don't get the great job at FancyU.

In what situation would you rather be?

Thomas (h=xx)


The H-index is largely a function of how many large projects you are involved in. Without having a large number of co-authors who all write papers, it is impossible to be competitive. If you write a brilliant paper that earns you the Nobel Prize, your h-index will only increase by one. In the meantime, I know one telescope group where everyone who has ever worked on the telescope is automatically added to all future papers produced from the telescope observations. Those will see their h-index steadily increase without effort.

Unfortunately, Google Scholar considers the h-index to be the only measure of scholarly success. AdsAbs also offers normalized citations as a measure, where the number of citations is divided by the number of authors on a paper. If your normalized citation count is high, you could use this as an argument. And don't write a paper that you expect to get fewer than ten citations.


  • 2 What, and throw away the results of a completed work, just because you know it will never increase your H index? –  Karl Commented Jan 16, 2020 at 9:16
  • 9 This is needlessly negative. H-index never goes down for having a little-cited paper so there is no harm in writing an obscure paper. –  ObscureOwl Commented Jan 16, 2020 at 13:18
  • 3 Also, some important papers don't get cited a lot directly (for example, because everyone is citing a follow-up publication), but that doesn't mean they weren't important. H-index isn't perfect, it isn't and it probably never will be the only impact metric. Don't fall into the trap of "writing towards the test". –  ObscureOwl Commented Jan 16, 2020 at 13:19
  • 2 @ObscureOwl re: "no harm:" there is a significant time cost to writing that paper, that authors might be able to spend writing a paper which is more useful to the greater scientific community. I'm not saying "don't write them" but I don't think " no harm" is really accurate. –  WBT Commented Jan 16, 2020 at 14:04
  • 1 It's better to write one significant paper than two insignificant papers. –  Norbert S Commented Jan 16, 2020 at 18:04

I have become a bit cynical from what I have observed over the past years in my university. Tactics to boost your H-index:

  • Force your graduate students to publish their work with you as co-author (or better, promote yourself to first author, since supervising is hard work). Numbers count.

  • Be a co-author as often as you can! Promise to lead a follow-up publication but never live up to your promises. People will buy it over and over again. Some of your fellow researchers will manage to get cited successfully, and you will benefit from their success.

  • In the incidental situation where you are first author, invite a co-author with a high H-index. Your work may actually get accepted.

  • When reviewing papers (anonymously), force the authors to cite your important work even if it is not relevant to their research.

  • Supervise as many PhD students as is allowed and be their co-author.

  • Participate in (literature) review papers. Once accepted, these are often cited frequently.

My advice: Find a faculty elsewhere. You are more than your H-index. Own your research and publications. Be proud of your work.


Consider what gets cited the most: reviews.

If you want to increase your h-index, then in many areas reviews are the way forward. Reviews are often invited, but you can solicit the invitation.


  • 2 Actually, I suspect that reviews are one of the worst ways to improve your H-index. It means spending a lot of time on one paper that might get hundreds of citations, when for the purposes of an H-index you'd probably be better off writing three simple papers that get a handful of citations each. Of course, this is one reason the H-index is so silly. –  Rococo Commented Dec 30, 2021 at 2:33
  • We are discussing H-indices at the ca. 10 level. Increasing this in a timely fashion requires quite a lot of citations, well above the average for most journals. An average research paper in, say, Angewandte Chemie (IF = 15, much of which comes from their reviews) would require many years to garner ca. 10 citations, whereas a review there would typically pass this in ca. 6 months. Remember, IF is the average number of citations in two years, and most papers are barely cited. A strategy of 3 research papers instead of a review only works if you have reasonable research in a reasonable journal. –  The Polymer Chemist Commented Dec 31, 2021 at 19:59


Finding Your H-index (Hirsch Index) in Web of Science

What is the h-index?

"An index that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist." (The h-index was proposed in 2005 by Jorge E. Hirsch, a physicist at the University of California, San Diego. It is sometimes referred to as the Hirsch index or Hirsch number.)

  • e.g., an h-index of 25 means the researcher has 25 papers, each of which has been cited 25+ times.
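The rule of thumb above can be checked directly: sort a researcher's citation counts in descending order and find the largest rank h whose paper still has at least h citations. A minimal sketch in Python (the citation numbers are made up for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # this paper still supports a larger h
        else:
            break      # all remaining papers have even fewer citations
    return h

# 25 papers cited 30 times each plus 10 barely-cited papers -> h-index of 25
print(h_index([30] * 25 + [3] * 10))  # → 25
```

Note that adding more papers with few citations never lowers the result, which is one reason an h-index can never decrease.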

STEP 1: Access Web of Science

Locate the Web of Science link on the Library website. If you are accessing the application remotely, remember to use the remote access link, also located on the Library website.

STEP 2: Searching for Your Articles

Enter your name (surname and initials) in the second search box (set to “Author”) and click Search.

  • e.g., smith, ja

If you have published under different names/initials, you will need to incorporate this into your search criteria by using truncation (e.g., smith, j*) or by using the "Author Search" option located directly under the search box.

Once you are satisfied with the search criteria, make a note of the criteria displayed on the results screen. This will save you time if you need to repeat the process.

  • e.g., Author = (smith, ja)
  • Refined by: Web of Science Categories = (CHEMISTRY PHYSICAL OR GEOGRAPHY PHYSICAL OR PHYSICS APPLIED)
  • Timespan = All years. Databases = SCI-EXPANDED, SSCI.

STEP 3: Creating and Using the Citation Report

On the left side of the results page is the “Create Citation Report” link, which will display the h-index, average citations per item/year, and other statistics. You have the option to refine the listing by selecting the checkboxes to remove individual items that are not yours from the Citation Report, or to restrict publication years.

You can see how the h-index has changed over time by revising the dates in the gray box directly above the first listed article.

Issues to be Aware of:

  • Web of Science counts the number of papers published, and therefore favors authors who publish more and are further along in their careers.
  • When comparing impact metrics, you need to compare similar authors in the same discipline, using the same database and the same method.
  • Be sure to indicate limitations.
  • In general, you can only compare values within a single discipline. Because of differing citation patterns, an average medical researcher will generally have a much larger h-index than a world-class mathematician.
  • The h-index may be less useful in some disciplines, particularly some areas of the humanities.

For further assistance in this process, please  Ask a Librarian  or call 303-497-8559.

  • University of Michigan Library
  • Research Guides

Research Impact Metrics: Citation Analysis

  • Web of Science
  • Google Scholar
  • Alternative Methods
  • Journal Citation Report
  • Scopus for Journal Ranking
  • Google Journal Metrics
  • Alternative Sources for Journal Ranking
  • Other Factors to Consider When Choosing a Journal
  • Finding Journal Acceptance Rates
  • Text/Data Mining for Citation Indexes

H-Index Overview

The h-index, or Hirsch index, measures the impact of a particular scientist rather than a journal. "It is defined as the highest number of publications of a scientist that received h or more citations each while the other publications have not more than h citations each." 1  For example, a scholar with an h-index of 5 has published 5 papers, each of which has been cited by others at least 5 times. The links below will take you to other areas within this guide which explain how to find an author's h-index using specific platforms.

NOTE: An individual's h-index may be very different in different databases. This is because the databases index different journals and cover different years. For instance, Scopus only considers work from 1996 or later, while the Web of Science calculates an h-index using all years that an institution has subscribed to. (So a Web of Science h-index might look different when searched through different institutions.)  

1  Schreiber, M. (2008). An empirical investigation of the g-index for 26 physicists in comparison with the h-index, the A-index, and the R-index.   Journal of the American Society for Information Science and Technology , 59(9), 1513.

  • Find an Individual Author's h-index Using the Citation Analysis Report in Web of Science
  • Find an Individual Author's h-index Using the Author Profile in Scopus
  • Find an Individual h-index Using Publish or Perish

Finding an Individual h-index Using Publish or Perish

  Finding h-Index using Publish or Perish

1.  The Publish or Perish site uses data from Google Scholar.  An explanation of citation metrics is available here .

2.  Publish or Perish is available in Windows and Linux formats and can be downloaded at no cost from the Publish or Perish website.

3.  Once you have downloaded the application, you can use Publish or Perish to find h-Index by entering a simple author search.  You can exclude names or deselect subject areas to the right of the search boxes to help with disambiguation of authors.

4.  The h-Index will display on the results page.

5.  You can narrow your search results further by deselecting individual articles.  The h-Index will update dynamically as you do this.

Do researchers know what the h-index is? And how do they estimate its importance?

  • Open access
  • Published: 26 April 2021
  • Volume 126 , pages 5489–5508, ( 2021 )


  • Pantea Kamrani   ORCID: orcid.org/0000-0002-8880-8105 1 ,
  • Isabelle Dorsch   ORCID: orcid.org/0000-0001-7391-5189 1 &
  • Wolfgang G. Stock   ORCID: orcid.org/0000-0003-2697-3225 1 , 2  


The h-index is a widely used scientometric indicator at the researcher level, working with a simple combination of publication and citation counts. In this article, we pursue two goals: collecting empirical data about researchers’ personal estimations of the importance of the h-index for themselves as well as for their academic disciplines, and assessing researchers’ concrete knowledge of the h-index and the way it is calculated. We worked with an online survey (including a knowledge test on the calculation of the h-index), which was completed by 1081 German university professors. We distinguished between the results for all participants and, additionally, the results by gender, generation, and field of knowledge. We found a clear binary division between the academic knowledge fields: for the sciences and medicine, the h-index is important for the researchers themselves and for their disciplines, while for the humanities and social sciences, economics, and law it is considerably less important. Two-fifths of the professors either do not know details of the h-index or wrongly believe they know what it is and failed our test. Researchers’ knowledge of the h-index is much smaller in the academic branches of the humanities and the social sciences. As the h-index is important for many researchers, and as not all researchers are knowledgeable about this author-specific indicator, it seems necessary to raise researchers’ awareness of scholarly metrics literacy.


Introduction

In 2005, Hirsch introduced his famous h-index. It combines two important scientometric measures, namely the publication count of a researcher (as an indicator of his or her research productivity) and the citation count of those publications (as an indicator of his or her research impact). Hirsch (2005, p. 16,569) defines: “A scientist has index h if h of his or her N_p papers have at least h citations each and the other (N_p – h) papers have < h citations each.” If a researcher has written 100 articles, for instance, 20 of these having been cited at least 20 times and the other 80 less than that, then the researcher’s h-index will be 20 (Stock and Stock 2013, p. 382). Following Hirsch, the h-index “gives an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contribution” (Hirsch 2005, p. 16,572). Hirsch (2007) assumed that his h-index may predict researchers’ future achievements. Looking at this in retrospect, Hirsch had hoped to create an “objective measure of scientific achievement” (Hirsch 2020, p. 4) but has come to believe that it could be the opposite. Indeed, it became a measure of scientific achievement, albeit a very questionable one.

Also in 2005, Hirsch derived the m-index with the researcher’s “research age” in mind. Let the number of years since a researcher’s first publication be t_p. The m-index is the quotient of the researcher’s h-index and his or her research age: m = h / t_p (Hirsch 2005, p. 16,571). An m-value of 2 would mean, for example, that a researcher has reached an h-value of 20 after 10 research years. Meanwhile, the h-index has become firmly embedded in our scientific system. It became one of the “standard indicators” in scientific information services and can be found in many general scientific bibliographic databases. Besides, it is used in various contexts and has generated a lot of research and discussion. This indicator is used, or rather misused, depending on one’s point of view, in decisions about researchers’ career paths, e.g. as part of academics’ evaluation concerning awards, funding allocations, promotion, and tenure (Ding et al. 2020; Dinis-Oliveira 2019; Haustein and Larivière 2015; Kelly and Jennions 2006). For Jappe (2020, p. 13), one of the arguments for the use of the h-index in evaluation studies is its “robustness with regards to incomplete publication and citation data.” On the contrary, the index is well known for its inconsistencies, its unsuitability for comparisons between researchers at different career stages, and its missing field normalization (Costas and Bordons 2007; Waltman and van Eck 2012). Various lists of advantages and disadvantages of the h-index already exist (e.g. Rousseau et al. 2018). And it is still questionable what concept the h-index actually represents, due to its conflation of the two concepts of productivity and impact into one single number (Sugimoto and Larivière 2018).
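The m-index defined above is a one-line calculation; a minimal sketch, using the example numbers from the text:

```python
def m_index(h, research_age_years):
    """Hirsch's m-index: the h-index divided by the number of years
    since the researcher's first publication (the "research age")."""
    return h / research_age_years

# An h-index of 20 after 10 research years yields m = 2, as in the example above
print(m_index(20, 10))  # → 2.0
```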

It is easy to identify many variants of the h-index, concerning both the underlying data and the concrete formula of calculation. Working with the numbers of publications and their citations, there are data based on the leading general bibliographic information services Web of Science (WoS), Scopus, and Google Scholar, and, additionally, on ResearchGate (da Silva and Dobranszki 2018); working with publication numbers and the number of the publications’ reads, there are data based on Mendeley (Askeridis 2018). Depending on an author’s visibility on an information service (Dorsch 2017), we see different values for the h-indices on WoS, Scopus, and Google Scholar (Bar-Ilan 2008), mostly following the inequality h_WoS(R) < h_Scopus(R) < h_GoogleScholar(R) for a given researcher R (Dorsch et al. 2018). Bearing in mind that WoS consists of many databases (Science Citation Index Expanded, Social Science Citation Index, Arts & Humanities Citation Index, Emerging Sources Citation Index, Book Citation Index, Conference Proceedings Citation Index, etc.) and that libraries do not always provide access to all of them (or to all years), it is no surprise that we find different h-indices on WoS depending on the subscribed sources and years (Hu et al. 2020).

After Hirsch’s publication of the two initial formulas (i.e. the h-index and the time-adjusted m-index), many scientists felt compelled to produce similar, but only slightly mathematically modified, formulas that did not lead to brand-new scientific insights (Alonso et al. 2009; Bornmann et al. 2008; Jan and Ahmad 2020), as there are high correlations between the values of the variants (Bornmann et al. 2011).

How do researchers estimate the importance of the h-index? Do they really know its concrete definition and formula? In a survey for Springer Nature (N = 2734 authors of Springer Nature and BioMed Central), Penny (2016, slide 22) found that 67% of the scientists asked use the h-index and a further 22% are aware of it but have not used it; however, 10% of respondents do not know what the h-index is. Rousseau and Rousseau (2017) asked members of the International Association of Agricultural Economists and gathered 138 answers. Here, more than two-fifths of all respondents did not know what the h-index is (Rousseau and Rousseau 2017, p. 481). Among Taiwanese researchers (n = 417), 28.78% self-reported having heard about the h-index and fully understanding the indicator, whereas 22.06% had never heard of it. The remainder stated that they had heard of it but knew only some aspects or none of its content (Chen and Lin 2018). For academics in Ireland (n = 19), “journal impact factor, h-index, and RG scores” are familiar concepts, but “the majority cannot tell how these metrics are calculated or what they represent” (Ma and Ladisch 2019, p. 214). Likewise, the interviewed academics (n = 9) could name “more intricate metrics like h-index or Journal Impact Factor, [but] were barely able to explain correctly how these indicators are calculated” (Lemke et al. 2019, p. 11). The knowledge about scientometric indicators in general “is quite heterogeneous among researchers,” Rousseau and Rousseau (2017, p. 482) state. This is confirmed by further studies on the familiarity, perception, or usage of research evaluation metrics in general (Aksnes and Rip 2009; Derrick and Gillespie 2013; Haddow and Hammarfelt 2019; Hammarfelt and Haddow 2018).

In a blog post, Tetzner (2019) speculates on concrete numbers for a “good” h-index for academic positions. Accordingly, an h-index between 3 and 5 is good for a new assistant professor, an index between 8 and 12 for a tenured associate professor, and, finally, an index of more than 15 for a full professor. However, these numbers are gross generalizations without a sound empirical foundation. As our data are from Germany, the question arises: what kinds of tools do German funders, universities, etc. use for research evaluation? Unfortunately, there are only a few publications on this topic. For scientists at German universities, bibliometric indicators (including the h-index and the impact factor) are important or very important for scientific reputation for more than 55% of the respondents (Neufeld and Johann 2016, p. 136). Those indicators also have relevance, or even great relevance, for hiring decisions on academic positions in the estimation of more than 40% of the respondents (Neufeld and Johann 2016, p. 129). In a ranking of aspects of reputation of medical scientists, the h-index takes rank 7 (with a mean value of 3.4, with 5 being the best) out of 17 evaluation criteria. Top-ranked indicators are the reputation of the journals of the scientists’ publications (4.1), the scientists’ citations (4.0), and their publication count (3.7) (Krempkow et al. 2011, p. 37). For the hiring of psychology professors in Germany, the h-index had factual relevance for the tenure decision, with a mean value of 3.64 (on a six-point scale), and ranks at position 12 out of more than 40 criteria for a professorship (Abele-Brehm and Bühner 2016). Here, the number of peer-reviewed publications is top-ranked (mean value of 5.11). Obviously, these few studies highlight that the h-index indeed has relevance for research evaluation in Germany, next to publication and citation numbers.

What is still a research desideratum is an in-depth description of researchers’ personal estimations on the h-index and an analysis of possible differences concerning researchers’ generation, their gender, and the discipline.

What about the researchers’ state of knowledge of the h-index? Of course, we may ask, “What is your knowledge of the h-index? Estimate it on a scale from 1 to 5!” But personal estimations are subjective and do not substitute for a test of knowledge (Kruger and Dunning 1999). Knowledge tests on researchers’ state of knowledge concerning the h-index are, to the best of our knowledge, a research desideratum, too.

In this article, we pursue two goals, namely on the one hand—similar to Buela-Casal and Zych ( 2012 ) on the impact factor—the collection of data about researchers’ personal estimations of the importance of the h-index for themselves as well as their discipline, and on the other hand data on the researchers’ concrete knowledge on the h-index and the way of its calculation. In short, these are our research questions:

RQ1: How do researchers estimate the importance of the h-index?

RQ2: What is the researchers’ knowledge on the h-index?

In order to answer RQ1, we asked researchers on their personal opinions; to answer RQ2, we additionally performed a test of their knowledge.

Online survey

Online-survey-based questionnaires provide a means of generating quantitative data. Furthermore, they ensure anonymity and thus a greater willingness to disclose personal information, preferences, and one’s own knowledge. Therefore, we decided to work with an online survey. As we live and work in Germany, we know the German academic landscape well and thus restricted ourselves to professors working at German universities. We focused on university professors as the sample population (and skipped other academic staff at universities as well as professors at universities of applied sciences), because we wanted to concentrate on persons who (1) have an established career path (in contrast to other academic staff) and (2) are to a high extent oriented towards publishing their research results (in contrast to professors at universities of applied sciences, formerly called Fachhochschulen, i.e. polytechnics, who are primarily oriented towards practice).

The online questionnaire (see Appendix 1 ) in German language contained three different sections. In Sect.  1 , we asked for personal data (gender, age, academic discipline, and university). Section  2 is on the professors’ personal estimations of the importance of publications, citations, their visibility on WoS, Scopus, and Google Scholar, the h-index on the three platforms, the importance of the h-index in their academic discipline, and, finally, their preferences concerning h-index or m-index. We chose those three information services as they are the most prominent general scientific bibliographic information services (Linde and Stock 2011 , p. 237) and all three present their specific h-index in a clearly visible way. Section  3 includes the knowledge test on the h-index and a question concerning the m-index.

In this article, we report on all aspects in relation with the h-index (for other aspects, see Kamrani et al. 2020 ). For the estimations, we used a 5-point Likert scale (from 1: very important via 3: neutral to 5: very unimportant) (Likert 1932 ). It was possible for all estimations to click also on “prefer not to say.” The test in Sect.  3 was composed of two questions, namely a subjective estimation of the own knowledge on the h-index and an objective knowledge test on this knowledge with a multiple-choice test (items: one correct answer, four incorrect ones as distractors, and the option “I’m not sure”). Those were the five items (the third one being counted as correct):

h is the quotient of the number of citations of journal articles in a reference period and the number of published journal articles in the same period;

h is the quotient of the general number of citations of articles (in a period of three years) and the number of citations of a researcher’s articles (in the same three years);

h is the number of articles by a researcher, which were cited h times at minimum;

h is the number of all citations concerning the h-index, minus h²;

h is the quotient of the number of citations of a research publication and the age of this publication.

A selected-response format was chosen for the objective knowledge test since it is recommended as the best choice for measuring knowledge (Haladyna and Rodriguez 2013). For the development of the knowledge test items, we predominantly followed the 22 recommendations given by Haladyna and Rodriguez (2013, section II). A three-option multiple-choice format is said to be superior to four- or five-option formats for several reasons; however, we decided to use five options because our test contained only one question. The “I’m not sure” option was added because our test is not a typical (classroom) assessment test. We therefore did not want to force an answer, for example through guessing, but rather wanted to know whether participants did not know the correct answer. Creating reliable distractors can be seen as the most difficult part of test development. Furthermore, validation is a crucial task. Here we tested and validated the question to the best of our knowledge.

As no ethical review board was involved in our research, we had to determine the ethical harmlessness of the research project ourselves and followed suggestions for ethical research using online surveys, such as consent, risk, privacy, anonymity, confidentiality, and autonomy (Buchanan and Hvizdak 2009). We found the e-mail addresses of the participants in a publicly accessible source (a handbook of all German faculty members, Deutscher Hochschulverband 2020); participation was voluntary, and the participants knew that their answers would be stored. At no time were participants individually identifiable through our data collection or preparation, as we strictly anonymized all questionnaires.

Participants

The addresses of the university professors were randomly extracted from the German Hochschullehrer-Verzeichnis (Deutscher Hochschulverband 2020). Our procedure was thus non-probability sampling, more precisely convenience sampling in combination with volunteer sampling (Vehovar et al. 2016). Starting with volume 1 of the 2020 edition of the handbook, we randomly picked entries and noted the e-mail addresses. The link to the questionnaire was distributed to every single professor via the found e-mail addresses; to host the survey, we used UmfrageOnline. To strengthen the power of the statistical analysis, we predefined a minimum of 1000 usable questionnaires. The power tables provided by Cohen (1988) have a maximum of n = 1000 participants. Therefore, we chose this sample size to ensure statistically significant results, also for smaller subsets such as single genders, generations, and disciplines (Cohen 1992). We started the mailing in June 2019 and stopped in March 2020, when we had received more than 1000 valid questionnaires. All in all, we contacted 5722 professors by mail and arrived at 1081 completed questionnaires, which corresponds to a response rate of 18.9%.

Table 1 shows a comparison between our sample of German university professors and the population as found in the official statistics (Destatis 2019). There are only minor differences concerning the gender distribution and also few divergences concerning most disciplines; however, Table 1 exhibits two large differences: our sample contains more (natural) scientists than the official statistics and fewer scholars from the humanities and the social sciences.

In our analysis, we always distinguished between the results for all participants and, additionally, the results by gender (Geraci et al. 2015), generation (Fietkiewicz et al. 2016), and field of knowledge (Hirsch and Buela-Casal 2014). We differentiated two genders (men, women) (note that the questionnaire also provided the options “diverse” and “prefer not to say,” which were excluded from further calculations concerning gender); four generations: Generation Y (born after 1980), Generation X (born between 1960 and 1980), Baby Boomers (born after 1946 and before 1960), and the Silent Generation (born before 1946); and six academic disciplines: (1) geosciences, environmental sciences, agriculture, forestry; (2) humanities, social sciences; (3) sciences (including mathematics); (4) medicine; (5) law; and (6) economics. This division of knowledge fields is in line with the faculty structure of many German universities. As some participants answered some questions with “prefer not to say” (which was excluded from further calculations), the sum of all answers is not always 1081.

As our Likert scale is an ordinal scale, we calculated in each case the median as well as the interquartile range (IQR). For the analysis of significant differences, we applied the Mann–Whitney U test (Mann and Whitney 1947) (for the two values of gender) and the Kruskal–Wallis H test (Kruskal and Wallis 1952) (for more than two values, such as the generations and academic disciplines). The data on the researchers’ knowledge of the h-index are on a nominal scale, so we calculated relative frequencies for three values (1: the researcher knows the h-index in her/his self-estimation and passed the test; 2: the researcher does not know the h-index in her/his self-estimation; 3: the researcher knows the h-index in her/his self-estimation and failed the test) and used the chi-squared test (Pearson 1900) for the analysis of differences between gender, knowledge area, and generation. We distinguish between three levels of statistical significance, namely *: p ≤ 0.05 (significant), **: p ≤ 0.01 (very significant), and ***: p ≤ 0.001 (extremely significant); however, one always has to interpret such values with caution (Amrhein et al. 2019). All calculations were done with the help of SPSS (see a sketch of the data analysis plan in Appendix 2).
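As a small illustration of the ordinal-scale summary used here, the median and IQR of a set of 5-point Likert answers can be computed with Python's standard library (the responses below are hypothetical, not the survey data):

```python
import statistics

# Hypothetical 5-point Likert answers (1: very important ... 5: very unimportant)
answers = [1, 2, 2, 2, 3, 3, 4, 5]

median = statistics.median(answers)
q1, _, q3 = statistics.quantiles(answers, n=4)  # quartiles, default 'exclusive' method
iqr = q3 - q1
print(f"median = {median}, IQR = {iqr}")
```

A low IQR indicates that the Likert responses cluster tightly around the median, which is why the paper reports both numbers together.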

Researchers’ estimations of the h-index

How do researchers estimate the importance of the h-index for their academic discipline? And how important is the h-index (on WoS, Scopus, and Google Scholar) for themselves? In this paragraph, we will answer our research question 1.

Table 2 shows the different researcher estimations of the importance of the h-index for their discipline. While for all participants the h-index is “important” (2) for their academic field (median 2, IQR 1), there are massive and extremely significant differences between the single disciplines. For the sciences, medicine, and the geosciences (including environmental sciences, agriculture, and forestry) the h-index is much more important (median 2, IQR 1) than for economics (median 3, IQR 1), the humanities and social sciences (median 4, IQR 2), and law (median 5, IQR 0). The most votes for “very important” come from medicine (29.1%), the fewest from the humanities and social sciences (1.0%) as well as from law (0.0%). Conversely, the most very negative estimations (5: “very unimportant”) can be found among lawyers (78.6%) and scholars from the humanities and social sciences (30.4%). There is a clear cut between the sciences (including the geosciences, etc., and medicine) on one hand and the humanities and all social sciences (including law and economics) on the other, with a stark importance of the h-index for the former disciplines and a weak importance for the latter.

In Tables 3, 4 and 5 we find the results for the researchers’ estimations of the importance of their own h-index on WoS (Table 3), Scopus (Table 4), and Google Scholar (Table 5). For all participants, the h-index on WoS is the most important one (median 2; however, with a wide dispersion of IQR 3), leaving Scopus and Google Scholar behind it (median 3, IQR 2 for both services). For all three bibliographic information services, the estimations of men and women do not differ statistically. For scientists (including geoscientists, etc.), a high h-index on WoS and Scopus is important (median 2); interestingly, economists join the scientists when it comes to the importance of the h-index on Google Scholar (all three disciplines having a median of 2). For scholars from the humanities and social sciences, the h-indices on all three services are unimportant (median 4); for lawyers they are even very unimportant (median 5). For researchers in medicine there is a decisive ranking: most important is their h-index on WoS (median 2, IQR 2, and 41.5% votes for “very important”), followed by Scopus (median 2, IQR 1, but only 18.4% votes for “very important”) and, finally, Google Scholar (median 3, IQR 1, with the mode also equal to 3, “neutral”). For economists, the highest share of (1)-votes (“very important”) is found for Google Scholar (29.9%), in contrast to the fee-based services WoS (19.7%) and Scopus (12.2%).

Similar to the results for the knowledge areas, there is also a clear result concerning the generations: the older a researcher, the less important his or her own h-index is to him or her. We see a declining number of (1)-votes for all three information services, and a median moving over the generations from 2 to 3 (WoS), 2 to 4 (Scopus), and 2 to 3 (Google Scholar). The youngest generation prefers the h-index on Google Scholar ((1)-votes: 34.9%) over the h-indices on WoS ((1)-votes: 25.9%) and Scopus ((1)-votes: 19.8%).

A very interesting result of our study is the impressive difference in the importance estimations of the h-index by discipline (Fig. 1). With three tiny exceptions, the estimations of the general importance and of the importance of the h-indices on WoS, Scopus, and Google Scholar are consistent within each scientific discipline. For the natural sciences, geosciences, etc., and medicine, the h-index is important (median 2); for economics, it is neutral (median 3); for the humanities and social sciences, it is unimportant (median 4); and for law, finally, this index is even very unimportant (median 5).

Fig. 1 Researchers’ estimations of the h-index by discipline (medians). N = 1001 (general importance), N = 961 (WoS), N = 946 (Scopus), N = 966 (Google Scholar). Scale: (1) very important, (2) important, (3) neutral, (4) unimportant, (5) very unimportant

We do not want to withhold a side result on a proposed modification of the h-index, the time-adjusted m-index. 567 participants made a decision: for 50.8% of them the h-index is the better indicator, while 49.2% prefer the m-index. More women (61.1%) than men (47.3%) choose the m-index over the original h-index. All academic disciplines except one prefer the m-index; the scientists are the exception (only 42.8% approval for the m-index). For members of Generation Y, the Baby Boomers, and the Silent Generation the m-index is the preferable index; Generation X mainly (54.3%) prefers the h-index. Within the youngest generation, Generation Y (which is disadvantaged by the h-index), the majority of researchers (65.5%) prefers the m-index over the h-index.
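The time adjustment behind the m-index is simple: Hirsch’s m divides a researcher’s h-index by the number of years since her or his first publication, which removes part of the career-length advantage that Generation Y objects to. A minimal sketch (the researcher data are invented for illustration):

```python
# Hirsch's m-index: h divided by career length in years (illustrative sketch).

def m_index(h, first_pub_year, current_year):
    years = max(current_year - first_pub_year, 1)  # avoid division by zero
    return h / years

# Two hypothetical researchers with the same h-index of 20:
print(m_index(20, 2000, 2020))  # senior, 20-year career: m = 1.0
print(m_index(20, 2010, 2020))  # junior, 10-year career: m = 2.0
```

With equal h-indices, the younger researcher gets the higher m, which explains why younger cohorts tend to favor this variant.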

Researchers’ state of knowledge on the h-index

Answering our research question 2, the overall result is presented in Fig. 2. It combines three questions, as we first asked the researchers for their personal estimations of their general familiarity with the h-index (Appendix 1, Q10) and of their knowledge of its calculation (Q13). Only participants who confirmed that they have knowledge of the indicator’s calculation (Q10 and Q13) took the knowledge test (Q14). About three fifths of the professors know the h-index according to their self-estimations and passed the test, one third of all answering participants do not know the h-index according to their self-estimations, and, finally, 7.2% wrongly estimated their knowledge of the h-index, as they believed they knew it but failed the test.
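What the knowledge test asked for is the standard definition by Hirsch (2005): a researcher has index h if h of her or his papers have at least h citations each. A minimal sketch of the calculation (the citation counts are invented examples):

```python
# Standard h-index calculation: sort citation counts in descending order and
# find the largest rank h such that the paper at rank h has >= h citations.

def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with at least 4 citations each)
print(h_index([25, 8, 5, 3, 3]))  # -> 3 (one highly cited paper does not raise h)
print(h_index([]))                # -> 0
```

The second example illustrates a property discussed later in the paper: a single extremely cited paper barely moves the h-index, which is exactly the kind of detail the failed test-takers were missing.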

Fig. 2 Researchers’ state of knowledge on the h-index: the basic distribution. N = 1017

In contrast to many of our results concerning the researchers’ estimations of the importance of the h-index, we see differences in knowledge of the h-index by gender (Table 6). Only 41.6% of the women have confirmed knowledge (men: 64.6%), 50.0% do not know the definition or the formula of the h-index (men: 28.7%), and 8.3% wrongly estimate their knowledge as sufficient (men: 6.9%). However, these differences are not statistically significant.

In the sciences (incl. geosciences, etc.) and in medicine, more than 70% of the participants know how to calculate the h-index. Scientists have the highest level of knowledge of the h-index (79.1% passed the knowledge test). Participants from the humanities and social sciences (21.1%) as well as from law (7.1%) exhibit the lowest levels of knowledge concerning the h-index. With a share of 48.3%, economists take a middle position between the two main groups of researchers; however, 13.8% of economists wrongly overestimate their knowledge.

We found a clear result concerning the generations: the older the researcher, the less knowledge of the h-index. While 62.9% of Generation X know how the h-index is calculated, only 53.2% of the Baby Boomers possess this knowledge. The differences in the researchers’ levels of knowledge of the h-index across the knowledge areas and across the generations are each extremely significant.

Main results

Our main results concern the researchers’ estimations of the h-index and their state of knowledge of this scientometric indicator. We found a clear binary division between the academic knowledge fields: for the sciences (including geosciences, agriculture, etc.) and medicine, the h-index is important for the researchers themselves and for their disciplines, while for the humanities and social sciences, economics, and law the h-index is considerably less important. For the respondents from the sciences and medicine, the h-index on WoS is most important, followed by the h-indices of Google Scholar and Scopus. Surprisingly, for economists Google Scholar’s h-index is very attractive. We did not find significant differences between men’s and women’s estimations of the importance of the h-index; however, there are differences concerning the generations: the older the participants, the lower they rate the importance of the h-index.

Probably, for older professors the h-index does not have the same significance as for their younger colleagues, as they have less need to plan their further careers or to apply for new research projects. On average, the productivity of researchers aged 60 and over declines in comparison with younger colleagues (Kyvik 1990). And perhaps some of them simply do not know of the existence of more recent services and of new scientometric indicators. Younger researchers are more tolerant of novelty in their work (Packalen and Bhattacharya 2015), and such novelty includes new information services (such as Scopus and Google Scholar) as well as new indicators (such as the h-index). It is known that young researchers rely heavily on search engines like Google (Rowlands et al. 2008), which may partly explain the high values for Google Scholar, especially from Generation Y. Furthermore, the increasing publication pressure and the use of the h-index in decisions about early-career researchers’ professional paths also raise the importance of the indicator for these young professors (Farlin and Majewski 2013).

All in all, two fifths of the professors either do not know the concrete calculation of the h-index or, which is rather alarming, wrongly believe they know what the h-index is and failed our simple knowledge test. Women do even worse, as only about two fifths really know what the h-index is and how it is defined and calculated; however, we should keep in mind that this gender difference is not statistically significant. The older the researcher, the higher the share of participants who do not know the definition and calculation of the h-index. The researchers’ knowledge of the h-index is much lower in the academic disciplines of the humanities and the social sciences.

The h-index in the academic areas

The obvious differences between the academic areas especially demand further explanation. Participants from the natural sciences and from medicine estimate the importance of the h-index as “important” or even “very important,” and they know the details of this indicator to a high extent. The participants from the humanities, the social sciences, economics, and law are quite different. They estimate the h-index’s importance as “neutral,” “unimportant,” or even “very unimportant,” and the share of researchers with profound knowledge of the h-index is quite low. Haddow and Hammarfelt (2019) also report a lower use of the h-index within these fields; in their study, too, researchers in the field of law (n = 24) in particular did not use the h-index. All researchers publish, and all cite, too. There are differences in their publication channels, as scientists publish mostly in journals while researchers from the humanities publish in monographs and sometimes also in journals (Kulczycki et al. 2018), but this may not explain the differences concerning the importance of, and the state of knowledge about, the h-index. Furthermore, more information on how such disciplinary perceptions of the h-index relate to its use (and misuse) for research evaluation within those disciplines would add another dimension to this topic.

Although indeed very large, the general information services WoS and Scopus are quite incomplete compared to researchers’ personal publication lists (Hilbert et al. 2015). There is also a pronounced unequal coverage of certain disciplines (Mongeon and Paul-Hus 2016) and of many languages other than English (Vera-Baceta et al. 2019). Perhaps these facts in particular prevent representatives of the disadvantaged disciplines and languages (including German, and we asked German professors) from rating their h-index on these platforms as important. The observable rejection of Google Scholar’s h-index is then surprising, however, because this information service is by far the most complete (Martin-Martin et al. 2018). Economists, on the other hand, are very well informed here, as they, as the only academic representatives, highly value their h-index on Google Scholar. At the same time, the use of Google Scholar for research evaluation is debated in general. Although its coverage is usually broader than that of more controlled databases and its collection is steadily expanding, there are widely known issues, for example its low accuracy (Halevi et al. 2017). Depending on a researcher’s own opinion on this topic, this could likewise be a reason for seeing no importance in the h-index provided by Google Scholar.

Another possible explanation lies in the different cultures of the different research areas. For Kagan (2009, p. 4), natural scientists see their main interest in explanation and prediction, while for humanists it is understanding (following Snow 1959 and Dilthey 1895, p. 10). The h-index is called an indicator allowing explanation and prediction of scientific achievement (Hirsch 2007); it is typical of the culture of the natural sciences. Researchers from the natural sciences and from medicine are accustomed to numbers, while humanists seldom work quantitatively. In the humanities, other indicators such as book reviews and the quality of book publishers are components of research evaluation; however, such aspects are not reflected by the h-index. And if humanities scholars are never asked for their h-index, why should they know or use it?

Following Kagan (2009, p. 5) a second time, humanists exhibit only minimal dependence on outside support, while natural scientists are highly dependent on external sources of financing. The h-index can work as an argument in the allocation of outside support. So for natural scientists the h-index is a very familiar tool, and they need it for their academic survival; humanists are not as familiar with numerical indicators, and for them the h-index is not as essential as it is for their colleagues from the science and medicine faculties. However, this dichotomous classification of research and researchers may be an oversimplification (Kowalski and Mrdjenovich 2016), and there is a trend towards consulting and using such research evaluation indicators in the humanities and social sciences, too. For a satisfying theory of researchers’ behavior concerning the h-index (or scientometric indicators in general), also in dependence on their academic background, more research is needed.

Limitations, outlook, and recommendations

A clear limitation of the study is the studied population, namely university professors from Germany. Of course, researchers in other countries should be included in further studies. It also seems necessary to broaden the view to all researchers and all occupational areas, including, for instance, lecturers at polytechnics and researchers in private companies. Another limitation is the consideration of only three h-indices (those of WoS, Scopus, and Google Scholar). As there are other databases for the calculation of an h-index (e.g., ResearchGate), the study should be broadened to all variants of the h-index.

Another interesting research question may be: are there any correlations between the estimation of the importance of the h-index, or the researcher’s knowledge of the h-index, and the researcher’s own h-index? Does a researcher with a high h-index on, for instance, WoS estimate the importance of this indicator higher than a researcher with a low h-index? Hirsch (2020) speculates that people with high h-indices are more likely to think that this indicator is important. A more in-depth analysis of the self-estimation of researchers’ h-index knowledge might also consider the Dunning–Kruger effect, according to which people can be wrongly confident about their limited knowledge within a domain while lacking the ability to realize this (Kruger and Dunning 1999).

As the h-index still has an important impact on the evaluation of scientists, and as not all researchers are very knowledgeable about this author-specific research indicator, it seems a good idea to strengthen their knowledge in the broader area of “metric-wiseness” (Rousseau et al. 2018; Rousseau and Rousseau 2015). With a stronger focus on educating researchers and research support staff in the application and interpretation of metrics, as well as on reducing the misuse of indicators, Haustein (2018) speaks of better (scholarly) “metrics literacies.” Following Hammarfelt and Haddow (2018), we should further discuss the possible effects of indicators within the “metrics culture.” Likewise, this applies to knowledgeable researchers as well as to research evaluators, who may or may not be researchers themselves. Here, the focus lies rather on raising awareness of metrics literacies and on fostering fair research evaluation practices that do not incorporate any kind of misuse. This leads directly to a research gap in scientometrics. Further research providing concrete data on the level of researchers’ knowledge not only of the h-index but also of other indicators, such as WoS’s impact factor, Google’s i10-index, Scopus’s CiteScore, and the source normalized impact per paper (SNIP), also in a comparative perspective, would draw a more comprehensive picture of current indicator knowledge. All of the by now “classical” scientometric indicators are based upon publication and citation measures (Stock 2001). Alternative indicators based upon social media metrics, called “altmetrics,” are available today (Meschede and Siebenlist 2018; Thelwall et al. 2013). How do researchers estimate the importance of these alternative indicators, and do they know their definitions and their formulae of calculation? Lemke et al. (2019) give first insights into this, also with regard to researchers’ personal preferences and concerns.

Following Hirsch (2020), the h-index is by no means a valid indicator of research quality; however, it is very common, especially in the sciences and medicine. Probably, it is a convenient indicator for some researchers who want to avoid the laborious and time-consuming work of reviewing and scrutinizing other researchers’ œuvre. Apart from its convenience and popularity, and seen from an ethical perspective, one should consider what significance a single metric should have and how, in general, we want to further shape the future of research evaluation.

Abele-Brehm, A., & Bühner, M. (2016). Wer soll die Professur bekommen? Eine Untersuchung zur Bewertung von Auswahlkriterien in Berufungsverfahren der Psychologie. Psychologische Rundschau, 67 , 250–261. https://doi.org/10.1026/0033-3042/a000335 .


Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38 (6), 895–905. https://doi.org/10.1016/j.respol.2009.02.001 .

Amrhein, V., Greenland, S., & McShane, B. (2019). Retire statistical significance. Nature, 567 (7748), 305–307. https://doi.org/10.1038/d41586-019-00857-9 .

Askeridis, J. (2018). An h index for Mendeley: Comparison of citation-based h indices and a readership-based h men index for 29 authors. Scientometrics, 117 , 615–624. https://doi.org/10.1007/s11192-018-2882-8 .

Bar-Ilan, J. (2008). Which h-index?—A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74 (2), 257–271. https://doi.org/10.1007/s11192-008-0216-y .

Bornmann, L., Mutz, R., & Daniel, H.-D. (2008). Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology, 59 (5), 830–837. https://doi.org/10.1002/asi.20806 .

Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H.-D. (2011). A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5 , 346–359. https://doi.org/10.1016/j.joi.2011.01.006 .

Buchanan, E. A., & Hvizdak, E. E. (2009). Online survey tools: Ethical and methodological concerns of human research ethics committees. Journal of Empirical Research on Human Research Ethics, 4 (2), 37–48. https://doi.org/10.1525/jer.2009.4.2.37 .

Buela-Casal, G., & Zych, I. (2012). What do the scientists think about the impact factor? Scientometrics, 92 , 281–292. https://doi.org/10.1007/s11192-012-0676-y .

Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E., & Herrera, F. (2009). H-index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics, 3 (4), 273–289. https://doi.org/10.1016/j.joi.2009.04.001 .

Chen, C. M.-L., & Lin, W.-Y. C. (2018). What indicators matter? The analysis of perception toward research assessment indicators and Leiden Manifesto. The case study of Taiwan. In R. Costas, T. Franssen, & A. Yegros-Yegros (Eds.), Proceedings of the 23rd International Conference on Science and Technology Indicators (STI 2018) (12–14 September 2018) (pp. 688–698). Leiden, NL: Centre for Science and Technology Studies (CWTS). https://openaccess.leidenuniv.nl/bitstream/handle/1887/65192/STI2018_paper_121.pdf?sequence=1.

Cohen, J. (1988). Statistical power analysis for the behavioral science . (2nd ed.). Hillsdale: Lawrence Erlbaum. https://doi.org/10.4324/9780203771587 .


Cohen, J. (1992). A power primer. Psychological Bulletin, 112 (1), 155–159. https://doi.org/10.1037//0033-2909.112.1.155 .

Costas, R., & Bordons, M. (2007). The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics, 1 (3), 193–203. https://doi.org/10.1016/j.joi.2007.02.001 .

da Silva, J. A. T., & Dobranszki, J. (2018). Multiple versions of the h-index: Cautionary use for formal academic purposes. Scientometrics, 115 (2), 1107–1113. https://doi.org/10.1007/s11192-018-2680-3 .

Derrick, G. E., & Gillespie, J. (2013). A number you just can’t get away from: Characteristics of adoption and the social construction of metric use by researchers. In S. Hinze & A. Lottman (Eds.), Proceedings of the 18th International Conference on Science and Technology Indicators (pp. 104–116). Berlin, DE: Institute for Research Information and Quality Assurance. http://www.forschungsinfo.de/STI2013/download/STI_2013_Proceedings.pdf.

Destatis. (2019). Bildung und Kultur. Personal an Hochschulen (Fachserie 11, Reihe 4.4). Wiesbaden, Germany: Statistisches Bundesamt. https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Bildung-Forschung-Kultur/Hochschulen/Publikationen/Downloads-Hochschulen/personal-hochschulen-2110440187004.html.

Deutscher Hochschulverband. (2020). Hochschullehrer-Verzeichnis 2020, Band 1: Universitäten Deutschland. 28th Ed. Berlin, New York: De Gruyter Saur. https://db.degruyter.com/view/product/549953.

Dilthey, W. (1895). Ideen über eine beschreibende und zergliedernde Psychologie. Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin, 7. Juni 1894, Ausgabe XXVI, Sitzung der philosophisch historischen Classe , 1–88. http://www.uwe-mortensen.de/Dilthey%20Ideen%20beschreibendezergliederndePsychologie.pdf.

Ding, J., Liu, C., & Kandonga, G. A. (2020). Exploring the limitations of the h-index and h-type indexes in measuring the research performance of authors. Scientometrics, 122 (3), 1303–1322. https://doi.org/10.1007/s11192-020-03364-1 .

Dinis-Oliveira, R. J. (2019). The h-index in life and health sciences: Advantages, drawbacks and challenging opportunities. Current Drug Research Reviews, 11 (2), 82–84. https://doi.org/10.2174/258997751102191111141801 .

Dorsch, I. (2017). Relative visibility of authors’ publications in different information services. Scientometrics, 112 , 917–925. https://doi.org/10.1007/s11192-017-2416-9 .

Dorsch, I., Askeridis, J., & Stock, W. G. (2018). Truebounded, overbounded, or underbounded? Scientists’ personal publication lists versus lists generated through bibliographic information services. Publications, 6 (1), 1–9. https://doi.org/10.3390/publications6010007 .

Farlin, J., & Majewski, M. (2013). Performance indicators: The educational effect of publication pressure on young researchers in environmental sciences. Environmental Science and Technology, 47 (6), 2437–2438. https://doi.org/10.1021/es400677m .

Fietkiewicz, K. J., Lins, E., Baran, K. S., & Stock, W. G. (2016). Inter-generational comparison of social media use: Investigating the online behavior of different generational cohorts. In Proceedings of the 49th Hawaii international conference on system sciences (pp. 3829–3838). Washington, DC: IEEE Computer Society. https://doi.org/10.1109/HICSS.2016.477.

Geraci, L., Balsis, S., & Busch, A. J. B. (2015). Gender and the h index in psychology. Scientometrics, 105 (3), 2023–2043. https://doi.org/10.1007/s11192-015-1757-5 .

Haddow, G., & Hammarfelt, B. (2019). Quality, impact, and quantification: Indicators and metrics use by social scientists. Journal of the Association for Information Science and Technology, 70 (1), 16–26. https://doi.org/10.1002/asi.24097 .

Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items . New York: Routledge. https://doi.org/10.4324/9780203850381 .


Halevi, G., Moed, H., & Bar-Ilan, J. (2017). Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation. Review of the literature. Journal of Informetrics, 11 (3), 823–834. https://doi.org/10.1016/j.joi.2017.06.005 .

Hammarfelt, B., & Haddow, G. (2018). Conflicting measures and values: How humanities scholars in Australia and Sweden use and react to bibliometric indicators. Journal of the Association for Information Science and Technology, 69 (7), 924–935. https://doi.org/10.1002/asi.24043 .

Haustein, S. (2018). Metrics literacy [Blog post]. https://stefaniehaustein.com/metrics-literacy/

Haustein, S., & Larivière, V. (2015). The use of bibliometrics for assessing research: Possibilities, limitations and adverse effects. In I. Welpe, J. Wollersheim, S. Ringelhan, & M. Osterloh (Eds.), Incentives and performance: Governance of research organizations (pp. 121–139). Cham, CH: Springer. https://doi.org/10.1007/978-3-319-09785-5_8

Hilbert, F., Barth, J., Gremm, J., Gros, D., Haiter, J., Henkel, M., Reinhardt, W., & Stock, W. G. (2015). Coverage of academic citation databases compared with coverage of social media: Personal publication lists as calibration parameters. Online Information Review, 39 (2), 255–264. https://doi.org/10.1108/OIR-07-2014-0159 .

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102 (46), 16569–16572. https://doi.org/10.1073/pnas.0507655102 .


Hirsch, J. E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences of the United States of America, 104 (49), 19193–19198. https://doi.org/10.1073/pnas.0707962104 .

Hirsch, J. E. (2020). Superconductivity, What the h? The emperor has no clothes. Physics and Society, 49 (1), 4–9.


Hirsch, J. E., & Buela-Casal, G. (2014). The meaning of the h-index. International Journal of Clinical and Health Psychology, 14 (2), 161–164. https://doi.org/10.1016/S1697-2600(14)70050-X .

Hu, G. Y., Wang, L., Ni, R., & Liu, W. S. (2020). Which h-index? An exploration within the Web of Science. Scientometrics, 123 , 1225–1233. https://doi.org/10.1007/s11192-020-03425-5 .

Jan, R., & Ahmad, R. (2020). H-index and its variants: Which variant fairly assess author’s achievements. Journal of Information Technology Research, 13 (1), 68–76. https://doi.org/10.4018/JITR.2020010105 .

Jappe, A. (2020). Professional standards in bibliometric research evaluation? A meta-evaluation of European assessment practice 2005–2019. PLoS ONE, 15 (4), 1–23. https://doi.org/10.1371/journal.pone.0231735 .

Kagan, J. (2009). The three cultures. Natural sciences, social sciences, and the humanities in the 21st century . Cambridge, MA: Cambridge University Press. https://www.cambridge.org/de/academic/subjects/psychology/psychology-general-interest/three-cultures-natural-sciences-social-sciences-and-humanities-21st-century?format=HB&isbn=9780521518420.

Kamrani, P., Dorsch, I., & Stock, W. G. (2020). Publikationen, Zitationen und H-Index im Meinungsbild deutscher Universitätsprofessoren. Beiträge zur Hochschulforschung, 42 (3), 78–98. https://www.bzh.bayern.de/fileadmin/user_upload/Publikationen/Beitraege_zur_Hochschulforschung/2020/3_2020_Kamrani-Dorsch-Stock.pdf .

Kelly, C. D., & Jennions, M. D. (2006). The h index and career assessment by numbers. Trends in Ecology and Evolution, 21 (4), 167–170. https://doi.org/10.1016/j.tree.2006.01.005 .

Kowalski, C. J., & Mrdjenovich, A. J. (2016). Beware dichotomies. Perspectives in Biology and Medicine, 59 (4), 517–535. https://doi.org/10.1353/pbm.2016.0045 .

Krempkow, R., Schulz, P., Landrock, U., & Neufeld, J. (2011). Die Sicht der Professor/innen auf die Leistungsorientierte Mittelvergabe an Medizinischen Fakultäten in Deutschland . Berlin: iFQ–Institut für Forschungsinformation und Qualitätssicherung. http://www.forschungsinfo.de/Publikationen/Download/LOM_Professorenbefragung.pdf.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77 (6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121 .

Kruskal, W. H., & Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47 (260), 583–621. https://doi.org/10.1080/01621459.1952.10483441 .

Kulczycki, E., Engels, T. C. E., Pölönen, J., Bruun, K., Dušková, M., Guns, R., Nowotniak, R., Petr, M., Sivertsen, G., Istenič Starčič, A., & Zuccala, A. (2018). Publication patterns in the social sciences and humanities: Evidence from eight European countries. Scientometrics, 116 (1), 463–486. https://doi.org/10.1007/s11192-018-2711-0 .

Kyvik, S. (1990). Age and scientific productivity. Differences between fields of learning. Higher Education, 19 , 37–55. https://doi.org/10.1007/BF00142022 .

Lemke, S., Mehrazar, M., Mazarakis, A., & Peters, I. (2019). “When you use social media you are not working”: Barriers for the use of metrics in Social Sciences. Frontiers in Research Metrics and Analytics, 3 (39), 1–18. https://doi.org/10.3389/frma.2018.00039 .

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22 (140), 5–55.

Linde, F., & Stock, W. G. (2011). Information markets . Berlin, New York: De Gruyter Saur. https://doi.org/10.1515/9783110236101 .

Ma, L., & Ladisch, M. (2019). Evaluation complacency or evaluation inertia? A study of evaluative metrics and research practices in Irish universities. Research Evaluation, 28 (3), 209–217. https://doi.org/10.1093/reseval/rvz008 .

Mann, H., & Whitney, D. (1947). On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18 (1), 50–60. https://doi.org/10.1214/aoms/1177730491 .


Martin-Martin, A., Orduna-Malea, E., Thelwall, M., & Lopez-Cozar, E. D. (2018). Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. Journal of Informetrics, 12 (4), 1160–1177. https://doi.org/10.1016/j.joi.2018.09.002 .

Meschede, C., & Siebenlist, T. (2018). Cross-metric compatibility and inconsistencies of altmetrics. Scientometrics, 115 (1), 283–297. https://doi.org/10.1007/s11192-018-2674-1 .

Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106 (1), 213–228. https://doi.org/10.1007/s11192-015-1765-5 .

Neufeld, J., & Johann, D. (2016). Wissenschaftlerbefragung 2016. Variablenbericht – Häufigkeitsauszählung . Berlin: Deutsches Zentrum für Hochschul- und Wissenschaftsforschung. https://www.volkswagenstiftung.de/sites/default/files/downloads/Wissenschaftlerbefragung%202016%20-%20Variablenbericht%20-%20H%C3%A4ufigkeitsausz%C3%A4hlungen.pdf

Packalen, M., & Bhattacharya, J. (2015). Age and the trying out of new ideas. Cambridge, MA: National Bureau of Economic Research. (NBER Working Paper Series; 20920). http://www.nber.org/papers/w20920.

Pearson, K. (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Series 5, 50 (302), 157–175.

Penny, D. (2016). What matters where? Cultural and geographical factors in science. Slides presented at 3rd altmetrics conference, Bucharest, 2016. https://figshare.com/articles/What_matters_where_Cultural_and_geographical_factors_in_science/3969012 .

Rousseau, R., Egghe, L., & Guns, R. (2018). Becoming metric-wise: A bibliometric guide for researchers . Cambridge, MA: Chandos.

Rousseau, S., & Rousseau, R. (2015). Metric-wiseness. Journal of the Association for Information Science and Technology, 66 (11), 2389. https://doi.org/10.1002/asi.23558 .

Rousseau, S., & Rousseau, R. (2017). Being metric-wise: Heterogeneity in bibliometric knowledge. El Profesional de la Informatión, 26 (3), 480–487.

Rowlands, I., Nicholas, D., William, P., Huntington, P., Fieldhouse, M., Gunter, B., Withey, R., Jamali, H. R., Dobrowolski, T., & Tenopir, C. (2008). The Google generation: The information behaviour of the researcher of the future. Aslib Proceedings, 60 (4), 290–310. https://doi.org/10.1108/00012530810887953 .

Snow, C. P. (1959). The two cultures and the scientific revolution . Cambridge: Cambridge University Press.

Stock, W. G. (2001). Publikation und Zitat. Die problematische Basis empirischer Wissenschaftsforschung. Köln: Fachhochschule Köln; Fachbereich Bibliotheks- und Informationswesen (Kölner Arbeitspapiere zur Bibliotheks- und Informationswissenschaft; 29). https://epb.bibl.th-koeln.de/frontdoor/deliver/index/docId/62/file/Stock_Publikation.pdf.

Stock, W. G., & Stock, M. (2013). Handbook of information science . De Gruyter Saur. https://doi.org/10.1515/9783110235005 .

Sugimoto, C. R., & Larivière, V. (2018). Measuring research: What everyone needs to know . New York: Oxford University Press.

Tetzner, R. (2019). What is a good h-index required for an academic position? [Blog post]. https://www.journal-publishing.com/blog/good-h-index-required-academic-position/.

Thelwall, M., Haustein, S., Larivière, V., & Sugimoto, C. R. (2013). Do altmetrics work? Twitter and ten other social web services. PLoS ONE, 8 (5), e64841. https://doi.org/10.1371/journal.pone.0064841 .

Vehovar, V., Toepoel, V., & Steinmetz, S. (2016). Non-probability sampling. In C. Wolf, D. Joye, T. W. Smith, & Y.-C. Fu (Eds.), The SAGE handbook of survey methodology. (pp. 327–343). London: Sage. https://doi.org/10.4135/9781473957893.n22 .

Chapter   Google Scholar  

Vera-Baceta, M. A., Thelwall, M., & Kousha, K. (2019). Web of science and Scopus language coverage. Scientometrics, 121 (3), 1803–1813. https://doi.org/10.1007/s11192-019-03264-z .

Waltman, L., & van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63 (2), 406–415. https://doi.org/10.1002/asi.21678 .

Download references

Open Access funding enabled and organized by Projekt DEAL. No external funding.

Author information

Authors and affiliations

Department of Information Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany

Pantea Kamrani, Isabelle Dorsch & Wolfgang G. Stock

Department of Operations and Information Systems, Karl Franzens University Graz, Graz, Austria

Wolfgang G. Stock


Contributions

Conceptualization: PK, ID, WGS; Methodology: PK, ID, WGS; Data collection: PK; Writing and editing: PK, ID, WGS; Supervision: WGS.

Corresponding author

Correspondence to Wolfgang G. Stock .

Ethics declarations

Conflict of interest

No conflicts of interest, no competing interests.

Appendix 1: List of all questions (translated from German)

[Figure not reproduced]

Appendix 2: Data analysis plan (intuitive sketch)

[Figure not reproduced]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Kamrani, P., Dorsch, I. & Stock, W.G. Do researchers know what the h-index is? And how do they estimate its importance?. Scientometrics 126 , 5489–5508 (2021). https://doi.org/10.1007/s11192-021-03968-1


Received : 19 May 2020

Accepted : 23 March 2021

Published : 26 April 2021

Issue Date : July 2021

DOI : https://doi.org/10.1007/s11192-021-03968-1


Keywords: Bibliometrics · Researchers · Generations · Knowledge fields · Google Scholar · Metrics literacy

What is the H-index, and Does it Matter?

How do you measure how good you are as a scientist? One way is the h-index. Discover what this is, and learn about the pros and cons of using it to assess your scientific career.

Published October 20, 2023


Nick has a PhD from the University of Dundee and is the Founder and Director of Bitesize Bio , Science Squared Ltd and The Life Science Marketing Society .


The h-index is a measure of research performance and is calculated as the highest number of manuscripts from an author (h) that all have at least the same number (h) of citations. The h-index is known to penalize early career researchers and does not take into account the number of authors on a paper. Alternative indexes have been created, including the i-10, h-frac, G-index, and M-number.


How do you measure how good you are as a scientist? How would you compare the impact of two scientists in a field? What if you had to decide which one would get a grant? One method is the h-index, which we will discuss in more detail below. First, we’ll touch on why this is not a simple task.

Measuring scientific performance is more complicated and more critical than it might first seem. Various methods for measurement and comparison have been proposed, but none of them is perfect.

At first, you might think that the method for measuring scientific performance doesn’t concern you—because all you care about is doing the best research you can. However, you should care because these metrics are increasingly used by funding bodies and employers to allocate grants and jobs. So, your perceived scientific performance score could seriously affect your career.

Metrics for Measuring Scientific Performance

What are the metrics involved in measuring scientific performance? The methods that might first spring to mind are:

  • Recommendations from peers. At first glance, this is a good idea in principle. However, it is subject to human nature, so personal relationships will inevitably affect perceived performance. Also, if a lesser-known scientist publishes a ground-breaking paper, they would likely get less recognition than if a more eminent colleague published the same paper.
  • The number of articles published. A long publication list looks good on your CV, but the number of articles published does not indicate their impact on the field. Having a few publications that are well regarded by colleagues in the field (i.e., they are cited often) is better than having a long list of publications that are cited rarely or not at all.
  • The average number of citations per article published. So, if it’s citations we’re interested in, then surely the average number of citations per paper is a better number to look at. Well, not really. The average could be skewed dramatically by one highly cited article, so it does not allow a good comparison of overall performance.

The H-Index

In 2005, Jorge E. Hirsch of UCSD published a paper in PNAS in which he put forward the h-index as a metric for measuring and comparing the overall scientific productivity of individual scientists. [1]

The h-index was quickly adopted as the metric of choice by many committees and funding bodies.

How to Calculate An Author’s H-Index

The h-index calculation is pretty simple. You plot the number of papers versus the number of citations you (or someone else) have received, and the h-index is the number of papers at which the 45-degree line (citations=papers, orange) intercepts the curve, as shown in Figure 1 . That is, h equals the number of papers that have received at least h citations. For example, do you have one publication that has been cited at least once? If the answer is yes, then you can go on to your next publication. Have your two publications each been cited at least twice? If yes, then your h-index is at least 2. You can keep going until you get to a “no.”
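The stepwise check above is easy to automate: sort the citation counts in descending order and find the last rank that the citation count still reaches. A minimal sketch in Python (the function name and citation counts are illustrative, not from the article):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the citations = papers line
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([100, 2, 1]))       # 2: one blockbuster paper cannot raise h by itself
```

Sorting first is what makes the early exit valid: once one paper fails the test, every later (less-cited) paper fails it too.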


So, if you have an h-index of 20, you have 20 papers with at least 20 citations. It also means that you are doing pretty well with your science!

What is a Good H-Index?

Hirsch reckons that after 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is truly exceptional.

In his paper, Hirsch shows that successful scientists do, indeed, have high h-indices: 84% of Nobel Prize winners in physics, for example, had an h-index of at least 30. Table 1 lists some eminent scientists and their respective h-indexes.

Table 1: H-index scores of some Nobel Laureates (data from Google Scholar collected on September 27, 2023).

Advantages of the H-Index

The advantage of the h-index is that it combines productivity (i.e., number of papers produced) and impact (number of citations) in a single number. So, both productivity and impact are required for a high h-index; neither a few highly cited papers nor a long list of papers with only a handful of (or no!) citations will yield a high h-index.

Limitations of the H-Index

Although having a single number that measures scientific performance is attractive, the h-index is only a rough indicator of scientific performance and should only be considered as such.

Limitations of the h-index include the following:

  • It does not take into account the number of authors on a paper. A scientist who is the sole author of a paper with 100 citations should get more credit than one on a similarly cited paper with 10 co-authors.
  • It penalizes early-career scientists. Outstanding scientists with only a few publications cannot have a high h-index, even if all of those publications are ground-breaking and highly cited. For example, Albert Einstein would have had an h-index of only 4 or 5 if he had died in early 1906 despite being widely known as an influential physicist at the time.
  • Review articles have a greater impact on the h-index than original papers since they are generally cited more often.
  • The use of the h-index has now broadened beyond science. However, it’s difficult to compare fields and scientific disciplines directly, so, really, a ‘good’ h-index is impossible to define.

Calculating the H-Index

There are several online resources and h-index calculators for obtaining a scientist’s h-index. The most established are ISI Web of Knowledge and Scopus, both of which require a subscription (probably via your institution), but there are free options too, one of which is Publish or Perish .

You might get a different value if you check your own (or someone else’s) h-index with each of these resources because each uses a different database to count publications and citations. ISI and Scopus use their own databases, and Publish or Perish uses Google Scholar. Each database has different coverage and will therefore give a different h-index value. For example, ISI has good coverage of journal publications but poor coverage of conferences, while Scopus has better conference coverage but weaker journal coverage before 1992. [2]

Is the H-index Still Effective?

A paper published in PLoS One in 2021 concluded that while a scientist’s h-index previously correlated well with the number of scientific awards, this is no longer the case. This lack of correlation is partly because of the change in authorship patterns, with the average number of authors per paper increasing. [3]

Are Alternatives to the H-Index Better?

Let’s take a look at some of the alternative measures available.

The H-Frac Index

The authors of the PLoS One paper suggest fractional analogs of the h-index are better suited for the job. [3] Here, the number of authors on a paper is also considered. One such measure is the h-frac, where citation counts are divided by the number of authors. However, this solution could also be manipulated to the detriment of more junior researchers, as minimizing the number of authors on a paper would maximize your h-frac score. This could mean more junior researchers are left off papers where they did contribute, harming their careers. 
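As a rough sketch of the idea, assume "fractional citations" means a paper's citation count divided by its author count, with the usual h rule applied afterwards; the exact h-frac definition in the PLoS One paper may differ in detail, and the data below are invented:

```python
def h_frac(papers):
    """papers: (citations, n_authors) pairs.
    Apply the usual h rule to citations divided by author count."""
    fractional = sorted((c / n for c, n in papers), reverse=True)
    h = 0
    for rank, f in enumerate(fractional, start=1):
        if f >= rank:
            h = rank
        else:
            break
    return h

# A sole-authored paper counts in full; a 10-author paper counts one tenth.
print(h_frac([(100, 1), (100, 10), (30, 10)]))  # 3 (fractional counts: 100, 10, 3)
```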

The G-Index

This measure looks at the most highly cited articles of an author and is defined as "the largest number n of highly cited articles for which the average number of citations is at least n." [4] This measure allows highly cited papers to bolster an author's less-cited papers.
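The quoted definition translates directly into code: the average of the top n citation counts is at least n exactly when their sum is at least n squared. A sketch with invented numbers:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers average >= g citations
    (equivalently, their citation sum is at least g * g)."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

cites = [40, 5, 3, 1, 1]
print(g_index(cites))  # 5: the big paper's surplus citations lift the average
                       # (the same list has an h-index of only 3)
```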

The i-10 Index

Developed by Google Scholar, this index is the number of articles published by an author that have received at least 10 citations. This measure, along with the h-index, is available on Google Scholar.

The M-Value

The m-value was developed to try to balance the scales for early-career researchers. It corrects the h-index for time, allowing for easier comparison of researchers with different seniority and career lengths. It is calculated as the h-index divided by the number of years since the researcher's first publication.
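Both of these measures reduce to one-liners; a sketch (function names and numbers are illustrative, and the m-value here divides by career length in years):

```python
def i10_index(citations):
    """Google Scholar's i10: papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

def m_value(h, career_years):
    """h-index divided by career length in years."""
    return h / career_years

print(i10_index([25, 12, 10, 9, 3]))  # 3
print(m_value(20, 20))                # 1.0, i.e. Hirsch's 'good' after 20 years
```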

The Problem with Measuring Performance

While these numbers can be helpful in giving a flavor of a scientist's performance, they are all flawed. Many are biased towards researchers who publish often and are further into their careers. Many of these indexes can also be manipulated, for example by adding authors who didn't contribute to a paper.

In reality, it isn’t possible to distill a researcher’s contributions into a single number. They may not have published many papers, but the papers they have published may have made vital contributions. Or their real strength may lie in training the next generation of researchers. When looking at these numbers, we should remember that they reflect only one small part of a researcher’s contributions and value, and they are not the be-all and end-all.

The H-Index Summed Up

The h-index provides a useful metric for scientific performance, but only when viewed in the context of other factors. While other measures are available, including the i-10 index, the G-index, and the h-frac index, these also have limitations. Therefore, when making decisions that are important to you (funding, job, finding a PI), be sure to read through publication lists, talk to other scientists (and students) and peers, and take account of career stage. So, remember that an h-index is only one consideration among many—and you should definitely know your h-index—but it doesn’t define you (or anyone else) as a scientist.

  • Hirsch JE. (2005) An index to quantify an individual’s scientific research output . PNAS 102(46):16569–72
  • Meho LI, Yang K. (2007) Impact of data sources on citation counts and rankings of LIS faculty: Web of science versus scopus and google scholar . JASIST 58(13):2105–25
  • Koltun V, Hafner D. (2021) The h-index is no longer an effective correlate of scientific reputation . PLoS One . 16(6):e0253397
  • Wikipedia. g-index . Accessed 25 September 2023

Originally published April 2, 2009. Reviewed and updated October 2023.


Reader comments:

It seems doubtful whether all fields of research can be effectively measured in this way. I am a First World War historian. If I want to be cited a lot, I will write about very popular questions (masculinity, identity, space, etc., at the moment). If I go off into virgin territory and explore, for the first time ever, say, comparative studies of First World War popular music, I will get far fewer citations for a good while, and this may seem a strange reward for asking rarer questions. Whereas asking rare questions, in history, is a key skill (see Keith Thomas, for example). This is one of the reasons that scholarly human sciences organizations in France, where I live, often refuse to use bibliometric indexes of this sort.

I’ve recently proposed a novel index for evaluation of individual researchers that does not depend on the number of publications, accounts for different co-author contributions and age of publications, and scales from 0.0 to 9.9 ( https://f1000research.com/articles/4-884 ). Moreover, it can be calculated with the help of freely available software. Please, share your thoughts on it. Would you use it along with the h-index, or maybe even instead of it, for evaluating your peers, potential collaborators or job applicants? If you’ve tried it on the people you know, do you find the results fair?


Calculate your h-index


Use metrics  to provide evidence of:

  • engagement with your research, and
  • the impact of your research.


The h-index is a measure of the number of publications published (productivity), as well as how often they are cited .

h-index = the number of publications with a citation number greater than or equal to h.

For example, 15 publications cited 15 times or more is an h-index of 15.

Read more about the h-index, first proposed by J.E. Hirsch, as An index to quantify an individual's scientific research output .

Scopus

  • Do an author search for yourself in Scopus
  • Click on your name to display your number of publications, citations and h-index.

Google Scholar

  • Create a Google Scholar Citations Profile
  • Make sure your publications are listed.

Web of Science

Create a citation report of your publications that will display your h-index in Web of Science .

Watch Using Web of Science to find your publications and track record metrics 

h-index tips

  • Citation patterns vary across disciplines . For example, h-indexes in Medicine are much higher than in Mathematics
  • h-indexes are dependent on the coverage and related citations in the database. Always provide the data source and date along with the h-index
  • h-indexes do not account for different career stages
  • Your h-index changes over time . Recalculate it each time you include it in an application

Provide additional information about your metrics when talking about your h-index.

Example statement

A statement about your h-index could follow this format:

"My h-index, based on papers indexed in Web of Science, is 10. It has been 5 years since I finished my PhD. I have 4 papers (A, B, C, D) with more than 20 citations and 1 paper (E) with 29 citations (Web of Science, 05/08/19). I also have an additional 3 papers not indexed by WoS, with 29 citations based on Scopus data (01/12/20)"

Other indices

  • i10 index: the number of papers with at least 10 citations. Available from a Google Scholar Citations Profile; can also be calculated manually
  • g-index: a modification of the h-index that gives more weight to highly cited papers
  • m-Quotient: accounts for career length; the h-index divided by the number of years since an author's first publication
  • h-index and Variants: an overview of various indices, including their advantages and disadvantages
  • Last Updated: May 31, 2024 11:40 AM
  • URL: https://guides.library.uq.edu.au/for-researchers/h-index

Syracuse University Libraries

Research Metrics


Introduction: Author-Level Metrics


Author-level metrics measure the impact of the scholarly output of a single researcher. Author-level metrics are designed to help researchers assess the cumulative impact of their work, rather than the impact of a single publication. All author-level metrics are derived from article-level metrics: they aggregate or summarize the impact of an author's publications.


Frequently-used Metrics

h-index: measures the cumulative impact of a researcher's output by looking at the number of citations a work has received

i10-index: created by Google Scholar, it measures the number of publications with at least 10 citations

g-index: aims to improve on the h-index by giving more weight to highly-cited articles

e-index: The aim of the e-index is to differentiate between scientists with similar h-indices but different citation patterns

For more information on these and additional metric options, visit  Publish or Perish  and  Metrics Toolkit .
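The guide gives no formula for the e-index; as it is commonly defined (Zhang's 2009 proposal), it is the square root of the "excess" citations in the h-core, that is, the citations beyond the h² already accounted for by the h-index. A sketch under that assumption, with invented counts:

```python
import math

def e_index(citations):
    """sqrt of h-core citations in excess of h*h; separates authors
    with equal h but differently cited top papers."""
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    return math.sqrt(sum(ranked[:h]) - h * h)

# Both authors have h = 3, but very different citation totals:
print(e_index([50, 40, 30]))  # sqrt(120 - 9), about 10.54
print(e_index([3, 3, 3]))     # 0.0: nothing beyond the h-index's h^2
```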

Limitations

As a general rule of thumb: "If an academic shows good citation metrics, it is very likely that he or she has made a significant impact on the field. However, the reverse is not necessarily true. If an academic shows weak citation metrics, this may be caused by a lack of impact on the field, but also by one or more of the following:

  • Working in a small or newly developing field (therefore generating fewer citations in total);
  • Publishing in a language other than English (LOTE; effectively also restricting the citation field);
  • Publishing mainly (in) books."

Source: Anne-Wil Harzing,  Publish or Perish

Note on Citation Analysis

Citation analysis is a quantifiable measure of academic output. Users need to be aware of the limitations and incongruities of citation metrics. Library subscription databases and Google Scholar do not correct errors in citing papers. This means that one paper may be cited many different ways and appear as separate entries in these tools. Also, author and institutional naming inconsistencies complicate these analyses. Comparisons between these tools should be avoided. The databases use different sources to generate data and some are more comprehensive than others.  

The Author Citation Report in Web of Science will generate an h-index, total number of publications, sum of times cited, and citing articles in both a text and graphical format. The report also includes a breakdown of each publication, cites per year, and average citations of that publication per year. Caveat: this data is extracted from Web of Science publications, so it is recommended to also use one of the other author-level metrics tools for a more comprehensive view of an author's work.

Searching an Author in Web of Science:

  • start with the "Author Search," adding name and initial(s) as indicated, then add "Research Domain" (if relevant) and select the organization
  • at the results page, click on "Create Citation Report"
  • the report can be exported by saving to an Excel or text file

Adapted with permission from the  University of Oklahoma Research Impact Metrics research guide .

Your Google Scholar profile will include a list of the articles you have entered, with "cited by" links for each of them. Google Scholar will display a graph of your citation activity and calculate your total number of citations, h-index, and i10-index. The profile also includes a "recent" version of those three metrics, based on activity in the last five years.

Creating Your Profile in Google Scholar:

  • click "My Profile" at the top and either log in to your Google account (or create a new one)
  • you will be prompted to enter some brief biographical information
  • add your articles and decide how to handle updates

Note that your profile is private by default. You can opt to make it public on your profile page.

Mendeley is a reference manager and academic social network. Creating a profile allows researchers to interact with colleagues and track some metrics on the use of their work. Please note that although Mendeley was purchased by Elsevier in 2013, it uses Scopus to generate its metrics and include non-Elsevier materials.

Creating Your Mendeley Profile:

  • click "Create a free account" and enter some basic information
  • after your profile is created, you can add your research interests, biography, and publications
  • you can view your publication count, citations, views, and readers by clicking "Stats" in the top navigation bar

Use Publons (available within Web of Science) to track your publications, citation metrics, peer reviews, and journal editing work in a single profile. Your publications will be imported from Web of Science, ORCID, or your citation manager (e.g. EndNote or Mendeley), along with citation metrics from the Web of Science Core Collection. Download a record summarizing your scholarly impact as an author, editor and peer reviewer.

Creating Your Publons Profile:

  • register with your email address, ORCID, Google account, LinkedIn, or WeChat

Controversial Tools: Academia.edu and ResearchGate

Academia.edu and ResearchGate both seem attractive to scholars, but they also have their share of disadvantages and downsides. We think it is especially important to place these two sites into context and preface them with important considerations.

Consideration #1: You Are Not the Customer

Similar to many other academic social networks, you are not the customer when you interact with these companies, even though you may feel like one. Instead, you are the product that these services seek to monetize and/or “offer up” to advertisers. We do not fault businesses for making money; that is the imperative for them to exist. But we also see Academia.edu and ResearchGate as an extension of those who monetize what many scholars believe should be freely shared. Importantly, if these companies are bought, sold, or go out of business, what would happen to the content you have placed there? This is one reason why it is advisable to first upload items you want to share – articles, preprints, postprints, conference posters, proceedings, slide decks, lesson plans, etc. – to SURFACE, SU’s institutional repository where you can deposit your work. The items in SURFACE are indexed by Google and Google Scholar, so they are searchable, findable, and downloadable by researchers around the world. SU Libraries maintains the platform, the content, and the links. Most importantly, maintaining and preserving content is one of the core missions of SU Libraries. We are not going out of business, so your content on SURFACE will not go away either.

Consideration #2: You Might Be Breaking the Law

Another consideration with these particular services is the legality of uploading your work to these platforms. Most publishers require authors to sign a publication agreement/copyright transfer prior to a manuscript being published, which outlines what you can/cannot do with your own work in the future. Uploading your work –  especially a publisher’s pdf  – to a site such as Academia.edu or ResearchGate may be a violation of the terms of the publishing agreement, whereas uploading it to an institutional repository may not be (or can be negotiated not to be). Several years ago,  a major academic publisher  actively went after Academia.edu, requiring them to take down all of the publisher’s content that had been illegally uploaded, much to the surprise and dismay of the authors. And Academia.edu  is not the only target . In 2018, ResearchGate was set to take down  nearly 7 million articles  or about 40% of their content.

Consideration #3: Understand the Privacy Implications

Finally, some of these sites’ tactics are troubling from the standpoint of privacy and intellectual freedom. Personally and professionally, many find it distressing that a private company, which does not adhere to the same  professional ethics  as librarians and other scholars do, collects information about who is reading what. Academia.edu, in particular, then offers to share that information with you if you subscribe to their “premium service.” And while their analytics dashboard does not reveal readers’ names, it may provide enough information for you to know exactly who read your work. You may decide not to pay for Academia.edu’s premium service, but even so – what you view and download will still be tracked. This may not be troubling to you (the “I’m not doing anything wrong, so I don’t care” argument), but we think it sets a bad precedent. What about tracking researchers who study terrorism? Or whistleblowing? Or even climate change? How might people at these academic social media companies create profiles and make judgments about you based on what you are reading? And what will they do with the information they collect, especially if asked for it by government entities?

Additional Readings and Resources:

  • A Social Networking Site is Not an Open Access Repository  by Katie Fortney and Justin Gonder
  • I Have a Lot of Questions: RG, ELS, SN, STM, and CRS  by Lisa Janicke Hinchliffe
  • Dear Scholars, Delete Your Account At Academia.Edu  by Sarah Bond
  • Academia, Not Edu  by Kathleen Fitzpatrick
  • Reading, Privacy, and Scholarly Networks  by Kathleen Fitzpatrick
  • Upon Leaving Academia.edu  by G. Geltner
  • Should You #DeleteAcademiaEdu  by Paolo Mangiafico
  • Should This Be the Last Thing You Read on Academia.edu?  by Gary Hall (downloads as a .pdf)

Adapted with permission from the University of Oklahoma Research Impact Metrics research guide.

*Controversial Tool (go to the Controversial Tools tab for more information)

Academia.edu (like ResearchGate) is designed to be an online research community with a social network component. You can upload papers and follow other scholars. Before uploading any of your papers, please ensure that you have permission from the publisher to do so. This falls under "Green" Open Access for author self-archiving. Remember your audience will likely run into a paywall in accessing this content. Additionally, Academia.edu does not offer traditional metrics but does show document and page views.  Please note that, although the site has an .edu domain, it is not associated with an educational institution.

Creating Your Academia.edu Profile:

  • sign up using Google, Facebook, or an email address
  • after entering your login information, you will be prompted to designate your status as researcher (faculty, graduate student, post-doc, etc.)
  • you will then be prompted with suggestions of other researchers to follow
  • after your profile has been created, you can add your publications and a biography

ResearchGate (like Academia.edu) is designed to be an online research community and functions as a social network for researchers. Users can share updates about their research and full papers, and they can follow others to receive updates about their work. In addition to reads, citation counts, profile views, and h-index, ResearchGate has its own metric called the RG score. Because it is generated by a proprietary algorithm, it is not clear how this number is calculated; it should therefore be used with caution.

Creating Your ResearchGate Profile:

  • click "Join for Free" and select "Academic"
  • you can then confirm your authorship
  • when selecting your publications, you will see a check box selected by default that will automatically send your coauthors an invitation to join ResearchGate, so be sure to un-check this box if you do not want to send these emails
  • add research interests and skills
  • ResearchGate will then suggest that you follow others from your institution

Understanding the h-Index: What It Means for Your Career in 2024

Do you know a researcher with an h-index of 20? That means they have published at least 20 papers, each cited at least 20 times. Created by the physicist Jorge E. Hirsch in 2005, this metric is key to measuring research output and scholarly influence, and it matters more than ever as we head into 2024 [1][3].

We recommend the h-index because it considers both how many articles you publish and how much your work matters. It reflects the quality and influence of your research [2]. In a world where citations count, this metric helps demonstrate your impact. It also supports career advancement and funding applications.

As academia evolves, the h-index will remain an important tool. It helps researchers stand out in their fields.

  • The h-index measures research output and influence.
  • Created by Jorge E. Hirsch, it is central to evaluating academic performance.
  • A score of 20 means 20 published papers, each cited at least 20 times.
  • Citation analysis tools calculate the h-index automatically across various databases.
  • The h-index shapes career opportunities and funding decisions in academia.

The h-index is an important metric in academia that indicates a researcher's productivity and influence. The physicist J. E. Hirsch introduced it in 2005 to evaluate scholarly work. The h-index is defined as the largest number h such that the author has published h papers, each cited at least h times [3].

The metric offers a balanced view by ignoring extremes, which can otherwise distort a scientist's true impact [4].

The h-index was created to measure a scientist's influence. It focuses on the number of papers that have reached a given citation count [4]. For example, an author with an h-index of 20 has at least 20 papers cited 20 times each [5].

Since its introduction, the h-index has become important across many scientific fields. It helps evaluate research performance.

The h-index is more than a number: it influences funding, hiring, and how research performance is perceived [4]. Different career stages, such as assistant or full professor, have typical h-index benchmarks in each field [5].

Tools such as Scopus, Web of Science, and Google Scholar help calculate this metric, but the number can vary depending on the database used [3].

The h-index is a key way to measure a scholar's impact and productivity, based on counting how often their works are cited. First, list your publications and their citation counts. Then sort the articles by number of citations.

You can then find the h-index by locating the last article whose citation count is at least as large as its rank. This method captures both how many citations each work receives and their quality.

To calculate the h-index:

  • Start by listing all of your publications.
  • Count the citations for each article.
  • Sort the list from most cited to least cited.
  • Find the last article whose citation count is greater than or equal to its rank in the list.
  • That rank is your h-index.
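The steps above can be sketched in a few lines of code. This is an illustrative example, not an official implementation; the function name and the sample citation counts are invented:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    # Sort citation counts from most cited to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; rank r counts toward h as long as
    # the paper at that rank still has at least r citations.
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited [10, 8, 5, 3, 1] times: the 3rd paper has 5 >= 3
# citations, but the 4th has only 3 < 4, so the h-index is 3.
print(h_index([10, 8, 5, 3, 1]))  # 3
```

Note that the result depends only on the citation counts, not on which journals the papers appeared in, which is exactly why different databases (with different citation coverage) report different h-indices for the same author.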

Databases such as Web of Science and Scopus make it easy to find the h-index. Researchers can also check manually using SciFinder or Google Scholar. However, these sources can give different results because they cover different journals [6]. For example, Scopus does not include articles from before 1995, which affects the h-index [6].

How Different Databases Measure the h-Index

Different databases can give different h-index values because they index different journals, so users should be careful when comparing results from Web of Science and Scopus [6]. Citation counts vary widely across these platforms, which changes the h-index.

Charts in databases such as Scopus help make this data easier to interpret, letting researchers see their impact clearly.

Studies have examined how the h-index functions in countries such as Pakistan, showing that it is a key tool for measuring research output and impact [7]. As research evolves, understanding these metrics is essential for communicating research contributions [6][7].

Why the h-Index Matters Among Academic Metrics

The h-index is an academic metric that summarizes research performance. Jorge E. Hirsch introduced it in 2005 to measure both the quantity and the impact of a researcher's work [8]. It is essential for understanding a scholar's lasting influence in their field.

The h-index reflects a researcher's overall output and long-term impact. An author with an h-index of 5 has published at least 5 papers, each cited 5 times [9]. This highlights the true impact of research better than simply counting papers.

For academic hiring, the h-index offers a standardized way to gauge the depth and breadth of research [9]. But it is important to understand its limitations, such as differences between citation databases and the problem of self-citation [8].

The h-index stands out among metrics by combining quantity and impact in a single number [9]. This differs from looking only at how many papers you have written or how many citations you have received [9]. The g-index, for example, emphasizes highly cited articles, which may not help early-career researchers [8].

What counts as good or outstanding varies by field [10]. After 20 years of research, an h-index of 20 is considered good, 40 outstanding, and 60 truly exceptional [10]. This shows the importance of knowing the typical h-index in your field when evaluating academic performance.

Comparing h-Index Values

The h-index is central to academic research. It measures not only how many papers you have written but also their influence, making it an important tool for evaluating a researcher's success.

The h-index matters for career advancement. It influences who gets hired and who wins grants. A high h-index signals achievement in your field and opens doors to better positions and projects.

When your h-index reaches 20, you have published 20 papers, each cited at least 20 times. This shows that you are not only productive but also respected. Hiring committees and grant reviewers watch this score closely, so it is crucial for your career [11][12].

A high h-index means you are seen as a leader in your field, which can bring more opportunities to publish important research. Universities and funding agencies use the h-index to assess departmental performance and proposal quality [13].

Writing first-rate research papers and collaborating with well-known authors can raise your h-index, which in turn strengthens your academic reputation.

To raise your h-index, focus on publishing in top journals and collaborating with leading scholars. A high h-index demonstrates your productivity and influence; an h-index of 7, for example, means you have 7 papers each cited at least 7 times. This underlines how important choosing the right publication venue is for success [14][15].

Publication Strategies for Maximizing Impact

Choosing the right journal is key to earning more citations: well-known, respected journals increase visibility and citation rates [15]. Open access journals often attract more citations, helping to raise your h-index [14]. Prioritizing quality over quantity when publishing will gradually improve your h-index.

Learn more about choosing the right journal for manuscript submission.

Active participation in the research community also helps raise your h-index. Attending conferences and networking increase your visibility and lead to more collaborations [14]. Working with leading researchers can greatly increase your citations and enhance your academic reputation, helping to improve your h-index [15].

The Role of Bibliometrics in the h-Index

Bibliometrics is central to academia, providing statistics about scientific papers and using citations to gauge a study's impact and productivity. This approach underlies the h-index, a metric popular since 2005 that considers both the number of an author's papers and how often those papers are cited [16].

Defining Bibliometrics and Its Importance

Bibliometrics uses indicators such as citation counts and citation trends to measure research impact. This information helps researchers understand where they stand within their fields and in the broader academic community. For example, between 1994 and 2004, h-index scores for scientists in natural resources ranged from 1 to 29 [16], showing the wide range of impact across fields and helping researchers plan future work.

How Bibliometrics Shapes Research Impact

Bibliometric data gives a clear view of a researcher's work and of scientific trends. The Italian media's early discussions of the h-index and journal impact factors [17], for example, show how such metrics shape perceptions of scientific credibility. Using these metrics helps identify strengths but can also introduce bias, especially when they focus too narrowly on one measure. That is why a balanced view, combining the h-index with other metrics, is needed to fully understand a researcher's impact.

The influence of bibliometrics on research impact

Common Misconceptions About the h-Index

The h-index is a popular measure of academic achievement, but it has flaws. It often receives too much attention, creating a distorted picture of a researcher's work. The index does not tell the whole story, especially in fields with different publication practices.

Some important research may go unnoticed as a result, which shows the limits of the h-index.

Many people treat the h-index as the sole measure of academic success, but this overlooks the quality and originality of research. Publication habits and citation patterns vary by field, making the h-index an oversimplification.

Jorge Hirsch, the creator of the h-index, has himself argued that additional measures of success deserve attention [18].

Limitations of the h-Index as a Measure of Success

The h-index has limitations, especially across different research areas. In the sciences it is used to gauge impact, but in the social sciences and humanities it is less reliable. Compared with the h-index, the ISI Journal Impact Factor (JIF) has its own strengths and weaknesses [18].

There are also concerns about self-citation and citation manipulation, which call the credibility of citation-based metrics into question [19]. Understanding these issues lets scholars evaluate success more carefully.

Tools for Tracking and Calculating Your h-Index

For scholars who want to demonstrate their research impact, knowing how to track your h-index is key. Many citation analysis tools are available to help, and top databases such as Scopus, Web of Science, and Google Scholar are essential.

Overview of the Top Databases: Scopus, Web of Science, Google Scholar

Each database has features that make tracking the h-index easier. A major strength of Scopus is its extensive citation data from peer-reviewed journals. Web of Science offers detailed citation tracking and analysis, helping researchers obtain accurate h-index scores. Google Scholar is easy to use and, by counting citations from a wide range of sources, lets users gauge the impact of their scientific work.

It is important for researchers to understand these top databases; they genuinely help in tracking the h-index [20].

How to Use Citation Analysis Tools Effectively

To get the most out of citation analysis tools, researchers should monitor their publication metrics closely. These tools let them quickly see how their h-index changes and what that means. Features that break down citations by year and type can reveal the patterns shaping their research impact.

Researchers can also consult other indices alongside the h-index. This helps compensate for its shortcomings, such as favoring older work or large collaborations [21].

Real-Life Examples and Case Studies

Real-life h-index examples show how top researchers build their careers. Created in 2005, the h-index is a key tool for measuring a researcher's output, considering both how many articles they publish and how often others cite their work. Since its creation, it has inspired newer tools such as the g-index and m-index [22].
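The m-index mentioned above is commonly defined, following Hirsch, as the h-index divided by the number of years since a researcher's first publication, which makes careers of different lengths easier to compare. A minimal sketch (the function name is ours):

```python
def m_index(h, years_since_first_paper):
    """m-index (Hirsch's m quotient): h-index divided by career length in years."""
    if years_since_first_paper <= 0:
        raise ValueError("career length must be a positive number of years")
    return h / years_since_first_paper

# An h-index of 20 after 20 years of publishing gives m = 1.0,
# matching the rough benchmark for a solid career.
print(m_index(20, 20))  # 1.0
```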

Analyzing the h-Indices of Prominent Researchers

Top researchers usually earn high h-index scores through standout work. For example, a chemist with an h-index of 25 has at least 25 papers each cited 25 times. This shows that the more you publish, the more citations you need to make an impact [23]. On average, a scientist with a successful career reaches an h-index of about 20 after 20 years [23].

The h-index makes it easier to understand research output across different fields [22].

Case Study: Stephen W. Hawking's h-Index

Stephen W. Hawking, a giant of theoretical physics, illustrates how the h-index works. His body of work and publications made him a top scientist; each paper he wrote added to his h-index, showing that hard work and high-quality research get noticed.

Researchers can learn much from Hawking's career about balancing publication quality and quantity [23]. His h-index varies across databases, illustrating the challenges of measuring research impact [22].

Future Trends in Academic Metrics and the h-Index

The world of academic metrics is changing fast, thanks to new technologies and new ways of working. Future trends in the h-index will give us deeper insight into how well research performs. As the h-index is used more and more, understanding where it is headed is essential for scholars and institutions.

Emerging Technologies and Their Impact on Citation Analysis

New technologies in data analytics and artificial intelligence are changing how we view the h-index. These tools will make research impact easier to understand and help reveal what truly matters in a study.

For example, better algorithms will organize citations more effectively, meaning more accurate scores for a study's importance. With more data and new perspectives, results will become more precise.

Predictions for the Evolution of the h-Index

As research practices change and spread, the h-index is expected to become more nuanced. Factors such as open access publishing and greater collaboration will shape future trends in the h-index. Metadata quality and database coverage are likely to play a major role in h-index scores.

Research shows that top researchers often publish in major journals and collaborate widely. This may change how great research is recognized [24].

Understanding the h-index is key for researchers gauging their success and influence in 2024. It is not just about how many papers you publish but about your impact on your field. Scholars can build their reputations by choosing where to publish and by staying active in their fields [25][26].

A high h-index genuinely helps careers advance. As new technologies emerge, the way research is measured may also change, so keeping up with these developments is vital for your career [25].

Tools such as Avidnote make it easier to track your progress. By raising your h-index, you will stand out in your field and your work will earn the respect it deserves.

Why is the h-index so important for academic career development?

Other frequently asked questions:

  • What strategies can improve my h-index?
  • How do different databases affect h-index values?
  • What role does bibliometrics play in the h-index?
  • What are common misconceptions about the h-index?
  • What tools can I use to track and calculate my h-index?
  • Can you give a real-life example of h-index analysis?
  • What future trends are expected for academic metrics such as the h-index?

  1. https://paperpile.com/g/h-index/
  2. https://scientific-publishing.webshop.elsevier.com/publication-recognition/what-good-h-index/
  3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10771139/
  4. https://mdanderson.libanswers.com/faq/26221
  5. https://paperpile.com/g/what-is-a-good-h-index/
  6. https://www.wur.nl/en/article/how-do-i-calculate-my-h-index.htm
  7. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10025721/
  8. https://mindthegraph.com/blog/h-index/
  9. https://library.cumc.columbia.edu/kb/author-impact-metric-h-index
  10. https://www.linkedin.com/pulse/what-good-h-index-how-boost-yours-young-researcher-fikk-6lrgc
  11. https://www.linkedin.com/pulse/navigating-academic-impact-unraveling-h-index-its-hamed-taherdoost-muaue
  12. https://www.aje.com/arc/what-is-the-h-index/
  13. https://www.editverse.com/understanding-your-h-index-scholarly-impact/
  14. https://www.enago.com/academy/how-to-successfully-boost-your-h-index/
  15. https://blog.degruyter.com/how-to-improve-your-h-index/
  16. https://www.sciencedirect.com/science/article/abs/pii/S1751157707000338
  17. https://blogs.lse.ac.uk/impactofsocialsciences/2022/07/04/bibliometrics-at-large-the-role-of-metrics-beyond-academia/
  18. https://harzing.com/publications/white-papers/google-scholar-h-index-versus-isi-journal-impact-factor
  19. https://link.springer.com/article/10.1007/s11192-020-03417-5
  20. https://medium.com/@aliborji/how-good-is-h-index-cde224b3870d
  21. https://www.editage.com/insights/looking-forward-a-new-formula-designed-to-measure-your-future-h-index/
  22. https://www.sciencedirect.com/science/article/abs/pii/S1751157709000339
  23. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3737023/
  24. https://www.nature.com/articles/s41598-023-46050-x
  25. https://www.psychologicalscience.org/observer/the-h-index-in-science-a-new-measure-of-scholarly-contribution
  26. https://academic.oup.com/ajcp/article/151/3/286/5139644

