Does the Number of Authors Matter? Data from 101,580 Research Papers

I analyzed a random sample of 101,580 full-text research papers uploaded to PubMed Central between 2016 and 2021 to explore the influence of the number of authors of a research paper on its quality.

I used the BioC API to download the data (see the References section below).
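For readers who want to retrieve similar data, here is a minimal Python sketch of how a single article could be requested from the BioC service. The endpoint path and the example PMC ID are assumptions for illustration; check the BioC documentation cited in the references before relying on them.

```python
from urllib.parse import quote
from urllib.request import urlopen

# Assumed base path of the BioC-PMC RESTful service (verify against the docs)
BIOC_BASE = "https://www.ncbi.nlm.nih.gov/research/bionlp/RESTful/pmcoa.cgi"

def bioc_url(pmcid: str, fmt: str = "BioC_json", encoding: str = "unicode") -> str:
    """Build the RESTful URL for one PMC full-text article in BioC format.
    The path layout (format / PMC ID / encoding) is an assumption."""
    return f"{BIOC_BASE}/{fmt}/{quote(pmcid)}/{encoding}"

# Example fetch (network call commented out to keep the sketch self-contained;
# "PMC1234567" is a placeholder, not a real article):
# with urlopen(bioc_url("PMC1234567")) as resp:
#     document = resp.read().decode("utf-8")
```

Looping such requests over a list of PMC IDs is how a full-text sample like the one above could be assembled.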

Here’s a summary of the key findings:

1. The median research paper had 6 authors, with 90% of papers having between 1 and 15 authors.

2. The median number of authors of a research paper increased from 3 to 6 over the past 20 years. The data also show that single-authored papers are becoming less popular: the percentage of single-authored papers dropped from 33.9% in 2002 to 2.1% in 2021.

3. Descriptive studies (such as case reports, case series, and cross-sectional studies) usually require less work and therefore have fewer authors than analytical studies (such as cohort, case-control, and experimental studies).

4. The number of authors does not influence the quality of the journal in which the research paper is published. In fact, the median single-authored paper is published in a journal with an impact factor of 3.11, compared to 3.15 for the median multiple-authored paper.

5. Single-authored research papers receive 34.7% fewer yearly citations than multiple-authored papers (p < 0.01).

1. How many authors can a research paper have?

The following is a histogram representing the distribution of the number of authors:

[Histogram: distribution of the number of authors per research paper]

The graph shows that:

  • The distribution of authors has a right skew, as expected.
  • Most research papers have fewer than 10 authors.
  • It is somewhat exceptional for a research paper to have more than 15 authors.

Here’s a table that summarizes these data in numbers:

In our sample of 101,580 research papers, the median research paper was written by 6 authors, and the majority had between 4 and 9 authors. Only 5% were written by single authors (n=5,280).
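These summary numbers are straightforward to reproduce once the per-paper author counts are extracted; a stdlib-only sketch, with toy data standing in for the real sample:

```python
import statistics

def author_summary(counts):
    """Median, 5th/95th percentiles, and share of single-authored papers."""
    qs = statistics.quantiles(counts, n=20)  # 19 cut points: 5%, 10%, ..., 95%
    return {
        "median": statistics.median(counts),
        "p5": qs[0],      # 5th percentile
        "p95": qs[-1],    # 95th percentile
        "single_pct": 100 * sum(c == 1 for c in counts) / len(counts),
    }

# Toy data standing in for the real per-paper author counts:
summary = author_summary(list(range(1, 21)))
# summary["median"] == 10.5, summary["single_pct"] == 5.0
```

The same dictionary computed over the full sample gives the median of 6 and the 1–15 range reported above.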

A physics paper had 2,902 authors, which was the largest number of contributors to a single work in our sample (here’s the link to the paper on PubMed).

2. Number of authors: 20-year trend

The 20-year trend shows that the median number of authors increased from 3 in 2002 to 6 in 2012 and remained constant for the past decade:

Looking at the trend of single-authored papers below, we see that they have been declining in popularity over the past 20 years:

Perhaps collaboration is more encouraged nowadays, especially for larger projects.

If this is the case, then the number of authors should differ across study designs, as different types of studies require different amounts of work. This is what we discuss next.

3. Number of authors of different article types

The following table shows the median number of authors of different article types:

The data agree with the hypothesis: descriptive studies (such as case reports, case series, and cross-sectional studies) usually require less work and therefore have fewer authors, with medians between 5 and 6, than analytical studies (such as cohort, case-control, and experimental studies), with medians between 7 and 8.

4. Number of authors of papers in different journals

To study the influence of the number of authors on the quality of the journal in which the paper is published, I used linear regression to predict the journal impact factor given the number of authors.

Here’s the outcome of that model:

The model shows that research papers with more authors are published in slightly higher-quality journals: an increase in 1 author is associated with an increase of 0.03 in the journal impact factor. Although statistically significant, this effect is practically negligible.

In addition, when comparing the median journal impact factor for single-authored and multiple-authored research papers, I found a difference of only 0.04 (3.11 and 3.15, respectively).
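As an illustration of the kind of model used here (one predictor, simple least squares), the following sketch fits impact factor against author count on made-up numbers; the closed-form slope plays the role of the reported 0.03-per-author effect. This is not the real dataset or the exact fitting routine used above.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx        # slope: change in impact factor per extra author
    a = my - b * mx      # intercept
    return a, b

# Toy data: impact factor rising 0.03 per author, as in the reported model
authors = [1, 2, 4, 6, 9, 15]
impact = [3.0 + 0.03 * k for k in authors]
a, b = fit_line(authors, impact)   # b recovers the 0.03 slope
```

On noisy real data the slope would of course carry a standard error and p-value; the point here is only the interpretation of the coefficient.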

5. Does the number of authors influence the citation count?

In order to study the influence of the number of authors of a paper on the number of citations it receives, I used Poisson regression to model the number of citations per year given the number of authors.

The model shows that each additional author is associated with a 0.62% increase in the yearly citation count. For the median article, this means an increase of 1 citation every 100 years, which is negligible.

However, when we compare single-authored with multiple-authored papers, the Poisson model shows that the yearly citation count for single-authored papers is 34.7% lower than for multiple-authored papers (p < 0.01). Specifically, the median number of yearly citations for single-authored papers was 1.4, compared to 2.2 for multiple-authored papers.
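In a Poisson (log-linear) model the coefficients act on a multiplicative scale, so percent changes come from exponentiating them. A small helper makes the reported percentages concrete (illustrative only; the coefficient values are back-derived from the percentages above, not taken from the fitted model):

```python
import math

def rate_change_pct(beta: float) -> float:
    """Percent change in the expected yearly citation rate implied by a
    log-linear (Poisson) coefficient: 100 * (exp(beta) - 1)."""
    return (math.exp(beta) - 1) * 100

# A per-author coefficient of ln(1.0062) corresponds to +0.62% per extra author:
per_author = rate_change_pct(math.log(1.0062))

# A single- vs multiple-author rate ratio of 0.653 corresponds to about -34.7%:
single_vs_multi = rate_change_pct(math.log(0.653))
```

This is why a tiny per-author coefficient can coexist with a large single- vs multiple-author gap: the latter compares groups, not one-author increments.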

References

  • Comeau DC, Wei CH, Islamaj Doğan R, and Lu Z. PMC text mining subset in BioC: about 3 million full text articles and growing. Bioinformatics, btz070, 2019.

Further reading

  • Paragraph Length: Data from 9,830 Research Papers
  • Can a Research Title Be a Question? Real-World Examples
  • How Long Should a Research Paper Be? Data from 61,519 Examples
  • How Many References to Cite? Based on 96,685 Research Papers
  • How Old Should References Be? Based on 3,823,919 Examples

How many authors are (too) many? A retrospective, descriptive analysis of authorship in biomedical publications

  • Open access
  • Published: 25 January 2024
  • Volume 129, pages 1299–1328 (2024)


  • Martin Jakab (ORCID: orcid.org/0000-0003-2110-5090) 1
  • Eva Kittl 1
  • Tobias Kiesslich (ORCID: orcid.org/0000-0001-5403-9478) 1, 2


Publishing in academic journals is primary to disseminate research findings, with authorship reflecting a scientist’s contribution, yielding academic recognition, and carrying significant financial implications. Author numbers per article have consistently risen in recent decades, as demonstrated in various journals and fields. This study is a comprehensive analysis of authorship trends in biomedical papers from the NCBI PubMed database between 2000 and 2020, utilizing the Entrez Direct (EDirect) E-utilities to retrieve bibliometric data from a dataset of 17,015,001 articles. For all publication types, the mean author number per publication significantly increased over the last two decades from 3.99 to 6.25 (+ 57%, p  < 0.0001) following a linear trend ( r 2  = 0.99) with an average relative increase of 2.28% per year. This increase was highest for clinical trials (+ 5.67 authors per publication, + 97%), the smallest for case reports (+ 1.01 authors, + 24%). The proportion of single/solo authorships dropped by a factor of about 3 from 17.03% in 2000 to 5.69% in 2020. The percentage of eleven or more authors per publication increased ~ sevenfold, ~ 11-fold and ~ 12-fold for reviews, editorials, and systematic reviews, respectively. Confirming prior findings, this study highlights the escalating authorship in biomedical publications. Given potential unethical practices, preserving authorship as a trustable indicator of scientific performance is critical. Understanding and curbing questionable authorship practices and inflation are imperative, as discussed through relevant literature to tackle this issue.


Introduction

Authorship in biomedical sciences: relevance, conventions, guidelines, proliferation.

Authorship in scholarly journal publications indicates a scientist’s genuine contribution to the work. It demonstrates the intellectual efforts and accomplishments and conveys scientific prestige and esteem, which translates into opportunities for research funding, patent applications, and personal financial benefits (Bennett & Taylor, 2003 ; Claxton, 2005 ; Cronin, 2001 ; Greene, 2007 ; Shapiro et al., 1994 ; Sharma & Verma, 2018 ; Strange, 2008 ). Beginning with the PhD degree, the usual criteria for academic careers are publication output and/or citation counts, cumulative impact factors, and a ranking of author positions (Greene, 2007 ; Shapiro et al., 1994 ; Sharma & Verma, 2018 ). This is based on an assumed direct correlation between the number of authorships and the individual’s productivity (academic performance), where authorship is considered a return on investment due to the quantitative metrics mentioned above and the academic incentive systems that use such metrics (Cronin, 1996 ; Greene, 2007 ; Sharma & Verma, 2018 ).

Until the early twentieth century, single authorship was the norm, but became more and more uncommon (“demise of the lone author” (Greene, 2007 )) since approximately 1920 when multicenter studies and collaborative research grew (Baethge, 2008 ; Bennett & Taylor, 2003 ; Claxton, 2005 ; Cronin, 2001 ; Greene, 2007 ; Thelwall & Maflahi, 2022 ; Wuchty et al., 2007 ). The shift from single-authored to co-authored papers raised two fundamental questions: a) who qualifies as an author, and b) what significance has the ordering of authors (position in the author byline)? Regarding the latter, in some disciplines (mathematics, physics, economics), a strict determination is occasionally circumvented by simple alphabetical listing of the authors. In biomedical publishing, the first (greatest contribution to the study) and last author position (senior author, supervision, overall responsibility) are those with the highest “value” while the authors in between (contributing) are listed according to their relative contribution to the study—although these conventions are neither definitive nor founded on explicit definitions of author positions (Baerlocher et al., 2007 ; Bennett & Taylor, 2003 ; Claxton, 2005 ; Fernandes & Cortez, 2020 ; ICMJE, 2022 ; Rahman et al., 2021 ; Shapiro et al., 1994 ; Sharma & Verma, 2018 ; Strange, 2008 ). Regarding the first question on who qualifies for authorship, the currently most widespread guideline covering authors’ roles and requirements has been developed by the International Committee of Medical Journal Editors (ICMJE), Footnote 1 stating that (all) four distinct criteria (in brief: contribution to/revising of/approving/responsibility for a manuscript) must be met to qualify for authorship (ICMJE, 2022 ). Again, this recommendation is not without controversy or suggestions for modification (Lee, 2009 ; Lin, 2023 ; Miles et al., 2022 ; Strange, 2008 ).

These questions have gained importance as the progressive increase in volume and output of scientific research due to scientific and technological advancements (Grieger, 2005; Weinberg, 1961) was accompanied by an increase in the number of authors per publication (Greene, 2007; Grieger, 2005; Grobbee & Allpress, 2016; Lee, 2009). The trend in “authorship proliferation” is most noticeable in high-energy physics and biomedicine (Changa et al., 2019; Cronin, 2001; Pell, 2019) and is also well documented in biomedical publishing for single or sets of journals, publication types, disciplines or clinical specialties (see discussion for references).

For example, in a science-wide journal-based study on 88 million journal articles between 1900 and 2020 listed in Scopus, Footnote 2 the strongest increases in author numbers for life science and health-related broad fields were found for immunology & microbiology and biochemistry, genetics & molecular biology, followed by neuroscience, medicine and pharmacology, toxicology & pharmaceutics; here, the geometric mean of co-authors per article rose from between 1 and 1.5 before 1930 to 5–6 in 2020 (Thelwall & Maflahi, 2022). A similar rise in average co-author numbers per publication had been shown in a previous large-scale study on articles in the ISI (Institute for Scientific Information) Web of Science in the field of science & engineering, from below two authors per article in 1960 to approximately 3.5 in 2000 (Wuchty et al., 2007).

Authorship proliferation has led to the adoption of novel terms such as multi- or multiple authorship (Cronin, 2001 ; Rahman et al., 2021 ), which refers to the co-authoring of papers by 2–99 authors. Some studies have further categorized multi-authored papers (MAPs) based on specific numbers (e.g., megaauthorship in case of more than five authors; see (Changa et al., 2019 ) and citations therein). However, even the term multiauthorship seems understated in face of the ever-growing number of papers with many more than 100 authors listed. For example, 1014 authors are listed in a 2015-paper on fruit-fly genomics (Leung et al., 2015 ). Not less than 2080 authors are listed on a paper from high energy physics (Khachatryan et al., 2010 ), which “needed 165 lines on the PubMed site to spell out their surnames and initials” (Marusic et al., 2011 ). A 2017-paper in astrophysics had 3674 authors (Abbott et al., 2017 ) and in 2021, during the pandemic, the number of co-authors peaked at over 15,000 in a multi-center study on the efficacy of SARS-CoV-2 vaccination on post-surgical COVID-19 infections and mortality (Covidsurg Collaborative, 2021 ). The issue of substantial authorship inflation (or proliferation) in some disciplines was addressed by Cronin, who coined the term hyperauthorship to name this increasing manifestation of multiauthorship and asked, if this tendency might indicate a structural shift in scientific communication or if it might be seen as mere perversion (Cronin, 2001 ). Such extreme examples might be explained by increased research complexity, increasingly sophisticated methodology, multidisciplinary research, larger research units, internationalization, multicenter collaborations and stronger involvement of graduating students in biomedical science (An et al., 2020 ; Baethge, 2008 ; Cronin, 2001 ; Greene, 2007 ; Grobbee & Allpress, 2016 ; Lutnick et al., 2021 ; Ojerholm & Swisher-McClure, 2015 ; Singh Chawla, 2019 ; Thelwall & Maflahi, 2022 ). 
Given the importance of authorship for individual academic careers, the growing pressure to publish, driven by promotion policies and reward structures, alongside unethical practices like gift authorship, are considered additional factors leading to authorship inflation (An et al., 2020 ; Greene, 2007 ; Grieger, 2005 ; Grobbee & Allpress, 2016 ; Kornhaber et al., 2015 ; Lee, 2009 ; Levsky et al., 2007 ; Rahman et al., 2021 ; Tilak et al., 2015 ).

Research questions

Against this background and focusing on biomedical publications, the current study’s objective was a large-scale bibliometric, retrospective, quantitative per-year analysis of authorship of publications listed in the PubMed Footnote 3 database.

Specifically, all articles of 13 common publication types (Table  1 ) of each year between 2000 and 2020 were included in the analysis, journal articles in the first place as the most frequent and primary publication type for the dissemination of original research. By avoiding an arbitrary selection of articles (e.g., in (sets of) journals, disciplines etc.), we aimed for an unbiased, generally applicable assessment of authorship trends in the biomedical literature over all research fields. The study is based on the following research questions: (a) to what extent has the number of authors per publication increased between 2000 and 2020 for all or for single publication types? (b) is such a tendency accompanied by (significant) decreases in single-authored papers overall and for single publication types? and (c) which of the analyzed PubMed publication types were most affected by changes in the proportions of multi- and single-authored articles in this period?

Data retrieval

Bibliometric data were retrieved from the U.S. National Institutes of Health (NIH) National Library of Medicine (NLM) National Center of Biotechnology Information (NCBI) PubMed database comprising > 33 million citations including all MEDLINE content as PubMed’s primary component. Footnote 4 The Entrez Direct (EDirect) E-utilities “esearch”, “efetch”, and “xtract” were used to access the database system (currently 38 databases of biomedical data including PubMed; Kans, 2022; Sayers, 2017). E-utilities were run in the macOS Unix terminal application.

Figure 1 depicts the data retrieval and extraction strategy. A local copy of the PubMed baseline was built on an external Solid-State Drive (SSD) by executing the “archive-pubmed” command included in EDirect Footnote 5 (step 1 in Fig. 1). As of the latest update of the local copy on February 28, 2022, the archive contained ~ 33.5 million entries. An “esearch” query for the year published (publication date, search field abbreviation [PDAT]) was performed from 2000 to 2020 (first line of the Unix command lines shown below, corresponding to step 2 in Fig. 1). The data query was performed on March 10, 2022 (accession date). This query was connected to the online retrieval of the unique identifiers (UIDs) of all PubMed entries from 2000 to 2021 as a text file (2000_2021UID.txt) using “efetch” (second line of the command lines below, step 3 in Fig. 1). In the case of PubMed, this returns the entries’ PMIDs. (Although not included in the final analysis, the entries of 2021 were included in the UID retrieval.) The Unix command lines for steps 4 and 5 of the data retrieval are shown in Fig. 1.
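The authors ran EDirect against a local archive; the same date-range query can be sketched against the public E-utilities REST interface. The following Python helper is a hypothetical equivalent for illustration, not the authors' exact commands, and a real retrieval of this scale would need the local-archive approach described above.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(year_from: int, year_to: int, retmax: int = 100000) -> str:
    """Build an esearch query for PubMed records by publication date [PDAT],
    mirroring the per-year retrieval described above (illustrative only)."""
    term = f'"{year_from}"[PDAT] : "{year_to}"[PDAT]'
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

# The query corresponding to steps 2-3 of the workflow:
url = esearch_url(2000, 2021)
```

The PMIDs returned by such a query would then be passed to efetch (or, in the paper's workflow, fetch-pubmed and xtract) to pull the author-count and publication-type elements.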

figure 1

Flow chart illustrating the data retrieval and extraction workflow. Data were retrieved from a locally archived PubMed baseline database using Entrez Direct (EDirect) E-utilities on the Unix command line (Kans, 2022 ; Sayers, 2017 ) based on UID (PMID) numbers. Search field abbreviations in square brackets. E-utilities and commands are shown in grey rounded rectangles. For details see text

figure a

The entries in the 2000_2021UID.txt file were allocated to the respective entries in the locally stored PubMed archive by executing “fetch-pubmed” (first and second line of the command lines below, step 4 in Fig.  1 ). Finally, the data element pattern relevant for further analysis was specified and extracted from the local archive and stored as a text file (2000_2021.txt) using “xtract” (lines three to seven of the command lines below, step 5 in Fig.  1 ). This returned a complete list of all distinct PubMed entries from 2000 to 2021 with the following elements extracted:

Author number per record

Author list complete: check for complete author lists (all authors are listed)

Publication type(s)

Inclusion and exclusion criteria

The 13 included publication types consisted of commonly found types like journal articles, reviews, and case reports, in addition to various clinical trials, meta-analyses, and systematic reviews (Table  1 ). The publication types of Clinical Trial, Clinical Trial–Phase I (first in man), Clinical Trial–Phase II (proof of concept), Clinical Trial–Phase III (pivotal studies), and Clinical Trial–Phase IV (post marketing) were jointly analyzed and are denoted as Clinical Trial, I–IV.

Comparatively rare publication types (e.g., Webcast, Twin Study, Letter, Lecture, or Portrait) were excluded as well as work published in abstract form only. Also, publications without authors listed were excluded, which affected 0.84% of articles across all years and publication types with a minimum of 0.52% in 2013, and a maximum of 1.71% in 2000. Publications assigned to multiple publication types were included in the analyses of all publication types specified. This affected 64.48% of articles across all years and publication types with a minimum of 42.11% in 2017, and a maximum of 71.92% in 2004.

The NCBI policy on the listing of authors in the byline has changed over time. From 1966 to 1983 all authors were included. From October 29, 1983, authors per publication were limited to a maximum of ten. From 1996 (date of publication) there was a limit of 25 (1–24 plus the last author; the rest were omitted). Effective with 2000, the personal author limit was removed, and as of mid-2005 all limitations were lifted, so that from that point all authors, including those of hyperauthored papers, may be listed (NIH–NLM, 2020). Therefore, all articles included in the present study should have all authors listed in the byline without truncations applied. To verify complete author listing, the “author list complete” element was included in the data extraction pattern.

Data management and processing

Data in the 2000_2021.txt file were processed with Microsoft Power Query, filtered based on publication year, author numbers, and types, then analyzed in Microsoft Office 365 Excel (version 16.60) using Pivot tables. The results were exported to Microsoft Excel (.xlsx) for additional analysis and graphing in GraphPad Prism 10.

Data analysis and statistics

Descriptive statistics included sample size (N), average author numbers as arithmetic and geometric mean ± 95% confidence interval (CI), and the standard deviation (SD). Median author numbers were also calculated and are shown in Online Resource 2. Statistical testing was performed using two-tailed t-tests for unmatched (unpaired) samples, or ordinary one-way analysis of variance (ANOVA) with Tukey’s multiple comparisons test. Differences in means were considered statistically significant at p-values < 0.05.
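The unpaired t-test named here reduces to a short formula; a stdlib sketch of the pooled-variance statistic (Student's t for unmatched samples, with na + nb − 2 degrees of freedom; toy numbers, not the paper's data):

```python
import math
import statistics

def t_unpaired(a, b):
    """Two-sample Student's t statistic with pooled variance
    (unpaired/unmatched samples)."""
    na, nb = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Toy samples of authors-per-paper in two different years:
t = t_unpaired([3, 4, 5, 4, 4], [6, 7, 6, 8, 7])
```

Converting t to a p-value additionally requires the t distribution's CDF, which a stats package would supply.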

Literature survey

A systematic literature search was performed (general Web search and specifically in the PubMed) between September 2021 and April 2022 and continuously updated until submission of this manuscript. The following search terms, truncations thereof and keyword combinations were used with Boolean operators: author, authorship, biomedical, growth, guideline, hyperauthorship, increase, inflation, multiauthorship, multiple, progression, proliferation, recommendation, trend.

The figures demonstrating the trends in authorship are presented in absolute numbers, displaying both the arithmetic and geometric means of authors per publication. Median values are given in Online Resource 2. Throughout the results text, tables, and discussions, we have opted to use only arithmetic means, facilitating a more direct comparison with the findings of other studies, the majority of which also rely on arithmetic means (i.e., average numbers).

Author numbers per publication increased overall

In total 17,015,001 PubMed articles published between 2000 and 2020 were included. Across all 13 publication types, the number of articles per year steadily increased almost threefold from ~ 490,000 in 2000 to over ~ 1.3 million in 2020 (Fig.  2 A and Table  2 ). In the same period, the mean number of authors per publication significantly increased from 3.99 to 6.25 (+ 57%, p < 0.0001). Regressions (best fits determined by empirical evaluation) for mean author numbers are shown in Online Resource 1 for the overall analysis and for sub-analyses of the selected publication types listed in Table  1 . Overall, the increase in author numbers followed a linear trend ( r 2  = 0.99) with an average relative change of + 2.28% per year (Table  2 ).
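As a sanity check on the reported rate, the constant annual growth implied by the two endpoint means can be computed directly. The paper's +2.28% is the mean of year-on-year relative changes, so the two figures need not match exactly; the endpoints give a value in the same range.

```python
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Constant (geometric) annual growth rate that takes `start` to `end`."""
    return (end / start) ** (1 / years) - 1

# Mean authors per publication: 3.99 (2000) -> 6.25 (2020)
rate = implied_annual_growth(3.99, 6.25, 20)   # roughly 2.3% per year
```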

figure 2

Overall analysis of included publication types. A Arithmetic mean (authors mean) and geometric mean (authors geomean) of authors per publication and number of PubMed publications from 2000 to 2020. Mean ± 95% CI bands. B Percentages of single authors, 2–4 authors, 5–7 authors, 8–10 and 11 or more (11 +) authors per publication. C Publications with 100 or more authors per year from 2000–2020. D Overlay of histograms showing author number distributions in 2000, 2010 and 2020 (X-Axis truncated at 100 authors, Y-Axis in logarithmic scale)

While mean author numbers increased, the proportion of single-authored papers steadily declined by a factor of ~ 3 from 17.03% in 2000 to 5.69% in 2020 (Fig.  2 B and Table  3 ). The proportion of papers authored by two to four authors decreased by approximately one third from 47.59% to 34.55%, while the proportions of articles with five to seven and eight to ten authors showed a 1.25-fold and 2.24-fold increase, respectively. Papers with eleven or more authors accounted for only 2.32% of articles in 2000 but reached 10% in 2018 and accounted for 11.24% of articles in 2020, i.e., a total increase by a factor of 4.84 between 2000 and 2020 (Table  3 ).

A remarkable trend emerged in publications with 100 or more authors, indicating an approximate doubling from the early 2000s to 2020. Prior to 2004, fewer than 100 articles annually had 100 + authors; by 2015, the count surpassed 200 articles per year (Fig.  2 C). The shift to higher author numbers per publication over time is also obvious from the author number distributions overlay for the years 2000, 2010 and 2020 in Fig.  2 D.

Author numbers increased for all selected publication types

By analyzing the publication types listed in Table 1 separately, we investigated potential variations in authorship trends. The outcomes of these sub-analyses are depicted in Figs. 3, 4, 5 and 6. Tables 2 and 3 present an overview of authorship changes from 2000 to 2020 for all publication types (cumulative) and for individual types based on the criteria outlined in section “Inclusion and exclusion criteria”.

figure 3

Journal Articles and Review. ( A , C ) Arithmetic mean (authors mean) and geometric mean (authors geomean) of authors per publication and number of PubMed publications from 2000 to 2020. Mean ± 95% CI bands. ( B , D ) Percentages of single authors, 2–4 authors, 5–7 authors, 8–10 and 11 or more (11 +) authors per publication

figure 4

Case Reports and Editorial. ( A , C ) Arithmetic mean (authors mean) and geometric mean (authors geomean) of authors per publication and number of PubMed publications from 2000 to 2020. Mean ± 95% CI bands. ( B , D ) Percentages of single authors, 2–4 authors, 5–7 authors, 8–10 and 11 or more (11 +) authors per publication

figure 5

Clinical Trial, I–IV, Randomized Controlled Trial and Multicenter Study. ( A , C , E ) Arithmetic mean (authors mean) and geometric mean (authors geomean) of authors per publication and number of PubMed publications from 2000 to 2020. Mean ± 95% CI bands. ( B , D , F ) Percentages of single authors, 2–4 authors, 5–7 authors, 8–10 and 11 or more (11 +) authors per publication

figure 6

Meta-Analysis and Systematic Review. ( A , C ) Arithmetic mean (authors mean) and geometric mean (authors geomean) of authors per publication and number of PubMed publications from 2000 to 2020. Mean ± 95% CI bands. ( B , D ) Percentages of single authors, 2–4 authors, 5–7 authors, 8–10 and 11 or more (11 +) authors per publication

Journal articles

Journal articles, the dominant publication type in PubMed, significantly influence the trends depicted in Fig.  1 . The average author count per paper exhibited a significant linear rise ( r 2  = 0.99) from 4.07 in 2000 to 6.42 in 2020 (+ 58%, p  < 0.0001), with an average annual relative change of + 2.31% (Fig.  3 A, Table  2 ). The proportions of single and multiple authorships mirror the general findings, with single authors decreasing from 15.85% in 2000 to 4.78% in 2020 (a 3.3-fold decrease; Fig.  3 B, Table  3 ). The proportions of papers authored by two to four, five to seven, eight to ten, and 11 + authors displayed changes consistent with the overall analysis ( “Author Numbers per Publication Increased Overall” section).

Reviews

The count of review articles, the third-most frequent publication type in PubMed, more than doubled from ~ 65,000 in 2000 to ~ 136,000 in 2020 (Table  2 ). The mean author number per article increased by + 3.12% per year on average from 2.48 to 4.57 (+ 84%, p  < 0.0001; Fig.  3 C and Table  2 ) following a second order polynomial function ( R 2  = 0.99). Notably, review articles saw a strong ~ 4.3-fold decline of single authors from a relatively high level of 35.85% in 2000 to 8.28% in 2020 (Table  3 ) and a reciprocal increase of papers with five to seven authors (8.84% in 2000 and 25.12% in 2020), while the proportion of articles with two to four authors remained relatively stable over the years (Fig.  3 D).

Case reports

Among all the examined publication types, case reports exhibited the smallest increase in mean author numbers per publication, rising by 1.01 from 4.14 in 2000 to 5.15 in 2020 (+ 24%, p < 0.0001; Fig. 4A), with an average annual relative increase of 1.10% (linear, r2 = 0.99; Table 2). Similar to clinical trials, meta-analyses, and systematic reviews (Figs. 5 and 6), case reports had a relatively low proportion of single-authored articles from the beginning. In 2000, single-authored case reports accounted for 9.50%, decreasing to 2.39% in 2020 (Fig. 4B and Table 3). While the proportion of case reports with two to four and five to seven authors remained relatively stable throughout, the rise in mean authors primarily resulted from a 1.67-fold and 3.63-fold increase in articles authored by eight to ten and eleven or more authors, respectively (Table 3).

Editorials

In 2000, the average number of authors per editorial was 1.40 and increased 1.76-fold to 2.47 in 2020 ( p  < 0.0001; Fig.  4 C) following a polynomial function ( R 2  = 0.96; Online Resource 1). The modest absolute rise in average author counts was accompanied by a drastic change in single authorship with a steady decrease from a high level of 72.39% in 2000 to 41.03% in 2020 (Fig.  4 D). This 1.8-fold decline was coupled with a twofold rise in articles with two to four authors from 26.52% in 2000 to 50.67% in 2020 (Table  3 ). Editorials with five or more authors were rare before 2004 accounting for less than 1% altogether, but gradually increased thereafter. In 2020, 5.46% of articles had five to seven authors, which equals an almost sevenfold increase from 2000. Although not well discernible from the graph, editorials with eight to ten or 11 + authors increased 8.41-fold and 10.85-fold from 2000 to 2020 with a notable step from 2019 to 2020 (Fig.  4 D).

Clinical trials

In total, data retrieval returned 291,699 publications between 2000 and 2020 consisting of Clinical Trial–Phase I to IV. From 2000 to 2004 the number increased from 22,320 to 26,524 and by 2006 abruptly halved to ~ 10,000 entries per year (Fig.  5 A). From then, the number stayed relatively constant (9323 in 2020). The observed decrease in the count of clinical trials, not readily apparent through a standard web browser-based PubMed advanced search engine query, remains unexplained based on the present dataset. This discrepancy might stem from alterations in the PubMed indexing policy for clinical trials around 2005, potentially leading to the exclusion of certain subsequently published trials from the retrieval method employed in our study.

While the data retrieval for this publication type may be incomplete after 2006, there is a noticeable steady upward trend in mean author counts. Starting at 5.87 in 2000, mean author numbers increased to 11.54 authors in 2020 ( p  < 0.0001), which was the highest absolute change among all analyzed publication types (+ 97%; Fig.  5 A and Table  2 ) following a polynomial function ( R 2  = 0.99) with an average relative change of + 3.45% per year (Online Resource 1 and Table  2 ).

The proportion of single authored publications steadily decreased from 3.85% in 2000 to around 1% in 2013 and further down to 0.43% in 2020, signifying a ~ ninefold reduction (Fig.  5 B and Table  3 ). A notable decline was observed in the proportion of articles with two to four authors from 36.02% to 10.16% (~ 3.6-fold reduction) and in those with five to seven authors from 37.93% to 24.24% (~ 1.6-fold reduction). Conversely, there was a substantial increase in articles with eleven or more authors by ~ sixfold from 7.32% in 2000 to 43.47% in 2020 (Fig.  5 B and Table  3 ).

Randomized controlled trials

At 5.72 in 2000, the average number of authors of randomized controlled trials (RCTs) was similar to that of clinical trials, but the increase to 8.74 by 2020 (+53%, p < 0.0001) was flatter (Fig. 5C and Table 2), following the same second-order polynomial trend (R² = 0.99). The single-authorship trend was congruent with that of clinical trials, showing a steady ~sixfold decline from 3.78% in 2000 to 0.63% in 2020 (Fig. 5D and Table 3). RCTs with two to four authors decreased ~1.9-fold, while those with eight to ten and 11+ authors increased 1.48-fold and 3.25-fold, respectively. The proportion of RCTs with five to seven authors remained relatively constant over time (~38%).

Multicenter studies

The number of multicenter studies increased from ~5000 to ~20,000, and this publication type had the highest average number of authors per paper of all analyzed types, with 7.37 in 2000 and 12.92 in 2020 (+75%, p < 0.0001; Fig. 5E and Table 2), an average increase of 2.86% per year (Table 2) following a polynomial function (R² = 0.99; Online Resource 1). Single authorships dropped ninefold, from 4.30% in 2000 to less than 0.47% in 2020 (Fig. 5F and Table 3), closely resembling trends observed in clinical trials and RCTs. Particularly evident is the high proportion of studies with eleven or more authors already in 2000, accounting for 19.85%. This proportion surged to 50.04% in 2020, with a steepening around 2010. Multicenter studies with two to four and five to seven authors decreased ~threefold and ~1.5-fold, respectively, while multicenter studies with eight to ten authors consistently accounted for ~22% (Fig. 5F).

Meta-analyses

Along with an almost 20-fold increase in meta-analyses, from 838 in 2000 to 16,417 in 2020, author counts increased by 63%, from 4.16 to 6.77 (p < 0.0001; Fig. 6A and Table 2), following a sigmoidal function (R² = 0.99; Online Resource 1) with an average relative increase of 2.56% per year (Table 2). Single authorships peaked in 2001 (13.57%) and thereafter dropped steeply, ~12.5-fold, to 0.85% in 2020 (Fig. 6B and Table 3). Until 2008, meta-analyses with two to four authors accounted for more than 50% (59.6% in 2003). Subsequently, there was a transition towards more than five authors per publication.

Systematic reviews

Similar to meta-analyses, the count of systematic reviews increased strongly, ~17-fold, from 1401 in 2000 to 24,414 in 2020. The mean author number per article increased linearly (r² = 0.99) by +3.31% per year on average, from 3.18 to 6.05 (+91%, p < 0.0001) (Fig. 6C and Table 2). The trends in single-, co-, and multi-authorships are comparable with those found for meta-analyses. Particularly noticeable is the strong ~11-fold decline in single authorships, from 11.71% in 2000 to 1.02% in 2020 (Fig. 6D and Table 3).

Discussion of results, literature review

The results of this study, including over 17 million PubMed entries, support the hypothesis that between 2000 and 2020 there was a significant increase in the mean number of authors of biomedical publications and a decrease in single-authored papers over time, both in general and for the analyzed publication types (summarized in Tables 2 and 3). Overall, the arithmetic mean (average) author number per publication increased from 3.99 to 6.25 (+57%). The steepest increase was observed for clinical trials (+97%), the mildest for case reports (+24%). Overall, the increase followed a linear trend of +2.28% per year, while individual publication types such as reviews, editorials, and different types of trials and clinical studies showed a curved, polynomial upward trend (Online Resource 1). The proportion of single authorships decreased ~threefold overall. With 12.5-fold and ~1.75-fold reductions, meta-analyses and editorials showed the strongest and mildest declines in solo authorships, respectively. For reviews, editorials, and systematic reviews, a dramatic, up to 12.3-fold increase in the percentage of eleven or more authors was found, while solo authorship dropped to 1% and below for clinical trials, randomized controlled trials, multicenter studies, and meta-analyses. The drop in solo authorship in editorials (although relatively small compared to other publication types) and the shift towards two to four authors per article are especially remarkable, given that this article type has traditionally been single-authored.

In general, and across the different publication types, the arithmetic and geometric mean author counts per publication follow the same trend, although the geometric means are lower and time courses are less steep, owing to their resilience against outliers. This difference is particularly noticeable for editorials in 2020 and meta-analyses in 2012, and generally becomes more pronounced over time, reflecting the increasing skewness of the data due to the increasing prevalence of multi- and hyper-authored articles over the years.
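The outlier resilience of the geometric mean noted above is easy to illustrate: a single hyperauthored article inflates the arithmetic mean far more than the geometric mean. The author counts below are invented for illustration.

```python
from statistics import mean, geometric_mean

# Invented author counts for five articles, one of them hyperauthored
counts = [3, 4, 5, 6, 200]

arithmetic = mean(counts)           # pulled strongly toward the outlier
geometric = geometric_mean(counts)  # much less sensitive to it
```

Here the arithmetic mean is 43.6 while the geometric mean stays near 9.4, mirroring why the geometric-mean curves in the figures run lower and flatter as hyperauthorship becomes more common.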

The present study covered all major (common) publication types in PubMed, the largest publicly accessible database of biomedical literature with currently more than 33 million citations, without restrictions such as limitation to specific journals, research fields/topics, or clinical specialties. We chose PubMed's article-based classification over a journal-based approach such as the one underlying the Scopus broad and narrow fields (Elsevier, 2023; Thelwall & Maflahi, 2022). Consequently, the study is presumed to provide comprehensive and unbiased data on the overall trend of authorship growth in biomedical literature over the last two decades, both generally and for individual publication types. During the past decades, authorship trends have been surveyed in several studies, mostly using more confined study concepts than the comprehensive approach used here. With consistency, however, all these studies have evidenced increasing author numbers per paper over time and decreasing proportions of solo authorships. To put the present study in context with published data, Table 4 summarizes the central outcomes, inclusion criteria, and periods of analysis of relevant previous studies (without claim to completeness). Besides the primary outcome, i.e., the change (increase) in number of authors as presented in Table 4, the following findings are worth mentioning.
Along with generally increasing numbers of authors, (i) fewer authors have been observed in long-established journals compared to newer journals (Lutnick et al., 2021); (ii) a rise in advanced academic degrees and a shift in the proportion of articles from North America and the United Kingdom to continental Europe and the Far East were observed (Camp & Escott, 2013), with the highest author numbers for papers from Japan, China, Italy, and Germany (Chow et al., 2015); (iii) most of the studies addressing the authors' sex at the first (and last) author position found (significantly) increasing proportions of female authors (Chien et al., 2020; Gu et al., 2017; Hsu et al., 2021; Seetharam et al., 2018; Sheridan et al., 2018); (iv) multidisciplinary and multi-institutional affiliations increased; and (v) higher average author numbers were found on more influential papers, i.e., those with higher relative citation rates (An et al., 2018).

Taken together, despite variations in study designs, inclusion criteria, and study durations, the findings on authorship trends in biomedical literature compiled in Table 4 demonstrate a strong uniformity in the steady rise of average author numbers per publication throughout the years. Notably, the two extensive studies mentioned earlier, conducted by Thelwall and Maflahi (2022) and Wuchty et al. (2007), examining 88 million papers from 1900 to 2020 and 20 million research articles published between 1950 and 2000, respectively, showed nearly identical findings. Despite their use of a distinct science-wide methodology relying on the Scopus and ISI Web of Science databases, in contrast to our study confined to biomedical literature using PubMed, the results were remarkably similar.

In the present study from 2000 to 2020, the highest increases (absolute mean authors > +5.5) were observed for clinical trials and multicenter studies (Table 2). Among the different clinical specialties, the highest increases were found in orthopedics with +3.5 (Camp & Escott, 2013; Lutnick et al., 2021), radiation oncology with +4.8 (Ojerholm & Swisher-McClure, 2015), and cardiothoracic surgery with +6.1 authors (Modi et al., 2008). However, since these studies spanned longer periods, it is unsurprising to observe more pronounced absolute increases (Table 4).

The upward trend in average author counts per publication coincides with a gradual reduction in the prevalence of single authorship, observed across all publication types, research disciplines, and clinical specialties. Regarding publication types, within the last 20 years the strongest declines in solo authorship were evident for clinical trials, randomized controlled trials, multicenter studies, meta-analyses, and systematic reviews (Table 3). Among clinical disciplines and subspecialties, the most striking drops observed in previously published studies were in cardiothoracic surgery (> -90%) (Modi et al., 2008), plastic surgery (up to -75%) (Durani et al., 2007), and biomechanics (Knudson, 2012), where solo authorship completely vanished within 20 years (Table 4). These results imply that the "demise of the lone author" (Greene, 2007) is not a recent phenomenon but has been in progress at least since the 1930s. On the other end of the scale, from the early 2000s until 2020, hyperauthored articles with 100 or more authors (Cronin, 2001) doubled from fewer than 100 articles per year before 2004 to over 200 as of 2015 (Fig. 2C). The apparent leveling off between 2016 and 2019 and subsequent decline in 2020 did not continue, as evidenced by a surge to over 300 PubMed articles with 100+ authors in 2021 (data not shown). In addition to the overall rise in authorship, this consistent upward trend in hyperauthorship warrants careful examination in the future.

Limitations of the study

This study has several limitations. Regarding research fields, the generalized study approach does not allow for analyzing discipline-specific differences, e.g., those that may exist between biological specialties, (clinical) medicine, or nursing, to name a few. With PubMed chosen as the database source, the focus of investigation was on changes in authorship by publication type. In addition, other factors possibly affecting author counts could not be considered, such as authorship demographics (geographical differences in authorship trends, as surveyed, e.g., in Camp & Escott, 2013; Chien et al., 2020; Gu et al., 2017), academic degree (as shown, e.g., in Camp & Escott, 2013), or funding sources (as, e.g., in Dotson et al., 2011). Specifically, future work should focus more on sex/gender-specific aspects and changes in authorship trends, as, e.g., in the studies by Gu et al. (2017) in hand surgery, by Chien et al. (2020) in ophthalmology, by Hsu et al. (2021) in radiology, or by Seetharam et al. (2018) and Sheridan et al. (2018) in orthopedics. Similar to other affiliation-, geography-, or funding-related details, retrieving authors' sex/gender in large-scale studies involving millions of entries is challenging. An accurate determination would necessitate focused, smaller-scale methodologies involving the scrutiny of individual publications and verification of gender through targeted internet searches, possibly utilizing science-specific online platforms like ResearchGate (Footnote 6).

Another shortcoming of the present study is the ambiguity that arises from the assignment of individual articles to more than one publication type, an inherent property of the PubMed database most likely resulting from the attempt to uniformly reclassify all articles originally classified by the host journal using its own (divergent) nomenclature. On average, this affects 64.48% of articles across all years and publication types, with a decreasing tendency over the two decades. For example, a review or case report may also be classified as a journal article, or a systematic review or meta-analysis may also be categorized as a review, so a single article may enter the analysis under multiple publication types.

Regarding the assumption that questionable authorship practices contribute to or are partly responsible for the escalation of author lists beyond justifiable extents (refer to the discussion below), this descriptive study, by its nature, cannot offer evidence on the specific quantitative impact of these practices, nor can it identify individual articles with inappropriate authorship.

Finally, articles without author information (on average, 0.84% of articles across all years) and "rare" publication types were excluded (the focus being on the n = 13 most frequent and well-known types). Given their low frequency, both sets of articles should not influence the overall results.

Authorship proliferation and related ethical issues

Despite long-standing ethical guidelines on authorship (ICMJE, 2022), this and other referenced studies illustrate the persistent expansion of authorship in biomedical publications. This trend raises concerns regarding data integrity and quality control, as well as the individual author's contribution to and accountability for the published work (Claxton, 2005; Cronin, 2001; Grobbee & Allpress, 2016; Kornhaber et al., 2015; Rahman et al., 2021). Behind this lie legitimate concerns that (some) researchers define authorship quite loosely and that authorship inflation leads to unjustified citations and, consequently, to a dilution of the intrinsic value of authorship (Baethge, 2008; Drenth, 1996).

The increase in author numbers has been explained by the requirement of multidisciplinary and multicenter collaborations in the face of increasing research complexity and methodological advances, which goes along with a growth in team sizes and a progressive transition to "team science" (Cronin, 2001; Greene, 2007; Grobbee & Allpress, 2016; Larivière et al., 2015; Lin & Lu, 2023; Ojerholm & Swisher-McClure, 2015; Singh Chawla, 2019; Thelwall & Maflahi, 2022; Tilak et al., 2015; Wuchty et al., 2007). Considering more laborious techniques, higher sample sizes, larger patient cohorts, multicenter approaches, etc., this argument seems more than plausible from an epistemic point of view, and such developments undoubtedly justify the listing of a greater number of co-authors than in previous years. In addition, larger teams can produce more frequently cited work and more influential high-impact research than individual researchers or smaller groups (Larivière et al., 2015; Wuchty et al., 2007). However, a direct association between work-intensive research technologies and increasing author numbers has also been questioned, suggesting that behavioral and cultural practices like the conferral of co-authorship could also play a role or even be more relevant (Epstein, 1993; Lin & Lu, 2023). Of note, correcting for self-citations decreased the relative impact of research teams, especially in biomedical science (Wuchty et al., 2007), which suggests mechanisms other than legitimate ones acting in the background to drive co-authorship inflation.

Certainly, the mounting pressure faced by researchers to publish, driven by prevalent criteria for academic advancement and funding reliant on publication records and citation-based metrics, seems to be a key contributor to authorship inflation and the adoption of ethically dubious authorship practices (An et al., 2020 ; Baethge, 2008 ; Cronin, 2001 ; Greene, 2007 ; Grieger, 2005 ; Grobbee & Allpress, 2016 ; Kornhaber et al., 2015 ; Lee, 2009 ; Levsky et al., 2007 ; Rahman et al., 2021 ; Tilak et al., 2015 ). Indeed, multiple surveys have highlighted academic advancement and the pressure to publish, among other concerns associated with the interpretation of authorship (e.g., inadequate understanding and adherence to authorship criteria, or ambiguous definitions of authorship) as factors contributing to unwarranted authorship and (unethical) authorship-related practices (Rahman et al., 2021 ; Slone, 1996 ).

Common types of authorship-related irregularities are listed in Table 5. Notably, a form of unethical authorship behavior notoriously contributing to authorship inflation involves the bestowal of gift, honorary, or guest authorships. These individuals are typically not engaged in the work's conception, data acquisition, manuscript writing, or final approval, thus failing to meet any of the ICMJE criteria (ICMJE, 2022). Consequently, they cannot be held accountable for the published work (Bennett & Taylor, 2003; Claxton, 2005; Lee, 2009; Strange, 2008). The pressure to publish (or perish) has been shown to be the main motivation for these forms of undeserved authorship. It drives scientists (often junior scientists) to list, for example, renowned senior colleagues as co- or senior authors for putative advantages in the peer-review process, to increase the chance of the work being published (Baethge, 2008; Bennett & Taylor, 2003; Grieger, 2005; Grobbee & Allpress, 2016). Similarly, mutual support authorship is an attempt to inflate publication lists by placing each other's names on papers without a substantial contribution that would justify co-authorship, again mainly motivated by the pressure to publish (Claxton, 2005).

Several surveys have assessed the self-reported prevalence of authorship irregularities in biomedical publications. Undeserved or honorary authorship has been reported in the range between 11% and 60%, with differences depending on the definition of honorary authorship, the chosen cohort of articles or journals, and the period investigated (Flanagin et al., 1998; Kennedy et al., 2014; Shah et al., 2018; Slone, 1996; Varghese & Jacob, 2022; Wislar et al., 2011). A meta-analysis of 14 survey studies showed an average of 29% of researchers reporting their own or others' experience with misuse of authorship (Marusic et al., 2011). Compared to other types of scientific misconduct (plagiarism, falsification, manipulation, etc.), authorship misconduct, and especially gift authorship, was the most frequently reported form of research fraud (Dhingra & Mishra, 2014; Reisig et al., 2020). Notably, the proliferation in authorship facilitated by gift authorship primarily affected senior authors' occupancy of first and last author positions (Drenth, 1998). The pressure to publish affecting senior colleagues might even lead to coercive authorship, where individuals misuse their authority or supervisory roles to secure authorship without making appropriate contributions to the published work (Bennett & Taylor, 2003; Claxton, 2005; Strange, 2008).

While authorship irregularities are generally perceived as problematic, these practices are evaluated differently from an ethical perspective. Many scientists classify these practices as questionable, low-grade misbehavior or as happening in a grey zone. Others unmistakably rate them as abusive, damaging authorship practices and outright scientific misconduct (Anderson et al., 2007 ; Dhingra & Mishra, 2014 ; Grieger, 2005 ; Shah et al., 2018 ; Sharma & Verma, 2018 ; Strange, 2008 ), along with fabrication, falsification, and plagiarism—grievous practices that clearly deviate from accepted rules within the scientific community (Martyn, 2003 ). In an attempt to correct authorship misuse, the editors of BMJ asked all corresponding authors to sign that the ICMJE criteria were met by all authors and that no eligible contributors were excluded from the list, but this measure did not lead to any changes in authorship behavior (Smith, 1997 ). It is possible that some researchers, despite acknowledging the four ICMJE criteria individually, object to the mandate that all four must be fulfilled by every author, perceiving this condition as overly stringent (Bennett & Taylor, 2003 ; Smith, 1997 ). Therefore, some suggest reimagining authorship guidelines to promote equity and fairness in co-produced research more flexibly (Lin, 2023 ; Miles et al., 2022 ) and call for even clearer rules on crediting co-authorship, especially when it comes to gift authorship (Singh Chawla, 2020 ).

Accurate authorship assignment is essential for maintaining the integrity of biomedical science. Undeserved and unjustified authorship not only misguides various stakeholders, including journal editors, publishers, funding organizations, and those responsible for personnel decisions, but also creates an unfair advantage for certain scientists over those who adhere to authorship guidelines, and potentially disadvantages fields where smaller teams remain prevalent (An et al., 2020; Baethge, 2008). Furthermore, the increase in authorship alongside fraudulent authorship practices undermines standard academic and scientific reward structures, which depend exclusively on publication and citation counts, as it provokes scientists to "game the system" through the inflationary assignment of authorship (Greene, 2007). This exacerbates the issue in a cycle of positive reinforcement. Ultimately, as expressed by Grieger, it is science that loses out (Grieger, 2005).

Alternatives—measures against authorship proliferation

Apparently, explicit authorship guidelines from institutions like the ICMJE (ICMJE, 2022 ) do not significantly influence the authors’ (mis)conduct. So, what could possibly change the present situation? Several remedies against authorship proliferation resulting from questionable authorship practices have been proposed and partially put into practice.

Alphabetical order: Some journals recommend arranging authors alphabetically, as commonly seen in fields like economics, mathematics, and business (Fernandes & Cortez, 2020 ). However, this method eliminates the ability to discern individual contributions, and the first and last authors’ positions are merely coincidental with respect to their initials (Fernandes & Cortez, 2020 ; Pell, 2019 ). This practice has proven unappealing in biomedical science, where author position traditionally denotes contribution, leading to authors with late-alphabet surnames avoiding these journals (Bennett & Taylor, 2003 ).

Distinguishing authors from contributors: To align the number of authors per paper with their genuine contributions, it is advised to differentiate between authors and contributors more distinctly. Tasks that do not inherently meet authorship criteria, such as general supervision, advisory roles, funding acquisition, administrative support, writing assistance/proofreading, or material provision, should be acknowledged rather than included as authorship credits (Baethge, 2008; ICMJE, 2022; Lee, 2009; Strange, 2008).

Authorship agreements: The selection of authors should be a collaborative decision among co-authors, ideally made before project initiation and any practical work (Baerlocher et al., 2007 ; Bennett & Taylor, 2003 ; Claxton, 2005 ; Dotson et al., 2011 ; ICMJE, 2022 ; Sharma & Verma, 2018 ; Strange, 2008 ), possibly curbing authorship inflation during a project. To emphasize the significance and accountability of authorship, Strange recommended a written authorship agreement at the project's outset (Strange, 2008 ). However, this approach might be counterproductive, possibly favoring those who promise in advance over those who eventually deliver (Bhopal et al., 1997 ).

Corporate names (group authors): Large multi-author collectives are advised to adopt corporate names, functioning as an author entity in the byline (Grobbee & Allpress, 2016; ICMJE, 2022), while individual authors meeting the criteria are noted in footnotes on the first manuscript page or in acknowledgements, per journal preference, not in the byline (Liesegang et al., 2010). In PubMed (and MEDLINE as an integral part of it), the individual group members appear as collaborators (investigators), but not in the author list (Footnote 7). Although this strategy seems viable and fair, its effectiveness is limited because such publications are not recognized in systems like Web of Science, which greatly reduces the appeal of this approach (Grobbee & Allpress, 2016). Corporate authorship remains uncommon, representing at most 3% of annual publications in select journals between 1980 and 2000 (Weeks et al., 2004). However, this approach may gain traction over time; for instance, in neuroscience, group authorship, at 4.1%, surpassed single-authored papers in 2019/2020, indicating a potential trend (Lin & Lu, 2023).

Credit systems: Such systems aim to define author contributions clearly. For instance, Rahman et al. proposed a categorization system known as the "Author Performance Index (API)", which utilizes designations such as first, co-, principal, or corresponding author (Rahman et al., 2021). This tool is designed to offer a more objective approach to assessing contributor credit. Despite its objective aim, however, the model relies heavily on previous contributions, which do not necessarily reflect an individual's input to a current project.

Author contribution statements: Journals following the ICMJE guidelines (ICMJE, 2022) increasingly request author contribution statements that list individual roles in a project, covering tasks from study conceptualization to data analysis and manuscript writing, and including information on supervision and funding acquisition. Such statements guide editors and readers in assessing the role of each contributor (Grieger, 2005; Smith, 1997). In practice, however, these statements still leave a margin for interpretation and creativity, and the impact of such journal policy changes remains unverified by authorship trend studies. For example, Dong et al. found that leading gastroenterology journals saw a continued increase in the number of authors per publication even after implementing author contribution requirements (Dong et al., 2016).

Obviously, there are no simple solutions for the problem—generally speaking, “there are almost never technical solutions to social problems” [in analogy to problems of the peer review system (Ferguson et al., 2014 )]. Despite widespread implementation, the ICMJE criteria do not seem to have affected the authors’ conduct and all the aforementioned attempts to counteract authorship inflation have their limitations and shortcomings and have so far been of little avail.

An outlook on authorship

Guidelines and proposed measures against authorship proliferation are aimed at the researchers' ethical commitment: they are aimed at the heart of the researcher, but probably not at the heart of the problem. Authorship, while providing recognition and personal fulfillment, serves as the primary metric for evaluating researchers, influencing their promotion, tenure, salary, and funding (Grobbee & Allpress, 2016; Lutnick et al., 2021; Pell, 2019). Evaluation systems primarily relying on citation metrics unintentionally fuel unethical conduct and "gaming of the system" (Greene, 2007) through the inflationary assignment of authorship, which is extremely difficult to identify or sanction. Combined with a lack of disincentives for, e.g., honorary authorship, such behavior is an inevitable result of current evaluation systems. To escape this vicious circle, in which author proliferation erodes the unique value of authorship and perverts the system, a rethinking of outdated academic assessment systems, reward frameworks, and funding agency policies is urgently needed (Grieger, 2005; Lutnick et al., 2021). Authorship proliferation highlights the need to develop alternative metrics, beyond purely counting publications, to evaluate scientific productivity (Lutnick et al., 2021). For a paradigm shift to occur, evaluation policies that prioritize the quality of publications over sheer quantity, thereby reinforcing the ethical dedication of scientists, are required (Grieger, 2005). To restore the value of academic authorship, decision-makers on careers and funding should evaluate applicants based on the quality of, and specific contribution to, the body of work, not solely the quantity of their publication record, as emphasized by Shapiro et al. (1994). One strategy could involve applicants highlighting their most impactful publications within a designated timeframe (Drenth, 1996), as already implemented by certain funding bodies.
The approach additionally entails introducing new academic performance metrics. A credit system could be implemented for collaborators whose contributions do not meet the rigorous criteria for authorship, holding them accountable for career advancement and funding acquisition. Acknowledging the roles of technical/methodological contributors, writers, and supervisors based on their specific inputs would not only address issues of excessive authorship and overcrowded bylines but also foster mutual respect and teamwork.

Practical relevance and educational implications

Although most research institutions have established guidelines on research ethics, there is a critical need to integrate authorship-related ethical considerations into education curricula, particularly in bioethics courses (Strange, 2008). Academic institutions should incorporate courses focused on research integrity in both graduate and postgraduate programs, addressing authorship issues along with other forms of scientific misconduct such as fabrication, falsification, and plagiarism (Anderson et al., 2007; Martyn, 2003; Shah et al., 2018). Early exposure to the ICMJE recommendations can raise students' awareness and ability to recognize unethical practices (Shah et al., 2018), underscoring the necessity of including authorship guidelines and ethical discussions within existing and future Bachelor's, Master's, and Ph.D. curricula. To develop a generation of ethical scientists, current leaders must lead by example and realize that their actions have a "trickle-down effect" on those they train, as An et al. stated (An et al., 2020). Hence, senior colleagues would also profit from dedicated mentoring, training, and formal discussion on authorship, since it is primarily the senior leaders who must exhibit the courage and willingness to change the system (Lee, 2009). Moreover, institutions should reflect on institutional hierarchies and not coerce junior scientists into including non-contributing seniors, e.g., as honorary authors. Ombudspersons must be available to support junior staff members in such situations (Shah et al., 2018). Given that authorship disputes can render a paper unpublishable, institutions must establish clear protocols for resolving them (Strange, 2008).

Taken together, integrating authorship policies into scientific culture through good scientific practice (GSP) guidelines and educational initiatives may curb authorship misconduct and limit further proliferation linked to questionable practices. However, awareness and training alone might still be insufficient (Shah et al., 2018 ) if external incentive and award systems continue to rely on current values (authorship, publications, “impact”). “Even with the establishment of well-defined authorship guidelines and mechanisms for resolving and preventing problems though, authorship abuse will still occur” as realistically noted by Strange (Strange, 2008 ).

Conclusions

Examining over 17 million PubMed articles over a recent 20-year period, this study confirmed authorship proliferation in the biomedical literature. Notably, all analyzed publication types showed highly significant increases, especially multicenter studies and clinical trials, with a concurrent decline in single-authored papers.

Credible explanations for this seemingly unstoppable trend include increasing research complexity, increasingly sophisticated methodology, multidisciplinary research, larger research units, internationalization, and multicenter collaborations. Although the current study design did not allow testing the contribution of these explanations to the observed trend, the reported high frequency of authorship misconduct (honorary authorship, etc.) suggests additional drivers of authorship inflation: chiefly the increasing pressure to publish, induced by current academic performance assessments, promotion policies, and reward structures that focus mainly on quantitative citation metrics and publication counting. Against this background, this paper discusses possible approaches to limit authorship proliferation, to maintain its value, and to sustainably embed a more sensitive attitude towards the ethical aspects of authorship. Achieving successful and enduring change requires a joint effort by all stakeholders involved in scholarly communication, knowledge dissemination, and funding, including scientists, authors, contributors, journal editors, publishers, funding bodies, personnel decision makers and, finally, those whom we as authors primarily publish for: the readers.

http://www.icmje.org . As of February 2023, more than 7,900 journals state that they follow the ICMJE recommendations in their authorship policies ( http://www.icmje.org/journals-following-the-icmje-recommendations/ ).

https://www.scopus.com/home.uri

https://pubmed.ncbi.nlm.nih.gov

https://www.nlm.nih.gov/bsd/difference.html

https://dataguide.nlm.nih.gov/edirect/archive.html

https://www.researchgate.net

https://www.nlm.nih.gov/bsd/policy/authorship.html .

Abbott, B. P., Abbott, R., Abbott, T. D., Acernese, F., Ackley, K., Adams, C., Adams, T., Addesso, P., Adhikari, R. X., Adya, V. B., Affeldt, C., Afrough, M., Agarwal, B., Agathos, M., Agatsuma, K., Aggarwal, N., Aguiar, O. D., Aiello, L., Ain, A., & Woudt, P. A. (2017). Multi-messenger observations of a binary neutron star merger. The Astrophysical Journal . https://doi.org/10.3847/2041-8213/aa91c9


An, J. Y., Baiocco, J. A., & Rais-Bahrami, S. (2018). Trends in the authorship of peer reviewed publications in the urology literature. Urology Practice, 5 (3), 233–239. https://doi.org/10.1016/j.urpr.2017.03.008

An, J. Y., Marchalik, R. J., Sherrer, R. L., Baiocco, J. A., & Rais-Bahrami, S. (2020). Authorship growth in contemporary medical literature. SAGE Open Med, 8 , 2050312120915399. https://doi.org/10.1177/2050312120915399

Anderson, M. S., Horn, A. S., Risbey, K. R., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). What do mentoring and training in the responsible conduct of research have to do with scientists’ misbehavior? Findings from a National Survey of NIH-funded scientists. Academic Medicine, 82 (9), 853–860. https://doi.org/10.1097/ACM.0b013e31812f764c

Baerlocher, M. O., Newton, M., Gautam, T., Tomlinson, G., & Detsky, A. S. (2007). The meaning of author order in medical research. Journal of Investigative Medicine, 55 (4), 174–180. https://doi.org/10.2310/6650.2007.06044

Baethge, C. (2008). Publish together or perish: The increasing number of authors per article in academic journals is the consequence of a changing scientific culture. Some researchers define authorship quite loosely. Deutsches Ärzteblatt International, 105 (20), 380–383. https://doi.org/10.3238/arztebl.2008.0380

Bennett, D. M., & Taylor, D. M. (2003). Unethical practices in authorship of scientific papers. Emergency Medicine (Fremantle), 15 (3), 263–270. https://doi.org/10.1046/j.1442-2026.2003.00432.x

Bhopal, R. S., Rankin, J. M., McColl, E., Stacy, R., Pearson, P. H., Kaner, E. F., Thomas, L. H., Vernon, B. G., & Rodgers, H. (1997). Team approach to assigning authorship order is recommended. BMJ, 314 (7086), 1046–1047.


Camp, M., & Escott, B. G. (2013). Authorship proliferation in the orthopaedic literature. Journal of Bone and Joint Surgery American, 95 (7), e44. https://doi.org/10.2106/JBJS.L.00519

Changa, Y.-W., Huang, M.-H., & Chiu, M.-J. (2019). Hyperauthorship: a comparative study of genetics and high-energy physics research. Malaysian Journal of Library & Information Science, 24 (1), 23–44. https://doi.org/10.22452/mjlis.vol24no1.2

Chien, J. L., Wu, B. P., Nayer, Z., Grits, D., Rodriguez, G., Gu, A., Ghassibi, M. P., Chien, G. F., Oliveira, C., Stamper, R. L., Van Tassel, S. H., Muylaert, S., & Belyea, D. A. (2020). Trends in authorship of original scientific articles in journal of glaucoma: An analysis of 25 years since the initiation of the journal. Journal of Glaucoma, 29 (7), 561–566. https://doi.org/10.1097/IJG.0000000000001503

Chow, D. S., Ha, R., & Filippi, C. G. (2015). Increased rates of authorship in radiology publications: A bibliometric analysis of 142,576 articles published worldwide by radiologists between 1991 and 2012. American Journal of Roentgenology, 204 (1), W52-57. https://doi.org/10.2214/AJR.14.12852

Claxton, L. D. (2005). Scientific authorship. Part 2. History, recurring issues, practices, and guidelines. Mutation Research/reviews in Mutation Research, 589 (1), 31–45. https://doi.org/10.1016/j.mrrev.2004.07.002

Covidsurg Collaborative GC. (2021). SARS-CoV-2 vaccination modelling for safe surgery to save lives: Data from an international prospective cohort study. British Journal of Surgery, 108 (9), 1056–1063. https://doi.org/10.1093/bjs/znab101

Cronin, B. (1996). Research brief rates of return to citation. Journal of Documentation, 52 (2), 188–197. https://doi.org/10.1108/eb026967

Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? JASIST, 52 (7), 558–569. https://doi.org/10.1002/asi.1097

Dhingra, D., & Mishra, D. (2014). Publication misconduct among medical professionals in India. Indian Journal of Medical Ethics, 11 (2), 104–107. https://doi.org/10.20529/IJME.2014.026

Dong, Y., Wang, P., Guo, L., & Liu, H. (2016). “Listing author contribution” does not alter the author inflation in the publications in basic research in four major gastroenterology journals in 10 years. Scientometrics, 107 (3), 1501–1507. https://doi.org/10.1007/s11192-016-1923-4

Dotson, B., McManus, K. P., Zhao, J. J., & Whittaker, P. (2011). Authorship and characteristics of articles in pharmacy journals: Changes over a 20-year interval. Annals of Pharmacotherapy, 45 (3), 357–363. https://doi.org/10.1345/aph.1P610

Drenth, J. P. H. (1996). Proliferation of authors on research reports in medicine. Science and Engineering Ethics, 2 (4), 469–480. https://doi.org/10.1007/BF02583933

Drenth, J. P. (1998). Multiple authorship: The contribution of senior authors. JAMA, 280 (3), 219–221. https://doi.org/10.1001/jama.280.3.219

Durani, P., Rimouche, S., & Ross, G. (2007). How many plastic surgeons does it take to write a research article?—Authorship proliferation in and internationalisation of the plastic surgery literature. Journal of Plastic, Reconstructive & Aesthetic Surgery, 60 (8), 956–957. https://doi.org/10.1016/j.bjps.2006.08.002

Elsevier. (2023). Content coverage. Retrieved August 2, from https://www.elsevier.com/solutions/scopus/how-scopus-works/content

Epstein, R. J. (1993). Six authors in search of a citation: Villains or victims of the Vancouver convention? BMJ, 306 (6880), 765–767. https://doi.org/10.1136/bmj.306.6880.765

Ferguson, C., Marcus, A., & Oransky, I. (2014). Publishing: The peer-review scam. Nature, 515 (7528), 480–482. https://doi.org/10.1038/515480a

Fernandes, J. M., & Cortez, P. (2020). Alphabetic order of authors in scholarly publications: A bibliometric study for 27 scientific fields. Scientometrics, 125 (3), 2773–2792. https://doi.org/10.1007/s11192-020-03686-0

Flanagin, A., Carey, L. A., Fontanarosa, P. B., Phillips, S. G., Pace, B. P., Lundberg, G. D., & Rennie, D. (1998). Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. JAMA, 280 (3), 222–224. https://doi.org/10.1001/jama.280.3.222

Greene, M. (2007). The demise of the lone author. Nature, 450 (7173), 1165. https://doi.org/10.1038/4501165a

Grieger, M. C. (2005). Authorship: An ethical dilemma of science. Sao Paulo Medical Journal, 123 (5), 242–246. https://doi.org/10.1590/S1516-31802005000500008

Grobbee, D. E., & Allpress, R. (2016). On scientific authorship: Proliferation, problems and prospects. European Journal of Preventive Cardiology, 23 (8), 790–791. https://doi.org/10.1177/2047487316642383

Gu, A., Almeida, N., Cohen, J. S., Peck, K. M., & Merrell, G. A. (2017). Progression of authorship of scientific articles in the Journal of Hand Surgery, 1985–2015. The Journal of Hand Surgery . https://doi.org/10.1016/j.jhsa.2017.01.005

Hsu, A. L., Konner, M., Muttreja, A., Lee, C. H., Chien, J. L., & Irish, R. D. (2021). A comprehensive analysis of authorship trends in Skeletal Radiology since inception from 1976 to 2020. Skeletal Radiology, 50 (12), 2519–2523. https://doi.org/10.1007/s00256-021-03810-y

ICMJE. (2022). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. International Committee of Medical Journal Editors. Retrieved October 17, from https://www.icmje.org/recommendations/

Kans, J. (2022). Entrez Direct: E-utilities on the Unix Command Line. National Center of Biotechnology Information (NCBI). Retrieved February 11, from https://www.ncbi.nlm.nih.gov/books/NBK179288/

Kennedy, M. S., Barnsteiner, J., & Daly, J. (2014). Honorary and ghost authorship in nursing publications. Journal of Nursing Scholarship, 46 (6), 416–422. https://doi.org/10.1111/jnu.12093

Khachatryan, V., Sirunyan, A. M., Tumasyan, A., Adam, W., Bergauer, T., Dragicevic, M., Ero, J., Fabjan, C., Friedl, M., Fruhwirth, R., Ghete, V. M., Hammer, J., Hansel, S., Hoch, M., Hormann, N., Hrubec, J., Jeitler, M., Kasieczka, G., Kiesenhofer, W.,… Weinberg, M. (2010). First measurement of Bose-Einstein correlations in proton-proton collisions at √s = 0.9 and 2.36 TeV at the LHC. Physical Review Letters, 105 (3), 032001. https://doi.org/10.1103/PhysRevLett.105.032001

King, J. T., Jr. (2000). How many neurosurgeons does it take to write a research article? Authorship proliferation in neurosurgical research. Neurosurgery, 47 (2), 435–440. https://doi.org/10.1097/00006123-200008000-00032

Knudson, D. (2012). Twenty-year trends of authorship and sampling in applied biomechanics research. Perceptual and Motor Skills, 114 (1), 16–20. https://doi.org/10.2466/11.PMS.114.1.16-20

Kornhaber, R. A., McLean, L. M., & Baber, R. J. (2015). Ongoing ethical issues concerning authorship in biomedical journals: An integrative review. International Journal of Nanomedicine, 10 , 4837–4846. https://doi.org/10.2147/IJN.S87585

Larivière, V., Gingras, Y., Sugimoto, C. R., & Tsou, A. (2015). Team size matters: Collaboration and scientific impact since 1900. Journal of the Association for Information Science and Technology, 66 (7), 1323–1332. https://doi.org/10.1002/asi.23266

Lee, S. S. (2009). Authorship: Pride and proliferation. Liver International, 29 (4), 477. https://doi.org/10.1111/j.1478-3231.2009.01978.x

Leung, W., Shaffer, C. D., Reed, L. K., Smith, S. T., Barshop, W., Dirkes, W., Dothager, M., Lee, P., Wong, J., Xiong, D., Yuan, H., Bedard, J. E., Machone, J. F., Patterson, S. D., Price, A. L., Turner, B. A., Robic, S., Luippold, E. K., McCartha, S. R.,… Elgin, S. C. (2015). Drosophila Muller F elements maintain a distinct set of genomic properties over 40 million years of evolution. G3 (Bethesda), 5 (5), 719–740. https://doi.org/10.1534/g3.114.015966

Levsky, M. E., Rosin, A., Coon, T. P., Enslow, W. L., & Miller, M. A. (2007). A descriptive analysis of authorship within medical journals, 1995–2005. Southern Medical Journal, 100 (4), 371–375. https://doi.org/10.1097/01.smj.0000257537.51929.4b

Liesegang, T. J., Schachat, A. P., & Albert, D. M. (2010). Defining authorship for group studies. Archives of Ophthalmology, 128 (8), 1071–1072. https://doi.org/10.1001/archophthalmol.2010.159

Lin, Z. (2023). Modernizing authorship criteria: Challenges from exponential authorship inflation and generative artificial intelligence. Preprint. https://doi.org/10.31234/osf.io/s6h58

Lin, Z., & Lu, S. (2023). Exponential authorship inflation in neuroscience and psychology from the 1950s to the 2020s. American Psychologist . In press. https://doi.org/10.31234/osf.io/vfz9q

Lutnick, E., Cusano, A., Sing, D., Curry, E. J., & Li, X. (2021). Authorship proliferation of research articles in top 10 orthopaedic journals: A 70-year analysis. Journal of the American Academy of Orthopaedic Surgeons Global Research and Reviews . https://doi.org/10.5435/JAAOSGlobal-D-21-00098

Martyn, C. (2003). Fabrication, falsification and plagiarism. QJM, 96 (4), 243–244. https://doi.org/10.1093/qjmed/hcg036

Marusic, A., Bosnjak, L., & Jeroncic, A. (2011). A systematic review of research on the meaning, ethics and practices of authorship across scholarly disciplines. PLoS ONE, 6 (9), e23477. https://doi.org/10.1371/journal.pone.0023477

Miles, S., Renedo, A., & Marston, C. (2022). Reimagining authorship guidelines to promote equity in co-produced academic collaborations. Global Public Health, 17 (10), 2547–2559. https://doi.org/10.1080/17441692.2021.1971277

Modi, P., Hassan, A., Teng, C. J., & Chitwood, W. R., Jr. (2008). “How many cardiac surgeons does it take to write a research article?” Seventy years of authorship proliferation and internationalization in the cardiothoracic surgical literature. Journal of Thoracic and Cardiovascular Surgery, 136 (1), 4–6. https://doi.org/10.1016/j.jtcvs.2007.12.057

NIH–NLM. (2020). Number of authors per MEDLINE®/PubMed® citation. National Institute of Health–National Library of Medicine. Retrieved January 11, from https://www.nlm.nih.gov/bsd/authors1.html#collective

Ojerholm, E., & Swisher-McClure, S. (2015). Authorship in radiation oncology: proliferation trends over 30 years. International Journal of Radiation Oncology Biology Physics, 93 (4), 754–756. https://doi.org/10.1016/j.ijrobp.2015.07.2289

Pell, H. (2019). From lone genius to wisdom of the crowd: Hyperauthorship in high-energy and astrophysics. Retrieved January 31, from https://www.aip.org/history-programs/niels-bohr-library/ex-libris-universum/lone-genius-wisdom-crowd-hyperauthorship#_ftnref10

Pintér, A. (2013). Changing trends in authorship patterns in the JPS: Publish or perish. Journal of Pediatric Surgery, 48 (2), 412–417. https://doi.org/10.1016/j.jpedsurg.2012.10.069

Rahman, M. T., Regenstein, J. M., Abu Kassim, N. L., & Karim, M. M. (2021). Contribution based author categorization to calculate author performance index. Accountability in Research, 28 (8), 492–516. https://doi.org/10.1080/08989621.2020.1860764

Reisig, M. D., Holtfreter, K., & Berzofsky, M. E. (2020). Assessing the perceived prevalence of research fraud among faculty at research-intensive universities in the USA. Accountability in Research, 27 (7), 457–475. https://doi.org/10.1080/08989621.2020.1772060

Sayers, E. (2017). A general introduction to the E-utilities. National Center for Biotechnology Information (NCBI). Retrieved March 03, from https://www.ncbi.nlm.nih.gov/books/NBK25497/

Seetharam, A., Ali, M. T., Wang, C. Y., Schultz, K. E., Fischer, J. P., Lunsford, S., Whipple, E. C., Loder, R. T., & Kacena, M. A. (2018). Authorship trends in the Journal of Orthopaedic Research: A bibliometric analysis. Journal of Orthopaedic Research, 36 (11), 3071–3080. https://doi.org/10.1002/jor.24054

Shah, A., Rajasekaran, S., Bhat, A., & Solomon, J. M. (2018). Frequency and factors associated with honorary authorship in Indian Biomedical Journals: Analysis of papers published from 2012 to 2013. Journal of Empirical Research on Human Research Ethics, 13 (2), 187–195. https://doi.org/10.1177/1556264617751475

Shapiro, D. W., Wenger, N. S., & Shapiro, M. F. (1994). The contributions of authors to multiauthored biomedical research papers. JAMA, 271 (6), 438–442. https://doi.org/10.1001/jama.1994.03510300044036

Sharma, H., & Verma, S. (2018). Authorship in biomedical research: A sweet fruit of inspiration or a bitter fruit of trade. Tropical Parasitology, 8 (2), 62–69. https://doi.org/10.4103/tp.TP_27_18

Sheridan, G., Wisken, E., Hing, C. B., & Smith, T. O. (2018). A bibliometric analysis assessing temporal changes in publication and authorship characteristics in The Knee from 1996 to 2016. The Knee, 25 (2), 213–218. https://doi.org/10.1016/j.knee.2018.01.014

Singh Chawla, D. (2019). Hyperauthorship: Global projects spark surge in thousand-author papers. Nature . https://doi.org/10.1038/d41586-019-03862-0

Singh Chawla, D. (2020). The gift of paper authorship—researchers seek clearer rules on crediting co-authors. Retrieved July 14, from https://www.nature.com/nature-index/news/gift-ghost-authorship-what-researchers-need-to-know

Slone, R. M. (1996). Coauthors’ contributions to major papers published in the AJR: Frequency of undeserved coauthorship. American Journal of Roentgenology, 167 (3), 571–579. https://doi.org/10.2214/ajr.167.3.8751654

Smith, R. (1997). Authorship: Time for a paradigm shift? BMJ, 314 (7086), 992. https://doi.org/10.1136/bmj.314.7086.992

Strange, K. (2008). Authorship: Why not just toss a coin? American Journal of Physiology-Cell Physiology, 295 (3), C567-575. https://doi.org/10.1152/ajpcell.00208.2008

Sugrue, C. M., & Carroll, S. M. (2015). Authorship proliferation in hand surgery research: How many hand surgeons does it take to write a research article? Journal of Hand and Microsurgery, 7 (1), 108–109. https://doi.org/10.1007/s12593-015-0175-5

Thelwall, M., & Maflahi, N. (2022). Research coauthorship 1900–2020: Continuous, universal, and ongoing expansion. Quantitative Science Studies, 3 (2), 331–344. https://doi.org/10.1162/qss_a_00188

Tilak, G., Prasad, V., & Jena, A. B. (2015). Authorship inflation in medical publications. Inquiry . https://doi.org/10.1177/0046958015598311

Varghese, J., & Jacob, M. V. (2022). Gift authorship: Look the gift horse in the mouth. Indian Journal of Medical Ethics . https://doi.org/10.20529/ijme.2022.028

Weeks, W. B., Wallace, A. E., & Kimberly, B. C. (2004). Changes in authorship patterns in prestigious US medical journals. Social Science and Medicine, 59 (9), 1949–1954. https://doi.org/10.1016/j.socscimed.2004.02.029

Weinberg, A. M. (1961). Impact of large-scale science on the United States: Big science is here to stay, but we have yet to make the hard financial and educational choices it imposes. Science, 134 (3473), 161–164. https://doi.org/10.1126/science.134.3473.161

Wislar, J. S., Flanagin, A., Fontanarosa, P. B., & Deangelis, C. D. (2011). Honorary and ghost authorship in high impact biomedical journals: A cross sectional survey. BMJ, 343 , d6128. https://doi.org/10.1136/bmj.d6128

Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316 (5827), 1036–1039. https://doi.org/10.1126/science.1136099


Open access funding provided by Paracelsus Medical University.

Author information

Authors and affiliations.

Center for Physiology, Pathophysiology and Biophysics, Institute for Physiology and Pathophysiology, Paracelsus Medical University, 5020, Salzburg, Austria

Martin Jakab, Eva Kittl & Tobias Kiesslich

Department of Internal Medicine I, University Hospital Salzburg, Salzburger Landeskliniken (SALK), Paracelsus Medical University, Salzburg, Austria

Tobias Kiesslich


Contributions

Conception and design of the work: MJ 40%, EK 20%, TK 40%; acquisition, analysis of data: MJ 75%, EK 25%; interpretation of data: MJ 75%, TK 25%. Drafting and critically revising the work for important intellectual content: MJ 75%, TK 25%. Approval of the version to be published: MJ 40%, EK 20% and TK 40%. Accountability for all aspects of the work: MJ 50%, EK 10% and TK 40%.

Corresponding author

Correspondence to Martin Jakab .

Ethics declarations

Conflict of interest.

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The manuscript of this study constitutes the empirical part of the master thesis of the first author as part of his university course ‘Health Sciences and Leadership’ at the Paracelsus Medical University (submitted in May 2022).

Supplementary Information

Below is the link to the electronic supplementary material.

11192_2024_4928_MOESM1_ESM.tiff

Supplementary file 1 (TIFF 916 KB) Median author numbers per publication over time. For all 13 publication types, cumulative (Overall) and for single publication types

11192_2024_4928_MOESM2_ESM.xlsx

Supplementary file 2 (XLSX 17 KB) Regressions for mean author numbers (arithmetic mean) per publication over time. For all 13 publication types, cumulative (Overall) and for single publication types. RCT, randomized controlled trial. Best fits with confidence bands are shown along with r2 and R2 values for linear- and non-linear fits, respectively

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Jakab, M., Kittl, E. & Kiesslich, T. How many authors are (too) many? A retrospective, descriptive analysis of authorship in biomedical publications. Scientometrics 129 , 1299–1328 (2024). https://doi.org/10.1007/s11192-024-04928-1


Received : 15 February 2023

Accepted : 27 December 2023

Published : 25 January 2024

Issue Date : March 2024

DOI : https://doi.org/10.1007/s11192-024-04928-1


  • Authorship inflation
  • Hyperauthorship
  • Multiauthorship
  • Authorship proliferation



A Guide to Authorship in Research and Scholarly Publishing

Scientific and academic authorship is a critical part of a researcher’s career. However, the concept of authorship in research publication can be confusing for early career researchers, who often find it difficult to assess whether their or others’ contributions to a project are enough to warrant authorship. Today, there are more opportunities than ever to collaborate with researchers, not just across the globe but also across different disciplines and even outside academia. This rapid growth in global research collaborations has also led to an increase in the number of authors per paper. 1 For instance, a paper published on the ATLAS experiment at the Large Hadron Collider at CERN set the record for the largest author list, with over 5,000 authors. 2 Such cases act as catalysts for ongoing discussions in the research community about who should, and who shouldn’t, be credited and held accountable for published research.


So how do you define authorship in research?

The most common definition of authorship in research is the one established by the International Committee of Medical Journal Editors (ICMJE). According to ICMJE’s guidelines, to be acknowledged as an author, a researcher must meet all of the following criteria. An author must have made major contributions to the research idea or study design, or to data collection and analysis. They must have taken part in writing and revising the research manuscript and must give final approval of the version being published. Finally, an author must ensure the research is done ethically and accurately and must be willing to defend their work as needed.

According to the Committee on Publication Ethics (COPE), the best time to decide authorship in research, in terms of who should be named as authors and in what order, is before the research project is initiated. It recommends that researchers create and keep written author agreements and revisit the author list as the project evolves. 3 Consequently, any changes to authorship, whether in a researcher’s level of involvement or through the addition or exclusion of members during the project, must be approved by all involved and must be reflected in the author byline.


Understanding the difference between author and contributor roles

Given the constant increase in scholarly publishing and the continuing pressure to “publish or perish,” many researchers are choosing to participate in multi-author projects. This makes it harder to decide on authorship, as one needs to differentiate between authors, co-authors, and contributors, which often leads to confusion over accountabilities and entitlements.

  • Lead authors or first authors are those who undertake the original research and also draft and edit the research manuscript. They also play a major role in journal submission and must review and agree on the corrections submitted by all the authors.  
  • Co-authors are those who make a major contribution to and are also equally responsible for the research results; they work hand in hand with lead authors to help them create and revise the research paper for journal submission.  
  • Corresponding authors are those who sign the publishing agreement on behalf of all the authors and manage all the correspondence around the article. They are tasked with ensuring ethical guidelines are followed, author affiliations and contact details are correct, and that the authors are listed in the right order.  
  • Contributors are those who may have provided valuable resources and assistance with planning and conducting the research but may not have written or edited the research paper. While not assigned authorship in research papers, they are typically listed at the end of the article along with a precise description of each person’s contribution.  

Getting the order of listing authors right

The order of authorship in a published paper plays an important role in attributing scientific merit, probably as important to a researcher’s career as the number of papers they publish. However, the practice of assigning positions when deciding authorship differs greatly between research streams and often becomes a bone of contention among authors.

There are some common formats used to determine author listing in research. In one, authors are listed in the order of their contributions, with the senior author listed at the end; this position is typically reserved for the head of the department in which the research was carried out. This kind of listing sometimes creates angst among authors who feel the order does not reflect the significance of their contributions. Another common format lists authors alphabetically. While this might seem a more equitable solution, it has its own disadvantages: if the main author’s name begins with a letter late in the alphabet, it is likely to be overlooked when the paper is cited by others, clearly not a happy scenario for the main author.

Unfortunately, globally and across research arenas, there is still no uniform understanding or system for ordering author names on research papers, and journals do not normally step in to arbitrate such disputes. Individual authors and contributors are expected to evaluate their roles in a project and attribute authorship in keeping with set publication standards. Clearly, the responsibility falls entirely on authors to discuss and agree on the best way to list authors.

Avoiding unethical authorship in research  

Correctly conveying who is responsible for published scientific research is at the very core of scientific integrity. However, despite clearly outlined guidelines and definitions, scholarly publishing continues to be plagued by numerous ethical concerns regarding the attribution of authorship. According to The International Center for Academic Integrity (ICAI), 4 instances of unethical authorship in research papers include:

  • Changing the order of authors in an unjustified and improper way
  • Using personal authority to add someone as an author without their contributing to the work
  • Eliminating contributor names from later publications
  • Adding a name as author without the person’s consent

Authors need to be aware of and understand the nuances of ethical authorship to avoid confusion, conflict, and ill-will among co-authors and contributors. While researchers receive recognition and credit for their intellectual work, they are also held accountable for what they publish. It is important to remember that the primary responsibility of research authors is to preserve scientific integrity, which can only happen if research is conducted and documented ethically.

  • Mazzocchi F. Scientific research across and beyond disciplines: Challenges and opportunities of interdisciplinarity. EMBO Reports, June 2019. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6549017/
  • Castelvecchi, D. Physics paper sets record with more than 5,000 authors. Nature, May 2015. https://www.nature.com/articles/nature.2015.17567
  • Dance, A. Authorship: Who’s on first?. Nature, September 2012. https://www.nature.com/articles/nj7417-591a
  • Unethical Authorship; How to Avoid? Blog – Canadian Institute for Knowledge Development, February 2020. https://icndbm.cikd.ca/unethical-authorship-how-to-avoid/


Practical publication metrics for academics

Bethany A. Myers, Katherine L. Kahn


Correspondence: Katherine L. Kahn, Division of General Internal Medicine and Health Services Research, David Geffen School of Medicine, University of California, 1100 Glendon Ave, Suite 1820‐1838, Los Angeles, CA 90095, USA. Email: [email protected]


Received 2021 Feb 25; Revised 2021 Apr 9; Accepted 2021 Apr 15; Issue date 2021 Sep.

This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivs 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial, and no modifications or adaptations are made.

Research organizations are becoming more reliant on quantitative approaches to determine how to recruit and promote researchers, allocate funding, and evaluate the impact of prior allocations. Many of these quantitative metrics are based on research publications. Publication metrics are not only important for individual careers, but also affect the progress of science as a whole via their role in the funding award process. Understanding the origin and intended use of popular publication metrics can inform an evaluative strategy that balances the usefulness of publication metrics with the limitations of what they can convey about the productivity and quality of an author, a publication, or a journal. This paper serves as a brief introduction to citation networks like Google Scholar, Web of Science Core Collection, Scopus, Microsoft Academic, and Dimensions. It also explains two of the most popular publication metrics: the h‐index and the journal impact factor. The purpose of this paper is to provide practical information on using citation networks to generate publication metrics, and to discuss ideas for contextualizing and juxtaposing metrics, in order to help researchers in translational science and other disciplines document their impact in as favorable a light as may be justified.

INTRODUCTION

As the scale of global research continues to increase, research organizations are becoming more reliant on quantitative approaches to determine how to recruit and promote researchers, allocate funding, and evaluate the impact of prior allocations. It has been common practice for funders; appointment, tenure, and promotion committees; academic administrations; publishers; and others to apply a variety of quantitative metrics to rank researchers, papers, journals, and even institutions and countries. 1 , 2 , 3 Many of these quantitative metrics are based on research publications. The total number of publications can be used to infer scientific output or productivity, whereas the number of citations to those publications may be used to infer the impact of the research. In aggregate, these “publication metrics” have potential to serve both researchers and those evaluating the work of researchers. 4 For the researcher, metrics can highlight the scope and strengths of one’s work, forming a useful starting point to answer the question, “what have I done?” Researchers can use metrics to structure their review of their past work, design efficient summaries of their prior research trajectory, and inform future decision making. For an evaluator (and many researchers eventually find themselves in the position of evaluating the scientific achievements of others), understanding a researcher’s publication and citation record provides context for judging their achievements and future potential.

Publication metrics are not only important for individual careers, but also affect the progress of science as a whole via their role in the funding award process. Funders that receive many grant applications may see quantitative publication metrics as a shortcut to assess research quality and impact. Researchers competing for grants may in turn strive to achieve a perceived threshold for certain metrics. An understanding of the origin and intended use of popular publication metrics can inform an evaluative strategy that balances the usefulness of metrics with the limitations of what they can convey about the productivity and quality of an author, a paper, or a journal.

This is especially important in translational science, a discipline created to improve patient and population outcomes. Translational science researchers are iteratively called upon by peers, funders, and their institutions to use their publication records to document their progress toward these outcomes, and translational science evaluators use publication metrics in their assessments. 5 , 6 The purpose of this paper is to briefly describe the most frequently used quantitative publication metrics, provide practical information on generating metrics, and discuss ideas for contextualizing and juxtaposing metrics, in order to help researchers in translational science and other disciplines document their impact in as favorable a light as may be justified.

CITATION NETWORKS

Citation counts represent the number of times a publication has been cited by other publications. Because citation counts represent a key constituent of the most frequently used publication metrics, understanding their source is necessary to effectively utilize publication metrics. Citation counts are usually provided by citation networks or indices, which are systems that connect each publication to every publication it cites, as well as every publication that has cited it. No single citation network functions as the dominant source of citation data. Instead, several citation networks exist and vary according to which publications they include. As a consequence, citation networks also vary in the citation numbers they generate for any given publication. Citation networks include traditional indexed databases, which contain article metadata ingested from publisher sources and accessed by users via a searchable interface; academic search engines, which scrape the web for relevant content and allow users to search the content via a web interface; and metadata datasets that can only be accessed computationally (e.g., via API [Application Programming Interfaces]). However the citation data are compiled, networks are created by connecting each citation reference in a publication’s bibliography to that reference’s record in the database. For example, if paper A cites paper B, the citation network “reads” that citation and adds one to the existing total citation count of paper B. Users have a choice among multiple science citation networks. Six of the largest are described in Table  1 .
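As an illustration of this linking step, here is a toy sketch in Python. The paper identifiers and bibliographies are hypothetical, not drawn from any real network; real networks differ mainly in which publications they index, which is why the same paper can show different counts in different networks.

```python
from collections import defaultdict

# Hypothetical bibliographies: each paper maps to the papers it cites.
bibliographies = {
    "paper_A": ["paper_B", "paper_C"],  # paper A cites papers B and C
    "paper_B": ["paper_C"],
    "paper_C": [],
}

cited_by = defaultdict(list)   # reverse linkage: cited paper -> citing papers
citation_counts = defaultdict(int)

# Connect each reference in a bibliography to the cited paper's record,
# incrementing the *cited* paper's citation count.
for citing, references in bibliographies.items():
    for cited in references:
        cited_by[cited].append(citing)
        citation_counts[cited] += 1

print(citation_counts["paper_C"])  # cited by A and B -> 2
```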

Table 1. Descriptions of common citation networks

Abbreviation: API, Application Programming Interfaces.

Note that with the exception of Crossref, a not‐for‐profit 501(c)6 organization, all the “freely available” networks listed in this table are owned by for‐profit companies. The citation networks may be free to the user, but they likely generate revenue for their parent company by collecting user data from searches and academics’ profiles.

Practical applications

To select one or more citation networks, users may consider (1) the network’s coverage of publication types and areas of research, (2) its number of citation linkages between publications, (3) the user‐friendliness of its interface, and (4) its functionality when it comes to automatically generating metrics. Researchers should be aware that most citation networks are primarily comprised of journal articles. Therefore, it may be more difficult to assess the citation impact of gray literature such as white papers, reports, clinical trials, or other nontraditional publications. 7 Recent studies comparing various citation networks for accuracy and completeness may help inform the decision to choose a particular citation network. 8 , 9 , 10 , 11 , 12 Because publication metrics are derived from citation counts within citation networks, and citation counts vary depending on the network’s publication coverage, metrics derived for a given author from one network will not necessarily be concordant with metrics for the same author but derived from another network. A researcher may find that one citation network contains records for most of their publications, whereas another network may only have records for some of their publications. Although it is generally advantageous for researchers to find a network containing records for all of their publications, 8 researchers selecting among networks must also consider that networks’ bibliometric data vary according to the quality of the included data, in addition to the quantity of publications reported. For example, a researcher is likely to find more of their publications, and therefore a higher citation count and h‐index, by using Google Scholar. However, as described in Table  1 , Google Scholar may contain erroneous or duplicate records due to the way it collects publication data from the web. 
Although at first glance the researcher may think that Google Scholar offers a higher number and therefore a “better” metric, the accuracy of that metric may be questionable if it is based upon faulty bibliographic data. Precise documentation by researchers of the citation network(s) they select to inform their publication metrics provides the opportunity for their evaluators to assess the accuracy of their analyses.

If a researcher determines that metrics from multiple citation networks are useful to show context for their work, two or more different citation networks can be documented clearly to avoid confusion in interpreting their metrics. For example, a researcher working on a tenure and promotion dossier may decide to primarily use Web of Science Core Collection to search for their journal articles, and use Web of Science Core Collection’s citation counts and calculated h‐index to document their career’s published articles. This researcher may also use Google Scholar to find their gray literature publications, and decide to include the citation counts of those publications to promote their research that resulted in a white paper, report, or other nonarticle publication type. In this example, the dossier should clearly indicate that the metrics presented for the journal articles came from Web of Science Core Collection, while the metrics presented for the gray literature publications came from Google Scholar.

PUBLICATION‐LEVEL METRICS

Publication‐level (including both articles and nonarticle publications) metrics represent any quantitative number relating to an individual publication. Most commonly, this takes the form of citation counts: the number of citations to any given publication. In addition to citation counts, the number of article views and downloads are frequently listed on journal articles hosted on publisher websites. Other emerging metrics known as “alternative metrics” often seek to indicate social impact rather than solely scientific impact. 13 , 14 Although they may theoretically be applied to authors, institutions, journals, or other entities, in practice, the most prevalent implementation of alternative metrics is at the publication level. Examples include the number of times a publication has been shared on social media or blogs, the number of comments or “likes” it has received, or the number of times it has been mentioned in mass media. Due to their loosely defined and rapidly changing nature, alternative metrics are difficult to locate, although one company, Altmetric, 15 has monetized the centralization of various indicators into an “attention score.” Alternative metrics can add societal context and diversity to a research evaluation, 16 but researchers and evaluators should keep in mind that metrics reflecting public engagement may not correlate with scientific impact. 13 , 17

Citation counts for an individual publication can be generated by searching a title or Digital Object Identifier (DOI) in any of the citation networks described in Table  1 . All six citation networks display the number of citations to a particular publication on the search results page for that publication. Individual publication citation counts may be used to highlight particularly impactful citations, but a more creative approach for a researcher’s dossier might be to group publications together and write about the citation impact of the group. For example, a researcher may aggregate citations by time period (e.g., before or after getting a prior promotion or being awarded a grant), by their different research fields or subfields (e.g., clinical and basic science), or by authorship type (e.g., first vs. senior [last] author). This facilitates discussion of publication impact in context, and may be useful to assert the value of a previous grant investment, explain impact variation within different fields, or provide evidence that research leadership affected impact. Another approach for a researcher or evaluator might be to selectively use comparative metrics by comparing a single or group of publications to any of the following: other articles published in the same field, other articles published within the same journal, or other articles published by peer researchers.

One strategy for utilizing publication‐level metrics for a grouped set of publications is to use the mean number of citations per publication. This number may be higher or lower than the same author's h‐index (see below) depending on the distribution of citations within the body of work. Supplementing the mean number of citations for a large list of articles with the median and/or the standard deviation would help evaluators to understand the spread of the citation counts. Such measures of central tendency and variability could be used alongside, or instead of, direct citation counts for individual publications when presenting any of the previously discussed methods of grouping publications.
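As a sketch of this strategy, the summary statistics for grouped publications can be computed with Python's standard library. The group names and per-publication citation counts below are invented for illustration:

```python
from statistics import mean, median, stdev

# Hypothetical per-publication citation counts, grouped by research field.
citations_by_group = {
    "clinical":      [45, 12, 8, 3, 1],
    "basic_science": [120, 15, 4, 2],
}

# Report the mean alongside the median and standard deviation so that
# evaluators can see the spread, not just the central tendency.
for group, counts in citations_by_group.items():
    print(f"{group}: n={len(counts)} "
          f"mean={mean(counts):.1f} median={median(counts):.1f} "
          f"sd={stdev(counts):.1f}")
```

Note how a single highly cited paper pulls the mean well above the median; surfacing that skew is exactly what reporting both measures is meant to achieve.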

AUTHOR‐LEVEL METRICS

The h‐index 18 is the number ( N ) for an author such that at least N of the author’s publications have a minimum of N citations each. For example, imagine an author whose 10 most‐cited papers each have 10 or more citations, while their 11th most‐cited paper has fewer than 11. This author’s h‐index would be 10 (Table  2 ). The h‐index is a widely used and easily understood metric that demonstrates the citation impact across an author’s career. However, users of the h‐index should recognize several major limitations of this metric. The h‐index is consistently skewed toward researchers’ older papers, which have had more time to accumulate citations. A high h‐index is challenging to achieve for early career researchers. The h‐index also weights all authors equally regardless of authorship position, meaning it does not provide information about the relative contribution of authors. Additionally, h‐indices may be lower for researchers who have published extensively, but have only a limited number of highly cited publications compared with researchers whose papers’ citations are more evenly distributed. The h‐index is also vulnerable to extreme instances of self‐citation, or in‐group citation, which artificially inflate it. 19 , 20 Finally, and importantly, the h‐index should not be used to compare researchers across fields, as citation rates vary widely between disciplines. 21 As long as the drawbacks are understood, the h‐index can be a useful tool in an analysis comparing the total publication output of an author with the distribution of citations to their work. Numerous alternatives to the h‐index have been proposed that attempt to correct for such drawbacks, including variations on the h‐index itself, 22 , 23 , 24 , 25 the e‐index, 26 the g‐index, 27 and the m‐quotient, 18 , 28 but none have reached the popularity of the original h‐index.

Table 2. Two different patterns of the distribution of authors’ total number of publications

An h‐index can be calculated manually from a list of an author's publications’ total citation counts. Ideally, citation counts should be generated from a single citation network; citation counts collected from multiple networks should be presented separately and not joined into a single h‐index. If the list of citation counts for each paper is sorted from highest to lowest, it is simple to spot the crossover point at which the number of citations meets or exceeds the number of publications (Table  2 ). If an author does not have a list of their publications at hand, an h‐index can also be generated by searching Web of Science Core Collection, Scopus, or Google Scholar. In Web of Science and Scopus, an author’s publications can be searched by author name, affiliation, or unique identifier (such as ORCID); and an h‐index may be generated from the result set. In Google Scholar, authors will need to create a profile page and add their publications to their account to have their h‐index displayed. Researchers should be aware that Google Scholar may display duplicate records for their publications. This can cause inflated citation counts, if duplicate records are counted separately as citing papers. It can also cause the total number of citations to one publication to be split across the duplicate records for that same publication, decreasing the author’s h‐index. Researchers are encouraged to verify their publication records in Google Scholar. As with the publication‐level metrics discussed previously, it may be useful to consider multiple h‐indices for groups of publications that represent temporal, thematic, or authorship responsibility to either argue for or evaluate specific impact.
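The sort-and-crossover method described above is simple to express in code. A minimal Python sketch, with invented citation counts:

```python
def h_index(citation_counts):
    """Largest N such that at least N papers have N or more citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank    # still at or above the crossover point
        else:
            break       # crossover reached; stop
    return h

# One author's per-paper citation counts.
papers = [25, 19, 12, 10, 8, 5, 4, 4, 1, 0]
print(h_index(papers))  # -> 5: at least 5 papers have 5+ citations,
                        #    but not 6 papers with 6+ citations
```

As the text notes, citation counts from different networks should be kept separate: run this on one network's counts at a time rather than on a merged list.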

JOURNAL‐LEVEL METRICS

The most popular journal metric is the journal impact factor (JIF), 29 created by the scientometrician Eugene Garfield. The JIF is the total number of citations in a given year to the publications a journal produced over the preceding 2‐year period, divided by the number of “citable items” the journal published in that same period. The denominator is currently defined as articles, review articles, and proceedings papers, 30 whereas the numerator includes citations to all publications in a journal. The JIF is a proprietary metric owned by Clarivate Analytics, which publishes Journal Citation Reports (JCR; subscription required), a database of annually updated JIFs, journal rankings, and other journal‐level metrics. The JIF was originally designed to indicate a relationship between a journal’s publications and citations, but there have been many critiques of its evolution into a single‐number proxy for broad scientific value. 31 , 32 , 33
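In code, the JIF arithmetic is a single division: citations received divided by citable items. A minimal sketch with hypothetical numbers (a real JIF comes from Clarivate's proprietary counts, not figures like these):

```python
# Hypothetical counts for one journal in year Y:
citations_to_prior_two_years = 1500  # numerator: year-Y citations to items from Y-1 and Y-2
citable_items_prior_two_years = 400  # denominator: articles, reviews, proceedings papers

jif = citations_to_prior_two_years / citable_items_prior_two_years
print(round(jif, 2))  # -> 3.75
```

The sketch makes the gaming incentives discussed below concrete: shrinking the denominator (reclassifying citable items) or inflating the numerator (soliciting citations) both raise the JIF.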

Responsible application of JIFs requires an understanding of how the impact factor is calculated. For example, because citable items are defined to include research papers but to exclude nonresearch publication types (e.g., letters and editorials), editors may restructure their publication types in order to publish research articles in sections that were classified by Clarivate as “editorial.” Similarly, reducing the number of items in JIF denominators increases the total JIF. To keep JIFs proprietary, Clarivate does not disclose information on journals’ citable item sections, making it impossible for users to know if the metric is fair or accurate. 34 Editors may also pursue a higher impact factor via their journal’s submissions by asking submitters to include more citations to the journal in their manuscripts, or by soliciting more highly cited article types. Review articles consistently receive more citations than original research articles, 35 so editors may be incentivized to focus on secondary rather than primary publications. The pursuit of citations contributes to publication bias wherein prestigious, and aspirational, journals reject incremental or replicative research in favor of novel results, whose findings may not be reliable. 36 As with h‐indices, JIFs are also susceptible to fraudulent citations. 37 Most importantly, the impact factor of a journal is not capable of conveying the quality, scientific accuracy, or impact of any particular article published within that journal. The impact factor reflects citation patterns to the journal title as a whole, not the impact of any individual publication. Other journal metrics include Eigenfactor, 38 Scopus CiteScore, 39 SCImago Journal Rank Indicator, 40 and various modifications of the JIF itself, that may be useful for researchers desiring to explore or verify journal metrics for a particular context. 41 , 42 However, the original JIF remains, by far, the most familiar journal‐level metric.

The journal impact factor for a particular journal title can be searched via JCR. Although some individual journals may list their impact factors on their websites, it is recommended that dates and JIFs be verified via JCR. Research evaluators may not be familiar with the relative prestige of journals outside their own discipline, so researchers may use this opportunity to make a compelling presentation of the JIFs of the journals where they have published. JCR contains journal ranking data, simplifying the process of comparative analysis. Researchers can compare the JIFs of the journals in which they have published to other journals in the same field. A journal without a sky‐high impact factor may still be in the top quartile of journals within one’s field. JCR also contains historical impact factor data, which may be useful for discussion of a researcher’s decision to publish in up‐and‐coming journals.

LIMITATIONS

Some of the key limitations of citation networks and their citation counts, the h‐index, and the JIF have been discussed in the present paper. However, other metric considerations, as well as the broader concept of quantitative publication metrics as a whole, should be further studied as evaluation policies and procedures are refined. This paper is intended as an introduction to the most frequently used publication metrics in the context of research careers or grant evaluations, and not as a thorough analysis of all available metrics. Additionally, this paper seeks to present practical information on how to access and apply popular metrics and tools in the context of research evaluation. Many of the products mentioned in this paper require expensive subscriptions that may be beyond the budget of some institutions. Understanding how the “free” alternatives, which collect user data in lieu of subscriptions, compare to the major subscription databases may be helpful for researchers trying to understand their options for accessing and presenting their publication metrics. Those who wish to gain a deeper understanding of their local subscriptions, or who seek further information about scientometrics, are encouraged to contact their institution’s librarian.

The use of quantitative strategies as a proxy for the scientific productivity, impact, and quality of research publications has both strengths and limitations. 43 , 44 No metric can serve as a fully representative proxy for research quality. The research itself, which may include nonpublication outputs, must be evaluated based on scientific integrity, societal need, advancement of the field, and other potentialities that matter to the evaluators (such as emphasis on support for new or under‐represented researchers, or previously unfunded research topics). There is increasing recognition of the importance of utilizing publication metrics responsibly in research evaluation. 45 The San Francisco Declaration on Research Assessment and the Leiden Manifesto provide recommendations and principles for improving research assessment and the appropriate use of metrics. 46 , 47 Quantitative publication metrics may serve as one component of a holistic assessment. However, even when integrated into a peer‐reviewed evaluative process that also includes qualitative assessment, metrics can either overly inflate or miss the perceived “impact” of research. Nevertheless, publication metrics’ ubiquity demands that funders, authors, and the publishing industry have a solid grasp of the strengths and weaknesses of using numbers as a proxy for scientific impact. A prudent utilization of publication metrics requires a thoughtful approach that includes a realistic understanding of what individual and aggregate metrics are capable of conveying. When used as part of a larger narrative, publication metrics can provide insight into an article’s reach, a journal’s evolution, or a researcher’s career. Strategic application of metrics can empower researchers to tell a clearer and more holistic story of their work, and responsible interpretation of metrics can empower evaluators to more efficiently, fairly, and consistently determine the future of scientific funding and advancement. 
Future improvements in research evaluation strategies can incentivize Open Science and the greater dissemination of research outputs. 48 , 49 Ultimately, the considered and transparent application and interpretation of publication metrics may help address some of the social inequities in science, provide more opportunity for under‐represented researchers and research areas, improve the wellbeing of researchers caught in the burnout “publish or perish” cycle, and speed the most promising basic research to clinical and policy implementation, and improved outcomes.

CONFLICT OF INTEREST

The authors declared no competing interests for this work.

Funding information

This research was supported in part by NIH National Center for Advancing Translational Science (NCATS) UCLA CTSI Grant Number UL1TR001881.

  • 1. Research and Innovation Rankings. 2020. Accessed July 29, 2020. https://www.scimagoir.com/rankings.php
  • 2. Centre for Science and Technology Studies (CWTS). CWTS Leiden Ranking. Accessed July 29, 2020. http://www.leidenranking.com
  • 3. Nature Index ‐ Country/territory outputs ‐ 1 June 2019 ‐ 31 May 2020. https://www.natureindex.com/country‐outputs/generate/All/global/All/score
  • 4. Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Acad Emerg Med. 2014;21(10):1160‐1172. 10.1111/acem.12482 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 5. Llewellyn N, Carter DR, Rollins L, Nehl EJ. Charting the publication and citation impact of the NIH Clinical and Translational Science Awards (CTSA) program from 2006 through 2016. Acad Med. 2018;93(8):1162‐1170. 10.1097/acm.0000000000002119 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 6. Schneider M, Kane CM, Rainwater J, et al. Feasibility of common bibliometrics in evaluating translational science. J Clin Transl Sci. 2017;1(1):45‐52. 10.1017/cts.2016.8 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 7. Bonato S. Google Scholar and Scopus. J Med Libr Assoc. 2016;104(3):252‐254. 10.5195/jmla.2016.31 [ DOI ] [ Google Scholar ]
  • 8. Martín‐Martín A, Orduna‐Malea E, Thelwall M, Delgado López‐Cózar E. Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories. J Informetr. 2018;12(4):1160‐1177. 10.1016/j.joi.2018.09.002 [ DOI ] [ Google Scholar ]
  • 9. Anker MS, Hadzibegovic S, Lena A, Haverkamp W. The difference in referencing in Web of Science, Scopus, and Google Scholar. ESC Heart Failure. 2019;6(6):1291‐1312. 10.1002/ehf2.12583 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 10. Harzing A‐W. Two new kids on the block: how do Crossref and Dimensions compare with Google Scholar, Microsoft Academic, Scopus and the Web of Science? Scientometrics. 2019;120(1):341‐349. 10.1007/s11192-019-03114-y [ DOI ] [ Google Scholar ]
  • 11. Thelwall M. Microsoft Academic: a multidisciplinary comparison of citation counts with Scopus and Mendeley for 29 journals. J Informetr. 2017;11(4):1201‐1212. 10.1016/j.joi.2017.10.006 [ DOI ] [ Google Scholar ]
  • 12. van Eck NJ, Waltman L, Larivière V, Sugimoto C. Crossref as a new source of citation data: a comparison with Web of Science and Scopus. CWTS. Accessed July 20, 2020. https://www.cwts.nl/blog?article=n‐r2s234
  • 13. Bornmann L. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. J Informetr. 2014;8(4):895‐903. 10.1016/j.joi.2014.09.005 [ DOI ] [ Google Scholar ]
  • 14. Bornmann L, Haunschild R. Alternative article‐level metrics. EMBO Rep. 2018;19(12):e47260. 10.15252/embr.201847260 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 15. The donut and Altmetric Attention Score. Altmetric. Published July 9, 2015. Accessed July 20, 2020. https://www.altmetric.com/about‐our‐data/the‐donut‐and‐score/
  • 16. Piwowar H, Priem J. The power of altmetrics on a CV. Bull Am Soc Inform Sci Technol. 2013;39(4):10‐13. 10.1002/bult.2013.1720390405 [ DOI ] [ Google Scholar ]
  • 17. Warren HR, Raison N, Dasgupta P. The rise of altmetrics. JAMA. 2017;317(2):131‐132. 10.1001/jama.2016.18346 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 18. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569‐16572. 10.1073/pnas.0507655102 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 19. Noorden RV, Chawla DS. Hundreds of extreme self‐citing scientists revealed in new database. Nature. 2019;572(7771):578‐579. https://www.nature.com/articles/d41586-019-02479-7 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 20. Bartneck C, Kokkelmans S. Detecting h‐index manipulation through self‐citation analysis. Scientometrics. 2010;87(1):85‐98. 10.1007/s11192-010-0306-5 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 21. Marx W, Bornmann L. On the causes of subject‐specific citation rates in Web of Science. Scientometrics. 2015;102(2):1823‐1827. 10.1007/s11192-014-1499-9 [ DOI ] [ Google Scholar ]
  • 22. Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006;68(1):179‐189. 10.1007/s11192-006-0090-4 [ DOI ] [ Google Scholar ]
  • 23. Sidiropoulos A, Katsaros D, Manolopoulos Y. Generalized h‐index for disclosing latent facts in citation networks. arXiv:cs/0607066. Published online July 13, 2006. Accessed July 21, 2020. http://arxiv.org/abs/cs/0607066
  • 24. Bornmann L, Mutz R, Daniel H‐D. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. J Am Soc Inform Sci Technol. 2008;59(5):830‐837. 10.1002/asi.20806 [ DOI ] [ Google Scholar ]
  • 25. Post A, Li AY, Dai JB, et al. c‐index and subindices of the h‐index: new variants of the h‐index to account for variations in author contribution. Cureus. 10(5):e2629. 10.7759/cureus.2629 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 26. Zhang C‐T. The e‐Index, complementing the h‐index for excess citations. PLoS One. 2009;4(5):e5429. 10.1371/journal.pone.0005429 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 27. Egghe L. An improvement of the h‐index: the g‐index. https://www.researchgate.net/publication/242393078_An_improvement_of_the_H‐index_The_G‐index
  • 28. Harzing A‐W. Reflections on the h‐index. Harzing.com. Published February 2016. Accessed July 21, 2020. https://harzing.com/publications/white‐papers/reflections‐on‐the‐h‐index
  • 29. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90. 10.1001/jama.295.1.90 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 30. About Citable Items. Accessed July 21, 2020. http://help.incites.clarivate.com/incitesLiveJCR/9607‐TRS/version/17
  • 31. Larivière V, Sugimoto CR. The journal impact factor: a brief history, critique, and discussion of adverse effects. In: Glänzel W, Moed HF, Schmoch U, Thelwall M, eds. Springer Handbook of Science and Technology Indicators. Cham, Switzerland: Springer Handbooks; 2019:3‐24. 10.1007/978-3-030-02511-3 [ DOI ] [ Google Scholar ]
  • 32. Neuberger J, Counsell C. Impact factors: uses and abuses. Eur J Gastro Hepatol. 2002;14(3):209‐211. https://journals.lww.com/eurojgh/fulltext/2002/03000/impact_factors__uses_and_abuses.1.aspx [ DOI ] [ PubMed ] [ Google Scholar ]
  • 33. Teixeira da Silva JA. The Journal Impact Factor (JIF): science publishing’s miscalculating metric. Acad Quest. 2017;30(4):433‐441. 10.1007/s12129-017-9671-3 [ DOI ] [ Google Scholar ]
  • 34. Davis, P. Citable items: the contested impact factor denominator. The Scholarly Kitchen. Published February 10, 2016. Accessed July 21, 2020. https://scholarlykitchen.sspnet.org/2016/02/10/citable‐items‐the‐contested‐impact‐factor‐denominator/
  • 35. Lei L, Sun Y. Should highly cited items be excluded in impact factor calculation? The effect of review articles on journal impact factor. Scientometrics. 2020;122(3):1697‐1706. 10.1007/s11192-019-03338-y [ DOI ] [ Google Scholar ]
  • 36. Brembs B. Prestigious science journals struggle to reach even average reliability. Front Hum Neurosci. 2018;12:37. 10.3389/fnhum.2018.00037 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 37. Davis, P. Visualizing citation cartels. The Scholarly Kitchen. Published September 26, 2016. Accessed July 21, 2020. https://scholarlykitchen.sspnet.org/2016/09/26/visualizing‐citation‐cartels/
  • 38. Eigenfactor: About. Accessed July 21, 2020. http://www.eigenfactor.org/about.php
  • 39. Metrics ‐ How Scopus Works ‐ Scopus ‐ Solutions | Elsevier. Accessed July 21, 2020. https://www.elsevier.com/solutions/scopus/how‐scopus‐works/metrics#Journal
  • 40. SJR : Scientific Journal Rankings . Accessed July 21, 2020. https://www.scimagojr.com/journalrank.php
  • 41. Kianifar H, Sadeghi R, Zarifmahmoudi L. Comparison between impact factor, eigenfactor metrics, and SCimago journal rank indicator of pediatric neurology journals. Acta Inform Med. 2014;22(2):103‐106. 10.5455/aim.2014.22.103-106 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 42. Yuen J. Comparison of impact factor, eigenfactor metrics, and SCImago journal rank indicator and h‐index for neurosurgical and spinal surgical journals. World Neurosurg. 2018;119:e328‐e337. 10.1016/j.wneu.2018.07.144 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 43. Aksnes DW, Langfeldt L, Wouters P. Citations, citation indicators, and research quality: an overview of basic concepts and theories. SAGE Open. 2019;9(1):2158244019829575. https://doi.org/10.1177%2F2158244019829575 [ Google Scholar ]
  • 44. Chapman CA, Bicca‐Marques JC, Calvignac‐Spencer S, et al. Games academics play and their consequences: how authorship, h‐index and journal impact factors are shaping the future of academia. Proc Royal Soc B Biol Sci. 1916;2019(286):20192047. 10.1098/rspb.2019.2047 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 45. Gadd E. Influencing the changing world of research evaluation. Insights. 2019;32(1):6. 10.1629/uksg.491 [ DOI ] [ Google Scholar ]
  • 46. San Francisco Declaration on Research Assessment. DORA. Accessed April 4, 2021. https://sfdora.org/read/
  • 47. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: the Leiden Manifesto for research metrics. Nature News. 2015;520(7548):429. https://www.nature.com/news/bibliometrics‐the‐leiden‐manifesto‐for‐research‐metrics‐1.17351 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 48. Working Group on Rewards Under Open Science . Evaluation of Research Careers Fully Acknowledging Open Science Practices : Rewards, Incentives and/or Recognition for Researchers Practicing Open Science [O'Carroll C, Rentier B, Cabello Valdes C, Esposito F, Kaunismaa E, Maas K, Metcalfe J, McAllister D, Vandevelde K, eds]. Luxembourg, Europe: Publications Office of the European Union; 2017. Accessed April 4, 2021. https://op.europa.eu/s/pbVp [ Google Scholar ]
  • 49. Morais R, Borrell‐Damián L. Open Access in European Universities: Results from the 2016/2017 EUA Institutional Survey. Brussels, Belgium: European University Association; 2018. Accessed April 4, 2021. https://eua.eu/resources/publications/324:open‐access‐in‐european‐universities‐results‐from‐the‐2016‐2017‐eua‐institutional‐survey.html [ Google Scholar ]
  • 50. Web of Science Core Collection. Web of Science Group . Accessed July 16, 2020. https://clarivate.com/webofsciencegroup/solutions/web‐of‐science‐core‐collection/
  • 51. About Scopus ‐ Abstract and citation database | Elsevier. Accessed July 20, 2020. https://www.elsevier.com/solutions/scopus
  • 52. About Google Scholar. Accessed July 20, 2020. https://scholar.google.com/intl/en/scholar/about.html
  • 53. Gusenbauer M. Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases. Scientometrics. 2019;118(1):177‐214. 10.1007/s11192-018-2958-5 [ DOI ] [ Google Scholar ]
  • 54. Heibi I, Peroni S, Shotton D. Software review: COCI, the OpenCitations Index of Crossref open DOI‐to‐DOI citations. Scientometrics. 2019;121(2):1213‐1228. 10.1007/s11192-019-03217-6 [ DOI ] [ Google Scholar ]
  • 55. OpenCitations Indexes Search Interface . Accessed July 20, 2020. https://opencitations.net/index/search
  • 56. Microsoft Academic . Microsoft Research. Accessed July 20, 2020. https://www.microsoft.com/en‐us/research/project/academic/
  • 57. Harzing A‐W. Microsoft Academic: is the phoenix getting wings? Scientometrics. 2017;110:371‐383. 10.1007/s11192-016-2185-x [ DOI ] [ Google Scholar ]
  • 58. Why did we build Dimensions. Dimensions. Accessed July 20, 2020. https://www.dimensions.ai/why‐dimensions/
  • 59. Hook DW, Porter SJ, Herzog C. Dimensions: building context for search and evaluation. Front Res Metr Anal. 2018;3:1–11. 10.3389/frma.2018.00023 [ DOI ] [ Google Scholar ]

12 September 2018
Thousands of scientists publish a paper every five days

  • John P. A. Ioannidis,
  • Richard Klavans &
  • Kevin W. Boyack

John P. A. Ioannidis is a professor of medicine at the Meta-Research Innovation Center at Stanford (METRICS), Stanford University, California. Richard Klavans and Kevin W. Boyack are researchers at SciTech Strategies in Philadelphia, Pennsylvania, and New Mexico.

Illustration by David Parkins

Authorship is the coin of scholarship — and some researchers are minting a lot. We searched Scopus for authors who had published more than 72 papers (the equivalent of one paper every 5 days) in any one calendar year between 2000 and 2016, a figure that many would consider implausibly prolific [1]. We found more than 9,000 individuals, and made every effort to count only ‘full papers’ — articles, conference papers, substantive comments and reviews — not editorials, letters to the editor and the like. We hoped that this could be a useful exercise in understanding what scientific authorship means.

Nature 561, 167–169 (2018)

doi: https://doi.org/10.1038/d41586-018-06185-8

References

  1. Wager, E., Singhvi, S. & Kleinert, S. PeerJ 3, e1154 (2015).
  2. Quan, W., Chen, B. & Shu, F. Preprint at https://arxiv.org/abs/1707.01162 (2017).
  3. Hvistendahl, M. Science 342, 1035–1039 (2013).
  4. Nature 483, 246 (2012).
  5. Abritis, A., McCook, A. & Retraction Watch. Science 357, 541 (2017).
  6. Patience, G. S., Galli, F., Patience, P. A. & Boffito, D. C. Preprint at https://doi.org/10.1101/323519 (2018).
  7. Drenth, J. J. Am. Med. Assoc. 280, 219–221 (1998).
  8. Sauermann, H. & Haeussler, C. Sci. Adv. 3, e1700404 (2017).
  9. Kim, S. K. PLoS One 13, e0200785 (2018).
  10. Papatheodorou, S. I., Trikalinos, T. A. & Ioannidis, J. P. J. Clin. Epidemiol. 61, 546–551 (2008).


How to Order and Format Author Names in Scientific Papers

David Costello

As the world becomes more interconnected, the production of knowledge increasingly relies on collaboration. Scientific papers, the primary medium through which researchers communicate their findings, often feature multiple authors. However, authorship isn't merely a reflection of those who contributed to a study but often denotes prestige, recognition, and responsibility. In academic papers, the order of authors is not arbitrary. It can symbolize the level of contribution and the role played by each author in the research process. Deciding on the author order can sometimes be a complex and sensitive issue, making it crucial to understand the different roles and conventions of authorship in scientific research. This article will explore the various types of authors found in scientific papers, guide you on how to correctly order and format author names, and offer insights to help you navigate this critical aspect of academic publishing.

The first author

The first author listed in a scientific paper is typically the person who has made the most substantial intellectual contribution to the work. This role is often filled by a junior researcher such as a Ph.D. student or postdoctoral fellow, who has been intimately involved in almost every aspect of the project.

The first author usually plays a pivotal role in designing and implementing the research, including the formation of hypotheses, experimental design, data collection, data analysis, and interpretation of the findings. They also commonly take the lead in manuscript preparation, writing substantial portions of the paper, including the often-challenging task of turning raw data into a compelling narrative.

In academia, first authorship is a significant achievement, a clear demonstration of a researcher's capabilities and dedication. It indicates that the researcher possesses the skills and tenacity to carry a project from inception to completion. This position can dramatically impact a researcher's career trajectory, playing a critical role in evaluations for promotions, grants, and future academic positions.

However, being the first author is not just about prestige or professional advancement. It carries a weight of responsibility. The first author is generally expected to ensure the integrity and accuracy of the data presented in the paper. They are often the person who responds to reviewers' comments during the peer-review process and makes necessary revisions to the manuscript.

Also, as the first author, it is typically their duty to address any questions or critiques that may arise post-publication, often having to defend the work publicly, even years after publication.

Thus, first authorship is a role that offers significant rewards but also requires a strong commitment to uphold the principles of scientific integrity and transparency. While it's a coveted position that can be a stepping stone to career progression, the associated responsibilities and expectations mean that it should not be undertaken lightly.

The middle authors

The middle authors listed on a scientific paper occupy an essential, albeit sometimes ambiguous, role in the research project. They are typically those who have made significant contributions to the project, but not to the extent of the first author. This group often includes a mix of junior and senior researchers who have provided key input, assistance, or resources to the project.

The roles of middle authors can be quite diverse. Some might be involved in specific aspects of data collection or analysis. Others may bring specialized knowledge or technical skills essential to the project, providing expertise in a particular methodology, statistical analysis, or experimental technique. There might also be middle authors who have contributed vital resources to the project, such as unique reagents or access to a particular patient population.

In some fields, the order of middle authors reflects the degree of their contribution. The closer a middle author is to the first position, the greater their involvement, with the second author often having made the next largest contribution after the first author. This order may be negotiated among the authors, requiring clear communication and consensus.

However, in other disciplines, particularly those where large collaborative projects are common, the order of middle authors may not necessarily reflect their level of contribution. In such cases, authors might be listed alphabetically, or by some other agreed-upon convention. Therefore, it's crucial to be aware of the norms in your specific field when deciding the order of middle authors.

Being a middle author in a scientific paper carries less prestige and responsibility than being a first or last author, but it is by no means a minor role. Middle authors play a crucial part in the scientific endeavor, contributing essential expertise and resources. They are integral members of the research team whose collective efforts underpin the progress and achievements of the project. Without their diverse contributions, the scope and impact of scientific research would be significantly diminished.

The last author

In the listing of authors on a scientific paper, the final position carries a unique significance. It is typically occupied by the senior researcher, often the head of the laboratory or the principal investigator who has supervised the project. While they might not be involved in the day-to-day aspects of the work, they provide overarching guidance, mentorship, and often the resources necessary for the project's fruition.

The last author's role is multidimensional, often balancing the responsibilities of project management, funding acquisition, and mentorship. They guide the research's direction, help troubleshoot problems, and provide intellectual input to the project's design and interpretation of results. Additionally, they usually play a key role in the drafting and revision of the manuscript, providing critical feedback and shaping the narrative.

In academia, the last author position is a symbol of leadership and scientific maturity. It indicates that the researcher has progressed from being a hands-on contributor to someone who can guide a team, secure funding, and deliver significant research projects. Being the last author can have substantial implications for a researcher's career, signaling their ability to oversee successful projects and mentor the next generation of scientists.

However, along with prestige comes significant responsibility. The last author is often seen as the guarantor of the work. They are held accountable for the overall integrity of the study, and in cases where errors or issues arise, they are expected to take the lead in addressing them.

The convention of the last author as the senior researcher is common in many scientific disciplines, especially in the life and biomedical sciences. However, it's important to note that this is not a universal standard. In some fields, authors may be listed purely in the order of contribution or alphabetically. Therefore, an understanding of the specific norms and expectations of your scientific field is essential when considering author order.

In sum, the position of the last author, much like that of the first author, holds both honor and responsibility, reflecting a leadership role that goes beyond mere intellectual contribution to include mentorship, management, and accountability.

Formatting author names

When it comes to scientific publishing, details matter, and one such detail is the correct formatting of author names. While it may seem like a minor concern compared to the intellectual challenges of research, the proper formatting of author names is crucial for several reasons. It ensures correct attribution of work, facilitates accurate citation, and helps avoid confusion among researchers in the same field. This section will delve deeper into the conventions for formatting author names, offering guidance to ensure clarity and consistency in your scientific papers.

Typically, each author's full first name, middle initial(s), and last name are listed. It's crucial that the author's name is presented consistently across all their publications to ensure their work is correctly attributed and easily discoverable.

Here is a basic example following a common convention:

  • Western convention: John D. Smith

However, conventions can vary depending on cultural naming practices. In many Western cultures, the first name is the given name, followed by the middle initial(s), and then the family name. On the other hand, in many East Asian cultures, the family name is listed first.

Here is an example following this convention:

  • East Asian convention: Wang Xiaolong (family name first)

When there are multiple authors, their names are separated by commas. The word "and" usually precedes the final author's name.

Here's how this would look:

  • John D. Smith, Jane A. Doe, and Richard K. Jones
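For illustration only, the comma-plus-final-"and" convention above can be sketched as a small Python helper. The function name `format_author_list` is hypothetical, not part of any journal's tooling, and it follows the serial-comma style shown in the example:

```python
def format_author_list(authors):
    """Join author names byline-style: commas between names,
    with "and" before the final author (serial comma for 3+)."""
    if not authors:
        return ""
    if len(authors) == 1:
        return authors[0]
    if len(authors) == 2:
        return f"{authors[0]} and {authors[1]}"
    # Three or more: comma-separate, with ", and" before the last name
    return ", ".join(authors[:-1]) + ", and " + authors[-1]

print(format_author_list(["John D. Smith", "Jane A. Doe", "Richard K. Jones"]))
# John D. Smith, Jane A. Doe, and Richard K. Jones
```

Note that some venues omit the serial comma or drop "and" entirely, so any such helper should be adapted to the target journal's guidelines.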

However, author name formatting can differ among journals. Some may require initials instead of full first names, or they might have specific guidelines for handling hyphenated surnames or surnames with particles (e.g., "de," "van," "bin"). Therefore, it's always important to check the specific submission guidelines of the journal to which you're submitting your paper.
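As a sketch of the full-name-to-initials conversion some journals request, here is a deliberately naive Python example. It assumes Western given-middle-family order and a single-token surname; surnames with particles ("de", "van") or hyphens, and family-name-first names, would need special handling that this does not attempt:

```python
def to_initials(full_name):
    """Convert "John David Smith" to "J.D. Smith".
    Naive: treats the last whitespace-separated token as the surname
    and ignores particles such as "de" or "van"."""
    parts = full_name.split()
    # Take the first letter of every token except the surname
    initials = "".join(p[0] + "." for p in parts[:-1])
    return f"{initials} {parts[-1]}"

print(to_initials("John David Smith"))
# J.D. Smith
```

In practice, reference managers and journal submission systems perform this conversion from structured given/family name fields rather than from a display string, precisely because string parsing breaks on the edge cases above.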

Moreover, the formatting should respect each author's preferred presentation of their name, especially if it deviates from conventional Western naming patterns. As the scientific community becomes increasingly diverse and global, it's essential to ensure that each author's identity is accurately represented.

In conclusion, the proper formatting of author names is a vital detail in scientific publishing, ensuring correct attribution and respect for each author's identity. It may seem a minor point in the grand scheme of a research project, but getting it right is an essential part of good academic practice.

The concept of authorship in scientific papers goes well beyond just listing the names of those involved in a research project. It carries critical implications for recognition, responsibility, and career progression, reflecting a complex nexus of contribution, collaboration, and intellectual leadership. Understanding the different roles, correctly ordering the authors, and appropriately formatting the names are essential elements of academic practice that ensure the rightful attribution of credit and uphold the integrity of scientific research.

Navigating the terrain of authorship involves managing both objective and subjective elements, spanning from the universally acknowledged conventions to the nuances particular to different scientific disciplines. Whether it's acknowledging the pivotal role of the first author who carried the project from the ground up, recognizing the valuable contributions of middle authors who provided key expertise, or highlighting the mentorship and leadership role of the last author, each position is an integral piece in the mosaic of scientific authorship.

Furthermore, beyond the order of authors, the meticulous task of correctly formatting the author names should not be underestimated. This practice is an exercise in precision, respect for individual identity, and acknowledgement of cultural diversity, reflecting the global and inclusive nature of contemporary scientific research.

As scientific exploration continues to move forward as a collective endeavor, clear and equitable authorship practices will remain crucial. These practices serve not only to ensure that credit is assigned where it's due but also to foster an environment of respect and transparency. Therefore, each member of the scientific community, from fledgling researchers to seasoned scientists, would do well to master the art and science of authorship in academic publishing. After all, it is through this collective recognition and collaboration that we continue to expand the frontiers of knowledge.

Header image by Jon Tyson .
