
Small research teams ‘disrupt’ science more radically than large ones

The current infatuation with large-scale scientific collaborations and the energy they can bring to a scientific domain owes much to the robust correlation between citation impact and team size. This relationship has been well documented in the emerging ‘science of science’ field¹. Writing in Nature, Wu et al.² use a new citation-based index to add nuance to this conventional wisdom. They find that small and large teams differ in a measurable and systematic way in the extent of the ‘disruption’ they cause to the scientific area to which they contribute.

Scientists have long had a love–hate relationship with citation metrics. When it comes to recognizing and promoting individuals (or even teams), why would researchers ever rely on proxies of questionable validity, rather than engage with the scientific insights proposed in a paper or by a particular scientist? And yet, precisely because they encode the recognition of one’s peers, citations occupy a central place in the complex web of institutions and norms that allow for the smooth functioning of the scientific enterprise.

But what is the meaning of a citation? Scientists cite previous work for many reasons. Sometimes their purpose is to acknowledge an intellectual debt. More rarely, it is to criticize the work that came before them³. Citation behaviour can also reflect strategic considerations, such as currying favour with referees or editors, or status-based considerations, as when an author cites well-known authorities in the field without engaging with the substantive content of their work. Moreover, citation counts are obviously affected by field size and cross-domain citation norms, which makes it difficult to compare scientists across fields or subfields.

Some new citation metrics have been proposed since the turn of the century, such as the h index⁴ and the Relative Citation Ratio⁵, but these alternatives have their own drawbacks. The h index is defined only for authors, not individual papers, and understates the impact of an author’s most highly cited work. The Relative Citation Ratio normalizes an article’s citations by a measure of ‘expected citations’ given the article’s field, but determining to which field an article belongs can be a subjective decision.
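To see why the h index behaves this way, note that it is the largest number h for which an author has h papers with at least h citations each; citations beyond that threshold simply do not count. A minimal sketch of the computation follows (the plain list of per-paper citation counts is an illustrative input format, not one prescribed by ref. 4):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each.

    `citation_counts` is an illustrative input: one integer per paper.
    """
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# A paper with 1,000 citations moves h no more than one with 10:
# both [1000, 8, 5, 4, 3] and [10, 8, 5, 4, 3] give h = 4.
```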

In this context, the article by Wu and colleagues comes as a breath of fresh air. The authors describe and validate a citation-based index of ‘disruptiveness’ that had previously been proposed for patents⁶. The intuition behind the index is straightforward: when the papers that cite a given article also reference a substantial proportion of that article’s references, the article can be seen as consolidating its scientific domain. When the converse is true — that is, when the papers citing the article do not also acknowledge the article’s own intellectual forebears — the article can be seen as disrupting its domain.
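Concretely, for a focal article, later papers fall into three groups: n_i cite the article but none of its references; n_j cite the article and at least one of its references; and n_k cite at least one of the references while bypassing the article. The index is D = (n_i − n_j)/(n_i + n_j + n_k), which runs from −1 (fully consolidating) to +1 (fully disruptive). Here is a minimal sketch of that tally; the set-based inputs are assumed purely for illustration and are not the authors’ data structures:

```python
def disruption_index(focal_id, focal_refs, later_papers):
    """Sketch of the disruptiveness index of refs 2 and 6.

    Illustrative inputs (assumed for this sketch):
    focal_id     -- identifier of the focal article
    focal_refs   -- set of works the focal article cites
    later_papers -- iterable of reference sets, one per subsequent
                    paper citing the focal article and/or its references
    """
    n_i = n_j = n_k = 0
    for refs in later_papers:
        cites_focal = focal_id in refs
        cites_forebears = bool(refs & focal_refs)
        if cites_focal and not cites_forebears:
            n_i += 1  # builds on the article while bypassing its references
        elif cites_focal and cites_forebears:
            n_j += 1  # cites the article together with its forebears
        elif cites_forebears:
            n_k += 1  # cites the forebears but ignores the article
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0
```

A review article that is always cited alongside the literature it summarizes drives n_j up and pushes D towards −1; a paper whose successors cite it without citing its sources drives n_i up and pushes D towards +1.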

The disruptiveness index reflects a characteristic of the article’s underlying content that is clearly distinguishable from impact as conventionally captured by overall citation counts. For instance, the index finds that papers that directly contribute to Nobel prizes tend to exhibit high levels of disruptiveness, whereas, at the other extreme, review articles tend to consolidate their fields.

Armed with this new measure, Wu et al. document a robust and striking empirical fact: the type of work performed by large teams and small teams differs markedly, with small teams being much more likely than large teams to publish disruptive articles (Fig. 1). This finding holds for articles, patents and computer-code snippets deposited on the web-based hosting service GitHub. It holds across all quantiles of the citation distribution. In the case of articles, it also holds across scientific disciplines, from biology to the physical sciences, as well as the social sciences.

Figure 1 | Small teams make more-disruptive contributions to science than do large teams. Wu et al.² show that median citations to scientific articles (red curve) increase with team size, whereas articles’ average disruption percentile (blue curve), as measured using a citation-based index⁶, decreases as team size increases. This analysis is based on 24,174,022 research articles published in 1954–2014 and indexed in the Web of Science database. Similar associations were seen for patents and software-code snippets (not shown). (Adapted from ref. 2.)

A sceptic could object that large and small teams might differ in unobserved ways that are correlated with disruptive potential. In particular, scientists who prefer to work in small teams might be predisposed to upset the intellectual apple cart in their domains. Strikingly, however, the relationship documented by Wu et al. also holds within the corpus of work of individual scientists. The authors’ analysis of approximately 38 million name-disambiguated scholars and their published works shows that the same individual scientists take part in more-consolidating projects when they operate in large teams than when they work in small teams.

These results are important in three respects. First, they provide us with a new, validated metric with which to evaluate the impact of policies or interventions that might affect the rate and direction of scientific progress, such as new funding mechanisms.

Second, they are a corrective to the zeitgeist that tends to view collaborations — across laboratories and especially across disciplines — as an inexorable trend that science funders should embrace and celebrate. Wu et al. invite us to recognize that sustained scientific progress requires both radical and incremental contributions, and that the investigations that lead to these contributions are probably better carried out by different types of team.

Third, the results show that researchers need not choose between a slavish devotion to citation metrics and ignoring citation data altogether. Rather, scientists should support the development of more-informative metrics and be careful about how these are interpreted and used.

As is the case with any new metric, the disruptiveness index should not be embraced uncritically. Because it relies on citations to articles, it can be calculated only after enough time has passed since publication for citations to accumulate. This limits the applicability of the index in areas in which citations build up slowly, or its use as a tool for evaluating the impact of recent policies. Moreover, Wu and colleagues’ article leaves open the question of mechanisms: why would small teams be more likely to perform disruptive work? How much overlap is there between the skills, backgrounds and experience of the members of small teams and those of large teams? Are differences in talent between collaborators more or less pronounced in small scientific teams than in large collaborations? These questions await further examination.