Scientometrics is a discipline that studies the evolution of science through the statistical processing of scientific information (the number of scientific articles published in a given period, citation counts, and so on).
Scientometrics has been used for some 50 years as a basis for assessing the performance and funding of institutions, teams, and individuals.
And what has scientometric analysis of modern science shown? Analysis of millions of publications has shown that as the number of participants in a group grows, the group's ability to find interesting new ideas in science and technology decreases, while its ability to develop and refine standard theories grows. The smaller the group, the more likely it is to challenge standard theories. However, this holds only if the group is not funded by sponsors; in the presence of sponsors, groups invariably stick to developing standard theories.
Interestingly, the media nevertheless promote the idea that the future belongs only to large teams, and that individuals and small groups are not able to make a major contribution to the progress of science.
Comparing all these metrics with the number of authors of the publications studied, the scientists saw that large research teams tend to produce articles, patents, and software that gain immediate impact, but the disruptive potential of these products diminishes monotonically with each new team member added. For example, as teams grow from 1 member to 50, the disruptiveness of their articles, patents, and software falls by 70%, 30%, and 50%, respectively.
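Disruptiveness in this line of research is usually quantified with the disruption (CD) index of Funk and Owen-Smith, which scores how much a work redirects later citations away from its own references. A minimal sketch of that index, using hypothetical citation data (the function name and example papers are illustrative, not taken from the study), might look like:

```python
# Sketch of the disruption (CD) index: a work is disruptive when later papers
# cite it INSTEAD of its references, and consolidating when later papers cite
# it TOGETHER with its references. All data below is hypothetical.

def disruption_index(citers_of_focal, citers_of_refs):
    """CD = (n_i - n_j) / (n_i + n_j + n_k), where
    n_i = papers citing the focal work but none of its references,
    n_j = papers citing both the focal work and its references,
    n_k = papers citing the references but not the focal work.
    Ranges from -1 (fully consolidating) to +1 (fully disruptive)."""
    focal = set(citers_of_focal)
    refs = set(citers_of_refs)
    n_i = len(focal - refs)
    n_j = len(focal & refs)
    n_k = len(refs - focal)
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Hypothetical example: A and B cite only the focal work, C cites both
# the focal work and its references, D cites only the references.
print(disruption_index(["A", "B", "C"], ["C", "D"]))  # (2 - 1) / 4 = 0.25
```

The sign of the index captures the article's distinction between disruptive and consolidating work: positive values mean later authors treat the work as a new starting point, negative values mean it is cited alongside the older work it builds on.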
The titles of disruptive papers more often contain the words “enter”, “measure”, “change”, and “move forward”, while the titles of conservative papers more often use “approve”, “confirm”, “demonstrate”, “theory”, and “model”.
The same results were obtained when the scientists considered only the most disruptive and significant works. Single authors are just as influential (top 5% by citations) as five-member teams, but their articles are 72% more likely to be highly disruptive (top 5% by disruptiveness). Papers by teams of 10 people are 50% more likely to be high-impact, but their chances of being disruptive are much lower. Repeating the study on patents and software, the authors obtained the same results.
Further, the scientists asked how large and small teams search the literature for ideas for their new articles, patents, or software. To do this, the researchers compared the average relative age of the articles in each publication's reference list, and their “popularity”, that is, how many other articles cite the same sources. It turned out that solo authors and small teams were much more likely to “dig deep”, i.e., to base their research on older and less popular ideas.
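The two measures just described are straightforward to compute once each paper's reference list is known. A minimal sketch, with hypothetical paper identifiers, publication years, and citation counts:

```python
# Sketch of the two bibliometric measures described above: the average
# relative age of a paper's cited references, and how "popular" those
# references are (how often other papers cite the same sources).
# All identifiers and numbers below are hypothetical.

def mean_reference_age(paper_year, reference_years):
    """Average age of the cited works relative to the citing paper."""
    return sum(paper_year - y for y in reference_years) / len(reference_years)

def reference_popularity(refs, citation_counts):
    """Average number of other papers citing each of this paper's sources."""
    return sum(citation_counts[r] for r in refs) / len(refs)

# Hypothetical paper published in 2020, citing three sources.
refs = ["r1", "r2", "r3"]
years = {"r1": 2018, "r2": 2005, "r3": 1998}
counts = {"r1": 400, "r2": 35, "r3": 8}  # citations each source has elsewhere

print(mean_reference_age(2020, [years[r] for r in refs]))  # (2 + 15 + 22) / 3
print(reference_popularity(refs, counts))                  # (400 + 35 + 8) / 3
```

On these measures, a "dig deep" paper in the study's sense would show a high average reference age and a low reference popularity, while a mainstream large-team paper would show the opposite profile.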
Larger teams, which cover broader fields of knowledge, are less likely to base their research on old and unpopular ideas; on the contrary, they tend to build on recent high-impact work, and this tendency only grows with team size. As a result, large research groups receive citations of their own work quickly, since they move in the mainstream of the current research program.
Small teams begin to receive citations much later.
The benefit large teams gain from combining ideas across different areas of expertise peaks at about 10 members, then declines as the team grows further.