Disagreement at the Community Level

Thank you very much for this valuable point. There are many possible interpretations of this finding. However, we believe that the interpretation we provide is valid and grounded in the discussions and concerns raised by authors within the physics community. We have revised the article to qualify and broaden this interpretation. A recognised limitation of our work is that we are not in a position to distinguish between disagreement at the paper level and at the community level. We do not believe that this limitation significantly affects our key results. In our discussion of the articles that issued or received the most disagreement citations, we find most of the results and their citation sentences reasonable, with the exception of the identified methodological artifacts. Consider the five example sentences listed below (where (…) indicates the position of the cited references and […] additional text not quoted here). The first is invalid because the signal term "conflict" refers to an object of study rather than to a scientific dispute; the second is invalid because the term "contradictory" refers to results within a single study, not to disagreement between studies; the third is invalid because of where the term "challenge" appears relative to the citation of the study; the fourth and fifth are examples of sentences that would be marked as valid. Similar patterns can be observed for other signal terms, such as challeng* (see Table S2 of Supplementary File 1).
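The query idea described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual query set: the signal pattern (challeng*), the filter terms, and the citation-marker placeholder are all invented for the example. A citance counts as a candidate disagreement only if a signal term and a citation marker are present and no filter term excludes the match.

```python
import re

# Illustrative signal term (challeng*), filter terms, and citation marker.
# All three are assumptions for this sketch, not the published query set.
SIGNAL = re.compile(r"\bchalleng\w*\b", re.IGNORECASE)
FILTER = re.compile(r"\b(grand challenge|challenge trial)\b", re.IGNORECASE)
CITATION = re.compile(r"\[CIT\]")  # placeholder for an in-text citation

def is_candidate_disagreement(citance: str) -> bool:
    """True if the citance matches the signal term, contains a citation
    marker, and is not excluded by a filter term."""
    return (SIGNAL.search(citance) is not None
            and CITATION.search(citance) is not None
            and FILTER.search(citance) is None)

print(is_candidate_disagreement("These results challenge the model of [CIT]."))  # True
print(is_candidate_disagreement("The grand challenge of [CIT] remains open."))   # False
```

The filter step is what separates a valid match ("these results challenge …") from an invalid one where the signal term refers to an object of study rather than a scientific dispute.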

We calculated d as the ratio of these two values. A value greater than one indicates that papers received more citations than expected in the year following their disagreement citation; a value less than one indicates that articles with a disagreement citation received fewer citations in the following year. To further clarify our definition, we have added the above example to our definition of disagreement at the community level. Our approach has limitations. First, our method captures only a fraction of the textual disagreement in science. This is partly because we favor precision over recall, having removed low-validity queries. Our lists of signal and filter terms are also not exhaustive, so expanding them would identify additional cases of disagreement in future research. Since we focus on citation sentences, we cannot identify traces of disagreement that occur without explicit reference to earlier literature, or those that can only be classified as disagreement when surrounding sentences are taken as context. Some disagreements may also be too subtle, or rely on jargon, such that they cannot be identified with our general signal terms. In addition, our measure does not capture non-explicit disagreement, or scientific disagreement that occurs outside of citation sentences, e.g. at conferences, in books, on social media, or in interpersonal interactions.

For these reasons, our measure may over- or under-represent disagreement in certain fields, and this should be taken into account when evaluating the results. "For each query, two coders were randomly selected from among the seven authors of this article to manually annotate each citance as a valid or invalid case of disagreement." Based on these studies, we propose a new approach to investigating disagreement, built on a set of manually validated queries. We conduct one of the first empirical studies of the specific concept of disagreement in science, and our integrative definition allows us to capture explicit disagreement between particular articles as well as traces of disagreement within a field. Our query-based approach is more transparent and reproducible than the black-box machine-learning methods commonly used in citation classification, and it has been extensively validated on over 3,000 citation sentences representing a range of fields. We expand the scope of past analyses and identify instances of disagreement in more than four million scientific publications. Instances of disagreement, operationalized with the 23 validated queries, account for approximately 0.31% of all citation sentences (citances) extracted from indexed articles published between 2000 and 2015 (Figure 2a). Disagreement was highest in the social sciences and humanities (Soc & Hum; 0.61%), followed by biomedicine and health sciences (Bio & Health; 0.41%), life and earth sciences (Life & Earth; 0.29%), physical sciences and engineering (Phys & Engr; 0.15%), and mathematics and computer science (Math & Comp; 0.06%). Our results mainly report the percentage of disagreement citances over all citances, including non-disagreements, and are therefore already normalized.
The exceptions are Figure 3 and Figure SI 2, which report observed/expected ratios of the disagreement percentage per meso-level field or per query; these ratios are in turn based on the aforementioned (normalized) disagreement percentages.
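The normalization described above can be illustrated with a short sketch: the disagreement percentage is the share of disagreement citances among all citances in a field, and an observed/expected ratio divides a field's percentage by the overall percentage. The counts below are invented for illustration and are not the paper's data.

```python
# field -> (disagreement citances, all citances); invented example counts
citances = {
    "Soc & Hum":   (6_100, 1_000_000),
    "Math & Comp": (600,   1_000_000),
}

total_dis = sum(d for d, _ in citances.values())
total_all = sum(n for _, n in citances.values())
overall_pct = 100 * total_dis / total_all  # normalized baseline

for field, (dis, n) in citances.items():
    pct = 100 * dis / n          # field-level disagreement percentage
    ratio = pct / overall_pct    # observed/expected ratio
    print(f"{field}: {pct:.2f}% (o/e = {ratio:.2f})")
```

A ratio above 1 means a field shows more disagreement than the overall baseline; below 1, less.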

Disagreement at the community level, on the other hand, refers to a situation in which a citing publication, without explicitly disagreeing with a cited publication, draws attention to a controversy or lack of consensus in the wider literature.