
The significance of science, technology, engineering, and mathematics (STEM)-related research for society is an ongoing subject of debate. To what extent do published research results from STEM fields truly drive advances in technology and society? Funk and Owen-Smith first proposed the consolidation-destabilization (CD) index as a citation-based metric for detecting disruptive or unifying patents [1]. The CD index measures whether later works that cite a focal work also cite the works in its bibliography: citations that bypass the focal work's references signal disruption, whereas citations that include them signal consolidation. In this paper, Yang et al. investigate the reliability of the CD index as a touchstone of research quality.
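To make the construction concrete, here is a minimal sketch (not the authors' code) of the standard CD-index calculation for a single focal paper, assuming citing papers are represented by their bibliographies and "FOCAL" is a placeholder identifier for the focal paper itself:

```python
def cd_index(focal_refs, citing_papers):
    """Minimal CD-index sketch for one focal paper.

    focal_refs: set of works the focal paper cites (its bibliography).
    citing_papers: list of sets; each set is the bibliography of a later paper
        that cites the focal paper and/or any of its references.
    """
    n = len(citing_papers)
    if n == 0:
        return 0.0
    total = 0
    for refs in citing_papers:
        f = 1 if "FOCAL" in refs else 0     # cites the focal paper
        b = 1 if refs & focal_refs else 0   # cites the focal paper's references
        total += -2 * f * b + f             # +1 disruptive, -1 consolidating, 0 otherwise
    return total / n

# Toy example: one disruptive citer, one consolidating citer, one citer of a
# reference only; the CD index here is (1 - 1 + 0) / 3 = 0.0.
print(cd_index({"R1", "R2"}, [{"FOCAL"}, {"FOCAL", "R1"}, {"R2"}]))
```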
Yang et al. succinctly review the societal impact of research works on patented inventions, clinical trials, coverage in major media outlets, and social media. Large datasets drawn from a variety of STEM fields are essential for empirically investigating the reliability of the CD index. Accordingly, they formulate and test two claims: (1) “papers with higher CD index values are less likely to influence technology and society,” and (2) “papers with higher disruptive impact values are more likely to influence technology and society.”
The authors define the CD index as the difference between disruptive and consolidating citations, excluding duplicate self-citing articles, and graphically present a sensitivity analysis of alternative CD index specifications. They probe the robustness of the CD index using almost 40 million research publications spanning STEM disciplines and journals in the Microsoft Academic Graph dataset. They derive and apply a logistic regression equation to estimate the probability that individual scientific papers are cited in patents, clinical trials, and media publications, that is, in outputs with direct impacts on technology and society. The authors’ temporal analysis reveals intriguing trends in the integration of science into public use.
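As an illustration only, the sketch below shows how such a regression might be set up. The features (CD index, citation count, publication year) and the synthetic outcome are hypothetical stand-ins, not the authors' actual model specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
cd = rng.uniform(-1, 1, n)            # CD index values in [-1, 1]
citations = rng.poisson(20, n)        # hypothetical control: citation count
year = rng.integers(1990, 2020, n)    # hypothetical control: publication year
X = np.column_stack([cd, np.log1p(citations), year - year.mean()])

# Synthetic outcome purely for illustration: 1 if the paper is later cited in a patent.
logit = 0.8 * cd + 0.5 * np.log1p(citations) - 2.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)

# Predicted probability of patent citation for a hypothetical new paper.
print(model.predict_proba([[0.5, np.log1p(30), 5]])[0, 1])
```

Fitting on log-transformed citation counts and mean-centered years is one common way to keep such controls on comparable scales; the actual covariates used by the authors may differ.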
The authors recognize the limitations of this study: (1) relying solely on quantitative measures like the CD index and disruptive citations may fail to capture qualitative components embedded in disruptive research; (2) the lack of connections between scientific publications and other important societal factors; (3) the imprecise classification of publications into only technological and societal domains; and (4) the confounding effects of variables associated with disruptive research and social impact.
Taking such a multidisciplinary look at the quality of research publications is difficult, especially in light of the continuing influence of artificial intelligence (AI) on information dissemination. STEM professionals, sociologists, psychologists, and statisticians should read this insightful paper.