Discussion
The present study indicates that PubMed captures more fine-grained bibliographic data on scientific commentary than WoS. PubMed comments are generally short and purposeful, written to appraise a specific study. In contrast, WoS comments contain more extended material whose commentary citations chiefly address related works on an issue. Moreover, commented studies in PubMed tend to be both more highly cited (‘impactful’) and more highly disputed (‘controversial’) than those in WoS, though both exceed the baseline level. Accordingly, they are more influential in the network, exhibiting higher centrality metrics than WoS-only studies. These patterns produce distinct network structures: PubMed displays decentralised, sparse networks with deeply interactive communication, containing authors’ feedback and abundant triangular structures that offer valuable insight into the strengths and weaknesses of the initial research, whereas WoS exhibits a broad, dense network concentrated on one-way, passive citation. In conclusion, PubMed is better suited to evidence appraisal based on argumentation analysis, excelling at capturing impactful studies and emphasising interactive discussion of controversial topics. WoS, by contrast, is more oriented towards knowledge linking via comprehensive commentary references.
Though clear quality standards for research are lacking, four factors—credibility, contribution, communicability and conformity—play a crucial role in evaluating research quality.25 Previous works have extensively examined publications’ inner (ie, research design) and outer (ie, affiliation, citation) characteristics across various bibliometric platforms. A comparative study of WoS and PubMed based on medical journals from former Yugoslav countries found that the variance in methodological and scientometric quality of clinical studies is higher in journals indexed only in WoS than in those indexed only in PubMed or in both.26 Studies also reveal that, although Scopus has higher article coverage than WoS and PubMed,27 it has lower coverage of funding information28 and a higher incidence of mislabelled review articles.29 In addition, Scopus captures significantly fewer citations of articles (median 57.2% vs 70.5%), editorials (2.1% vs 5.9%) and letters (0.8% vs 2.6%) than WoS, but more citations from non-English-language sources (10.2% vs 4.1%).30 Citations provide a valuable perspective on the topic, content and sentiment (how peers cite) towards a paper, reflecting the controversy and reliability of the original article.
Nevertheless, citations, widely used in scientometrics for evidence integration and claim sorting, have limitations. First, the interpretation of citation counts is controversial: high counts are often taken as peer recognition, but ‘importance’ and ‘quality’ lack clear definitions.31 Second, according to constructivist theories, authors are motivated not by the inherent quality of the cited work but by its capacity to support their claims and persuade the audience.32 This makes citation counts an inadequate measure of research quality. Moreover, this behaviour can lead to ‘citation bias’ or ‘citation errors’, which can be amplified over time,33 with roughly 1 in 5 to 1 in 10 citations supporting claims inconsistent with the cited paper. Citation bias occurs when citations are used to justify claims not factually supported by the cited documents,34 with authors preferentially citing papers that support a claim over those that undermine it. A classic case is the US opioid crisis, to which human susceptibility to faulty reasoning and cognitive bias undoubtedly contributed.35 All of these weaken the strength of citations in evidence evaluation. Comments, as a particular type of citation, offer new opportunities.
Note that this study uses Scite.ai for sentiment analysis, which differs from traditional emotion analysis. Instead of categorising text as positive, negative or neutral based on emotional tone, Scite.ai classifies citations as ‘supporting’, ‘contradicting’ or ‘mentioning’, focusing on the relationship of new research to previous studies rather than on emotional content. Scite.ai is therefore more accurately described as an analysis of scholarly support and opposition than as conventional sentiment analysis.
Limitations remain in this study. First, we compare bibliographic comment data only from PubMed and WoS. Comments from other high-quality social media platforms also significantly shape knowledge, and future studies could consider them. Second, there is data loss in Scite.ai’s citation sentiment extraction, as Scite.ai can access and extract only full-text-accessible articles. However, this loss applies equally to the PubMed group, the WoS group and the baseline dataset, and Scite.ai’s results account for over 50% of all records in each group, making them relatively comparable. Third, no consensus exists on labelling papers across databases,36 and inherent article-labelling errors exist. A sample analysis found that document types in WoS are 94% correct.37 Another study of mislabelled review articles found that the incidence of mislabelling in PubMed (1.9%) and WoS (14.1%) is lower than on official websites (16.0%) and in Scopus (29.3%).29 Although the chosen sources have relatively lower error rates than others, inherent labelling errors remain a limitation of database comparison analyses. Fourth, the study draws solely on COVID-19-related data, limiting its representativeness for a broader comparison between the two databases.