
Scientific Production

 

 

A urinary tract infection (UTI) is an infection that can occur in any part of the urinary system, which comprises the bladder, urethra, ureters, and kidneys. Most infections affect the lower urinary tract, namely the bladder and urethra. This study presents a scientometric analysis of authorship patterns in research on UTIs and diabetes, focusing on Lotka’s law to understand the productivity and impact of authors in the field. A total of 1,149 documents published between 2009 and 2023 were retrieved from the Web of Science database. The USA leads all countries in publications on UTIs and diabetes, and Kuku K has been the most productive author. A Kolmogorov–Smirnov (K-S) test reveals that the current dataset does not conform to Lotka’s law for research on urinary tract infections and diabetes. The findings suggest that more research is needed to improve understanding of the relationship between UTIs and diabetes.
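The Lotka's-law test described above can be sketched in a few lines: fit the theoretical distribution (the share of authors with x publications is proportional to 1/x^alpha) and compare it with the observed distribution via the K-S statistic. This is a minimal illustration; the author counts below are invented for demonstration and are not the study's data.

```python
# Minimal sketch of testing Lotka's law with a K-S statistic.
# The author counts are illustrative, not from the study.
import math

def lotka_expected(n_max, alpha=2.0):
    """Expected proportion of authors with x publications under Lotka's law:
    f(x) = C / x**alpha, with C chosen so the proportions sum to 1."""
    c = 1.0 / sum(1.0 / x**alpha for x in range(1, n_max + 1))
    return [c / x**alpha for x in range(1, n_max + 1)]

def ks_statistic(observed_counts, alpha=2.0):
    """Maximum absolute difference between observed and theoretical CDFs."""
    total = sum(observed_counts)
    obs_props = [c / total for c in observed_counts]
    exp_props = lotka_expected(len(observed_counts), alpha)
    d, cum_obs, cum_exp = 0.0, 0.0, 0.0
    for o, e in zip(obs_props, exp_props):
        cum_obs += o
        cum_exp += e
        d = max(d, abs(cum_obs - cum_exp))
    return d

# Authors who wrote 1, 2, 3, 4 papers respectively (made-up counts):
counts = [600, 150, 67, 37]
d = ks_statistic(counts)
crit = 1.36 / math.sqrt(sum(counts))  # approximate 5% critical value
print(round(d, 4), d > crit)
```

If the computed D exceeds the critical value, the dataset departs significantly from Lotka's law, which is the kind of result the study reports.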

 

 

Health communication is one of the crucial fields of contemporary research, especially in the post-COVID-19 pandemic era. In the last decade (2013-2022), however, no bibliometric performance analysis of health communication (HC) as a whole has been conducted. This study investigates research performance in HC, including the performance of countries/regions, institutions/groups, authors, journals, and research areas, as well as the status of collaboration and funding support. On January 25, 2023, a search for topic terms and article sources was carried out in the Web of Science Core Collection. Duplicate records were then removed and a manual screening process was applied, after which bibliometric indicators were selected. Data collection, analysis, and graphing were performed using Excel 2019 and Origin 8.5. The quantity of publications in HC has risen steadily since 2013 and grew rapidly after 2019, whereas the overall impact of these publications has declined. The articles are predominantly situated within the social sciences, medicine, environmental science, and science and technology, as these areas receive the highest levels of financial support. The USA holds an absolute leadership position in research productivity, reflected in the preponderance of prolific institutions and authors affiliated with the USA. Among the 18 most productive journals, half are open access (OA), and their h-indices tend to be on par with those of non-OA journals. Finally, collaboration between authors and institutions is widespread, but the degree of international collaboration is relatively low. For HC research, nations should strive to overcome cultural and political barriers and foster stronger research collaborations. The diminishing influence of HC articles over the years may not reflect a decline in scholarly quality; it may instead reflect the integration of knowledge with substantial heterogeneity.

 

 

Motivation: Citations have a fundamental role in scholarly communication and assessment. Citation accuracy and transparency are crucial for the integrity of scientific evidence. In this work, we focus on quotation errors: errors in citation content that can distort the scientific evidence and that are hard for humans to detect. We construct a corpus and propose natural language processing (NLP) methods to identify such errors in biomedical publications. Results: We manually annotated 100 highly cited biomedical publications (reference articles) and citations to them. The annotation involved labeling the citation context in the citing article, relevant evidence sentences in the reference article, and the accuracy of the citation. A total of 3063 citation instances were annotated (39.18% with accuracy errors). For NLP, we combined a sentence retriever with a fine-tuned claim verification model to label citations as ACCURATE, NOT_ACCURATE, or IRRELEVANT. We also explored few-shot in-context learning with generative large language models. The best performing model—which uses citation sentences as citation context, the BM25 model with a MonoT5 reranker for retrieving the top 20 sentences, and a fine-tuned MultiVerS model for accuracy label classification—yielded scores of 0.59 micro-F1 and 0.52 macro-F1. GPT-4 in-context learning performed better at identifying accurate citations, but it lagged for erroneous citations (0.65 micro-F1, 0.45 macro-F1). Citation quotation errors are often subtle, and it is currently challenging for NLP models to identify erroneous citations. With further improvements, such models could serve to improve citation quality and accuracy.
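The retrieve-then-verify pipeline described above can be sketched as follows. A pure-Python BM25 scorer stands in for the paper's BM25 + MonoT5 retrieval stage, and `classify` is only a stub for the fine-tuned MultiVerS verification model; the sentences are invented for illustration.

```python
# Sketch of the retrieve-then-verify pipeline (simplified stand-in for
# the paper's BM25 + MonoT5 retriever and MultiVerS classifier).
import math
from collections import Counter

def bm25_scores(query, sentences, k1=1.5, b=0.75):
    """Score each reference-article sentence against the citation context."""
    docs = [s.lower().split() for s in sentences]
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def top_k_evidence(citation_context, reference_sentences, k=20):
    """Retrieve the k best-matching evidence sentences."""
    scored = sorted(zip(bm25_scores(citation_context, reference_sentences),
                        reference_sentences), reverse=True)
    return [s for _, s in scored[:k]]

def classify(citation_context, evidence):
    """Stub: a fine-tuned verification model would return
    ACCURATE / NOT_ACCURATE / IRRELEVANT here."""
    return "ACCURATE" if evidence else "IRRELEVANT"

ref = ["The drug reduced mortality by 20 percent.",
       "Side effects were rare in the trial.",
       "The cohort included 500 patients."]
evidence = top_k_evidence("the drug reduced mortality", ref, k=2)
print(classify("the drug reduced mortality", evidence))
```

The real system replaces both stages with neural models, but the control flow (retrieve evidence, then verify the citing sentence against it) is the same.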

 

 

As the volume of scientific literature expands rapidly, accurately gauging and predicting the citation impact of academic papers has become increasingly important. Citation counts are a widely adopted metric for this purpose. While numerous researchers have explored techniques for predicting papers’ citation counts, a common limitation is the use of a single model for all papers in a dataset. This universal approach, suitable for small, homogeneous collections, proves less effective for large, heterogeneous collections spanning various research domains, limiting the practical utility of these methods. In this study, we propose a methodology that deploys multiple models tailored to distinct research domains and integrates early citation data. Our approach uses instance-based learning to categorize papers into research domains and trains a distinct prediction model on early citation counts for the papers in each domain. We assessed the methodology using two large datasets sourced from DBLP and arXiv. Our experiments confirm that the proposed classification method is both accurate and efficient in assigning papers to research domains. Furthermore, the proposed prediction methodology, harnessing multiple domain-specific models and early citations, surpasses four state-of-the-art baselines in most cases, substantially improving the accuracy of citation impact predictions for diverse collections of academic papers.
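The two-stage design described above can be illustrated with a toy sketch: nearest-neighbour (instance-based) domain assignment, followed by a per-domain least-squares ratio mapping early citations to long-term citations. The feature vectors, domain names, and citation numbers are invented; the paper's actual features and models will differ.

```python
# Illustrative sketch (not the paper's code): 1-NN domain assignment
# followed by a per-domain linear fit on early citation counts.
# All vectors, domains, and numbers below are made up for demonstration.

def nearest_domain(paper_vec, labeled):
    """Instance-based (1-NN) classification: take the domain of the
    closest labeled paper in feature space (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: dist(paper_vec, item[0]))[1]

def fit_ratio(pairs):
    """Per-domain model: least-squares ratio r mapping early citations x
    (first year) to long-term citations y, i.e. y ~= r * x."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    return num / den

labeled = [((1.0, 0.0), "NLP"), ((0.0, 1.0), "Networks")]
training = {"NLP": [(5, 40), (2, 18)], "Networks": [(5, 15), (4, 13)]}

paper = (0.9, 0.1)                 # hypothetical feature vector of a new paper
domain = nearest_domain(paper, labeled)
ratio = fit_ratio(training[domain])
print(domain, round(ratio * 3))    # prediction given 3 early citations
```

The key point the sketch captures is that each domain gets its own fitted model, so a citation pattern typical of one field does not distort predictions in another.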

 

 

This research scrutinizes the trends and dynamics of Intellectual Property Protection (IPP) of Intangible Cultural Heritage (ICH) in China, utilizing a dataset of 91 papers from the CNKI database spanning 2011 to 2020. The study uses CiteSpace software to visualize and analyze the literature across multiple dimensions, including article count, authorship, institutional affiliations, and keyword co-occurrence. Findings indicate a lack of robust collaboration among authors and institutions in IPP and ICH, with a scarcity of active cooperative groups. Critical research hotspots identified encompass intangible cultural heritage, intellectual property protection, inheritors, legal protection, copyright, intellectual property law, and geographical indications, with the legal safeguarding of ICH’s intellectual property, digital conservation, traditional cultural expressions, and original authentication emerging as the leading research frontiers. This investigation provides a holistic view of China’s IPP and ICH landscape, offering essential scientific insights for ongoing scholarly discourse. The study mainly benefits policymakers and stakeholders in the cultural heritage sector, underscoring the necessity of enhanced authorial and institutional collaboration and the prioritization of legal and digital protection mechanisms to safeguard China’s intangible cultural legacy for posterity. The analysis informs policy formulation and strategic planning to bolster the protection and sustainable management of ICH in China.
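The keyword co-occurrence dimension mentioned above rests on a simple count: how often two keywords appear together in the same paper's keyword list. A minimal sketch, with invented keyword lists rather than the study's CNKI data:

```python
# Toy illustration of the keyword co-occurrence counting behind maps
# such as those CiteSpace draws; keyword lists are invented.
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    """Count how often each unordered keyword pair appears together
    in one paper's keyword list."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [
    ["intangible cultural heritage", "intellectual property protection", "copyright"],
    ["intangible cultural heritage", "legal protection"],
    ["intangible cultural heritage", "intellectual property protection"],
]
counts = cooccurrence(papers)
print(counts[("intangible cultural heritage", "intellectual property protection")])
```

Visualization tools then render these pair counts as weighted edges in a network, with hotspots appearing as densely connected clusters.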

 

 

Background: Reviewers rarely comment on the same aspects of a manuscript, making it difficult to properly assess manuscripts’ quality and the quality of the peer review process. The goal of this pilot study was to evaluate structured peer review implementation by: 1) exploring whether and how reviewers answered structured peer review questions, 2) analysing reviewer agreement, 3) comparing that agreement to agreement before implementation of structured peer review, and 4) further enhancing the piloted set of structured peer review questions. Methods: Structured peer review consisting of nine questions was piloted in August 2022 in 220 Elsevier journals. We randomly selected 10% of these journals across all fields and IF quartiles and included manuscripts that received two review reports in the first 2 months of the pilot, leaving us with 107 manuscripts belonging to 23 journals. Eight questions had open-ended fields, while the ninth question (on language editing) had only a yes/no option. Reviewers could also leave Comments-to-Author and Comments-to-Editor. Answers were independently analysed by two raters, using qualitative methods. Results: Almost all the reviewers (n = 196, 92%) provided answers to all questions even though these questions were not mandatory in the system. The longest answer (median [Md] 27 words, interquartile range [IQR] 11 to 68) was for reporting methods with sufficient detail for replicability or reproducibility. The reviewers had the highest (partial) agreement (72%) for assessing the flow and structure of the manuscript, and the lowest for assessing whether the interpretation of the results was supported by data (53%) and whether the statistical analyses were appropriate and reported in sufficient detail (52%). Two thirds of the reviewers (n = 145, 68%) filled out the Comments-to-Author section, of which 105 (49%) resembled traditional peer review reports. These reports contained a median of 4 (IQR 3 to 5) topics covered by the structured questions. Absolute agreement regarding final recommendations (exact match of recommendation choice) was 41%, which was higher than what those journals had in the period from 2019 to 2021 (31% agreement, P = 0.0275). Conclusions: Our preliminary results indicate that reviewers successfully adapted to the new review format and covered more topics than in their traditional reports. Individual question analysis indicated the greatest disagreement regarding the interpretation of the results and the conduct and reporting of statistical analyses. While structured peer review did lead to improvement in reviewer final recommendation agreement, this was not a randomized trial, and further studies should be performed to corroborate this. Further research is also needed to determine whether structured peer review leads to greater knowledge transfer or better improvement of manuscripts.
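The absolute-agreement figure used above is simply the share of manuscripts where both reviewers made the same final recommendation. A minimal sketch, with invented recommendation lists rather than the study's data:

```python
# Sketch of the percent-agreement measure: share of manuscripts where
# both reviewers gave the same final recommendation (lists are invented).

def percent_agreement(rev_a, rev_b):
    """Exact-match agreement between two reviewers' recommendations."""
    matches = sum(a == b for a, b in zip(rev_a, rev_b))
    return 100.0 * matches / len(rev_a)

rev_a = ["accept", "major revision", "reject", "minor revision", "reject"]
rev_b = ["accept", "reject", "reject", "minor revision", "major revision"]
print(percent_agreement(rev_a, rev_b))  # 3 of 5 match -> 60.0
```

The study compares this statistic before (31%) and after (41%) the introduction of structured review; chance-corrected measures such as Cohen's kappa would be a natural extension.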

 

 

Objectives: Development of search queries for systematic reviews (SRs) is time-consuming. In this work, we capitalize on recent advances in large language models (LLMs) and a relatively large dataset of natural language descriptions of reviews and corresponding Boolean searches to generate Boolean search queries from SR titles and key questions. Materials and Methods: We curated a training dataset of 10 346 SR search queries registered in PROSPERO. We used this dataset to fine-tune a set of models based on Mistral-Instruct-7b to generate search queries. We evaluated the models quantitatively using an evaluation dataset of 57 SRs and qualitatively through semi-structured interviews with 8 experienced medical librarians. Results: The model-generated search queries had a median sensitivity of 85% (interquartile range [IQR] 40%-100%) and a number needed to read of 1206 citations (IQR 205-5810). The interviews suggested that the models lack both the necessary sensitivity and precision to be used without scrutiny but could be useful for topic scoping or as initial queries to be refined. Discussion: Future research should focus on improving the dataset with more high-quality search queries, assessing whether fine-tuning the model on other fields, such as the population and intervention, improves performance, and exploring the addition of interactivity to the interface. Conclusions: The datasets developed for this project can be used to train and evaluate LLMs that map review descriptions to Boolean search queries. The models cannot replace thoughtful search query design but may be useful in providing suggestions for keywords and the framework for the query.
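The two evaluation metrics reported above can be made concrete with a short sketch: sensitivity is the fraction of the review's included studies that the generated query retrieves, and number needed to read (NNR) is how many retrieved citations must be screened per relevant one found. The citation IDs and counts below are invented for illustration.

```python
# Sketch of the evaluation metrics: sensitivity and number needed to
# read (NNR), computed on invented citation identifiers.

def sensitivity(retrieved, relevant):
    """Fraction of the review's included studies the query retrieves."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(relevant)

def number_needed_to_read(retrieved, relevant):
    """Citations screened per relevant study found."""
    hits = len(set(retrieved) & set(relevant))
    return len(retrieved) / hits if hits else float("inf")

retrieved = set(range(1, 1001))      # 1000 citations returned by the query
relevant = {2, 50, 400, 1500, 2000}  # 5 studies the review actually included
print(sensitivity(retrieved, relevant))                   # 3/5 = 0.6
print(round(number_needed_to_read(retrieved, relevant)))  # 1000/3 -> 333
```

A high-sensitivity, high-NNR query (like the study's median of 85% sensitivity at 1206 NNR) finds most relevant studies but forces screeners through many irrelevant citations.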

 

 

Psychosociological theories indicate that individual evaluation is integral to the recognition of professional activities. Building upon Christophe Dejours’ contributions, this recognition is influenced by two complementary judgments: the “utility” judgment from those in the hierarchy and the “beauty” judgment from peers. The aim of this paper is to elucidate how the individual assessment of scientists is conducted at INRAE. This process follows a qualitative, multicriteria-based approach by peers, providing both appreciation and advice to the evaluated scientists (the “beauty” judgment). Furthermore, we expound on how INRAE regularly adapts this process to the evolving landscape of research practices, such as interdisciplinary collaboration and open science, ensuring that assessments align with current approaches to research activities.

 

 

Bibliometric analysis has recently become a popular and rigorous technique for exploring and analyzing the literature in business and management. Prior studies principally focused on ‘how to do bibliometric analysis’, presenting an overview of the bibliometric methodology along with various techniques and step-by-step guidelines that can be relied on to rigorously conduct bibliometric analysis. However, the current body of evidence is limited in its ability to provide practical knowledge that can enhance the design and performance of bibliometric research. This claim is supported even by the fact that relevant studies refer to their work as ‘bibliometric analysis’ rather than ‘bibliometric research’. Accordingly, we endeavor to offer a more functional framework for researchers who wish to design and conduct bibliometric research in any field, especially business and management. To do this, we followed a twofold approach. We first outlined the main stages and steps of typical bibliometric research. Then, we proposed a comprehensive framework specifying how to design and conduct the research and under what headings the relevant stages should be carried out and presented, step by step. Thus, the current paper is expected to serve as a useful source of insight into the available techniques and to guide researchers in designing and conducting bibliometric research.

 

 

The past 20 years have seen a significant increase in articles with 500 or more authors. This increase has presented problems in terms of determining true authorship versus other types of contribution, issues with database metadata and data output, and publication length. Treating items with 500+ authors as mega-author titles, a total of 5,533 mega-author items were identified using InCites. Metadata about the items was then gathered from Web of Science and Scopus. Close examination of these items found that the vast majority covered physics topics, with medicine a far distant second and only minor representation from other science fields. Spikes in mega-authorship appear to correspond to events in the Large Hadron Collider’s timeline, indicating that the collider’s projects are driving this heavy output. Some solutions are offered for the problems resulting from this phenomenon, drawing partly on recommendations from the International Committee of Medical Journal Editors.
