FreshRSS


Development and validation of critical appraisal tool for individual participant data meta-analysis: protocol for a modified e-Delphi study

By: Otalike E. G., Clarke M., Veroniki A. A., Tricco A. C., Moher D., Shea B., Doherty-Kirby A., Kandala N.-B., Gagnier J.
Introduction

Individual participant data meta-analysis (IPD-MA) is regarded as the gold standard for evidence synthesis. However, diverse recommendations and guidance on its conduct exist, and there is no consensus-based tool for the critical appraisal of a completed IPD-MA. We aim to close this gap by systematically identifying quality items and developing and validating a critical appraisal checklist for IPD-MA.

Methods and analysis

This study will comprise three phases, as follows:

Phase 1: a systematic methodology review to identify potential checklist domains and items; this will be conducted according to the Cochrane methods for systematic reviews and reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidance. We will include studies that address methodological guides and essential statistical requirements for IPD-MA. We will use the proposed items to prepare a preliminary checklist for the e-Delphi study.

Phase 2: at least two rounds of an e-Delphi survey will be conducted among panels of experts in IPD-MA research and consensus development, healthcare providers, journal editors, healthcare policymakers, and patient and public partners from diverse geographic locations. Participants will use Qualtrics software to rate items on a 5-point Likert scale. The Wilcoxon matched-pairs signed-rank test will estimate response stability across rounds. Consensus to include an item will be reached if ≥75% of the panel rates the item as ‘strongly agree’ or ‘agree’, and an item will be excluded if ≥75% rate it as ‘strongly disagree’ or ‘disagree’. A convenience sample of 10 reviewers with experience in conducting an IPD-MA will pilot-test the checklist and provide practical feedback, which will be used to refine it.
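The ≥75% inclusion/exclusion rule above is simple enough to sketch as code. The following is a hypothetical Python helper (not part of the study's software) that classifies one checklist item from a panel's 5-point Likert ratings, where 1 = ‘strongly disagree’ and 5 = ‘strongly agree’:

```python
def consensus(ratings, threshold=0.75):
    """Classify an e-Delphi item from 5-point Likert ratings.

    Returns 'include' if at least `threshold` of panellists rate 4 or 5
    ('agree'/'strongly agree'), 'exclude' if at least `threshold` rate
    1 or 2 ('disagree'/'strongly disagree'), else 'no consensus'
    (the item goes forward to the next survey round).
    """
    n = len(ratings)
    agree = sum(r >= 4 for r in ratings) / n
    disagree = sum(r <= 2 for r in ratings) / n
    if agree >= threshold:
        return "include"
    if disagree >= threshold:
        return "exclude"
    return "no consensus"


# Example: 9 of 10 panellists agree, so the item is included.
print(consensus([5, 5, 4, 4, 4, 3, 5, 4, 5, 4]))
```

Items reaching neither threshold are typically re-rated in the next round, which is why the protocol plans "at least two rounds".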

Phase 3: validation of the critical appraisal checklist. To improve confidence in the tool’s uptake, a subset of the e-Delphi participants and graduate students in epidemiology and biostatistics will conduct content validity and reliability testing, respectively, following the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN).

Ethics and dissemination

Ethics approval has been obtained from the Western University Health Science Research Ethics Board in Canada. The validated checklist will be published in a peer-reviewed open-access journal and shared across the networks of this study’s steering committee, Cochrane IPD-MA group and the institutions’ social media platforms.

COVID-19-related research data availability and quality according to the FAIR principles: A meta-research study

by Ahmad Sofi-Mahmudi, Eero Raittio, Yeganeh Khazaei, Javed Ashraf, Falk Schwendicke, Sergio E. Uribe, David Moher

Background

According to the FAIR principles, scientific research data should be Findable, Accessible, Interoperable, and Reusable. The COVID-19 pandemic has led to massive research activity and an unprecedented number of topical publications in a short time. However, no evaluation has assessed whether these COVID-19-related research data comply with the FAIR principles (i.e., their FAIRness).

Objective

Our objective was to investigate the availability of open data in COVID-19-related research and to assess its compliance with the FAIR principles.

Methods

We conducted a comprehensive search and retrieved all open-access articles related to COVID-19 from journals indexed in PubMed, available in the Europe PubMed Central database, published from January 2020 through June 2023, using the metareadr package. Using rtransparent, a validated automated tool, we identified articles with links to their raw data hosted in a public repository. We then screened the links and included those repositories that contained data specific to the pertaining paper. Subsequently, we automatically assessed the repositories’ adherence to the FAIR principles using the FAIRsFAIR Research Data Object Assessment Service (F-UJI) and the rfuji package. FAIR scores ranged from 1 to 22 and had four components. We reported descriptive analyses for each article type, journal category, and repository, and used linear regression models to identify the factors most influencing the FAIRness of the data.
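The per-repository descriptive analysis described above (mean and SD of FAIR scores by article type, journal category, and repository) amounts to a grouped summary. A minimal Python sketch with pandas, using made-up illustrative rows rather than the study's data (the original analysis used R packages such as rfuji), could look like this:

```python
import pandas as pd

# Hypothetical rows: one per data-sharing URL, with its total F-UJI score (1-22).
records = [
    {"repository": "Harvard Dataverse", "article_type": "research article", "fair_score": 16},
    {"repository": "Harvard Dataverse", "article_type": "review", "fair_score": 15},
    {"repository": "GitHub", "article_type": "research article", "fair_score": 5},
    {"repository": "GitHub", "article_type": "letter", "fair_score": 4},
]
df = pd.DataFrame(records)

# Mean, SD, and count of FAIR scores per repository, mirroring the
# paper's descriptive analysis (the same groupby works for article type
# or journal category).
summary = df.groupby("repository")["fair_score"].agg(["mean", "std", "count"])
print(summary)
```

The same `groupby(...).agg(...)` call, swapped to the `article_type` column, would reproduce the article-type breakdown.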

Results

A total of 5,700 URLs, each sharing data in a general-purpose repository, were included in the final analysis. The mean (standard deviation, SD) level of compliance with the FAIR metrics was 9.4 (4.88). The percentages of moderate or advanced compliance were as follows: Findability: 100.0%, Accessibility: 21.5%, Interoperability: 46.7%, and Reusability: 61.3%. The overall and component-wise monthly trends were consistent over the follow-up. Reviews (9.80, SD = 5.06, n = 160), articles in dental journals (13.67, SD = 3.51, n = 3) and Harvard Dataverse (15.79, SD = 3.65, n = 244) had the highest mean FAIRness scores, whereas letters (7.83, SD = 4.30, n = 55), articles in neuroscience journals (8.16, SD = 3.73, n = 63), and those deposited in GitHub (4.50, SD = 0.13, n = 2,152) showed the lowest scores. Regression models showed that the repository was the most influential factor for FAIRness scores (R² = 0.809).
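The reported R² = 0.809 for "repository" reflects a linear model with repository as a categorical predictor: such a model fits each repository's mean score, so its R² is the share of score variance lying between repositories. A small self-contained sketch with illustrative values (not the paper's data, and not its actual modelling code) makes this concrete:

```python
import pandas as pd

# Hypothetical scores for three repositories, three articles each.
df = pd.DataFrame({
    "fair_score": [16, 15, 17, 5, 4, 5, 10, 9, 11],
    "repository": ["Dataverse"] * 3 + ["GitHub"] * 3 + ["Zenodo"] * 3,
})

# A one-way linear model with only a categorical repository term predicts
# each group's mean; R^2 = 1 - SS_residual / SS_total is then the fraction
# of FAIRness-score variance explained by the repository alone.
pred = df.groupby("repository")["fair_score"].transform("mean")
y = df["fair_score"]
r_squared = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r_squared, 3))
```

Here the groups are far apart relative to their internal spread, so R² is high, analogous to the paper's finding that repository choice dominates FAIRness.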

Conclusion

This paper underscored the potential for improvement across all facets of FAIR principles, specifically emphasizing Interoperability and Reusability in the data shared within general repositories during the COVID-19 pandemic.
