Most scientists reading this probably assume that their research-integrity office has nothing to do with them. It deals with people who cheat, right? Well, it’s not that simple: cheaters are relatively rare, but plenty of people produce imperfect, imprecise or uninterpretable results. If the quality of every scientist’s work could be made just a little better, then the aggregate impact on research integrity would be enormous.
I have been working to figure out how institutions can encourage broad, incremental improvements. Two things are needed: a collective shift in mindset, and a move towards appropriate measurement.
Over the past 2 years, some 20 institutions in the United Kingdom have joined the UK Reproducibility Network (UKRN), a consortium that promotes best practice in research. They have created senior administrative roles to improve research and research integrity. I have taken on this job (on top of my research on evaluating stroke treatments) at the University of Edinburgh. Since then, I’ve focused on research improvement rather than researcher accountability. Of course, deliberate fraud should be punished, but a focus on investigating individuals will discourage people from acknowledging mistakes, and mean that opportunities for systems to improve are neglected.
At the University of Edinburgh, we have audits as part of projects to reduce bias in animal research, speed up publication and improve clinical-trial reporting. These are not the metrics that most researchers are used to. Many people are initially wary of yet another ‘external imposition’, but when they see that this is about promoting our own community’s standards — and that there are no extra forms to fill in — they usually welcome this shift in institutional focus.
Here’s what we are learning to look for at my university.
Integrity indicators. Counting papers published in Science or Nature or prizes received is a poor reflection of performance. Measures should reflect the integrity of research claims: for instance, the proportion of quantitative studies that also publish data and code, and that pre-register their hypothesis, study design and analysis plan. At the University of Edinburgh, we are focusing on the reporting of randomization and blinding in published animal studies that test biomedical hypotheses. Existing tools can be applied to such tasks. The DOIs of publications that match a series of ORCIDs (author IDs) can be identified, the open-access status ascertained through the Unpaywall database, and these details can be linked back to institutions, departments or even individual research groups.
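To make that linkage concrete, a minimal sketch of such an audit pipeline, using the public ORCID (v3.0) and Unpaywall REST APIs, might look like the Python below. The ORCID iD and contact e-mail are placeholders, and a real audit would add pagination, caching and an institutional mapping from ORCID iDs to departments.

```python
# A sketch of linking researcher IDs to publication open-access status.
# Assumptions: the `requests` library is installed; the ORCID iD and the
# contact e-mail below are placeholders, not real accounts.
import requests

ORCIDS = ["0000-0002-1825-0097"]          # hypothetical researcher iDs to audit
CONTACT_EMAIL = "oa-audit@example.ac.uk"  # Unpaywall asks for a contact e-mail

def dois_for_orcid(orcid: str) -> set[str]:
    """Fetch the works on a public ORCID record and extract their DOIs."""
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    dois = set()
    for group in resp.json().get("group", []):
        for summary in group.get("work-summary", []):
            ids = (summary.get("external-ids") or {}).get("external-id", [])
            for ext in ids:
                if ext.get("external-id-type") == "doi":
                    dois.add(ext["external-id-value"].lower())
    return dois

def oa_status(doi: str) -> str:
    """Look up a DOI in Unpaywall and report its open-access status."""
    resp = requests.get(
        f"https://api.unpaywall.org/v2/{doi}",
        params={"email": CONTACT_EMAIL},
        timeout=30,
    )
    resp.raise_for_status()
    record = resp.json()
    return record.get("oa_status", "unknown") if record.get("is_oa") else "closed"

# Print one line per publication; aggregating these rows by department or
# research group yields the institution-level indicator described above.
for orcid in ORCIDS:
    for doi in sorted(dois_for_orcid(orcid)):
        print(orcid, doi, oa_status(doi))
```

Aggregated over a department’s ORCID iDs, the resulting counts give exactly the kind of institution-level open-access indicator described above, without any extra forms for researchers to fill in.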
I care more about how my institution is doing compared with last year than about how it performs relative to other organizations. That said, benchmarking can be useful — and working with other organizations can help to develop standard reporting tools without reinventing the wheel.
Evidence of impact. Having data in hand allows an institution to focus on what can be improved, and how. In 2019, only 55% of Edinburgh clinical trials were fully reported on the European Union Clinical Trials Register. Programmes to reach trial organizers (by e-mailing reminders and mentoring them through the process) increased this to 95% in 2021. To build on that, I am working with members of UKRN and others to develop institutional dashboards that will provide real-time data across a range of measures, such as clinical-trial reporting and the quality and timeliness of animal-research reporting.
Evidence of effectiveness. When a simple, inexpensive intervention improves reporting from 55% to 95%, you don’t need a randomized controlled trial. But it’s important to make sure that more-involved interventions have the desired effect. The scientific skills needed to establish causality can be applied to assess efforts in and across institutions. For example, at the University of Edinburgh, we offer researchers free consultations on methodology as they write grant applications, and this requires both applicants and consultants to invest much more of their time. We are also designing randomized studies to see whether and how methods and award rates improve.
A culture of trust. Many scientists have been scarred by successive, energy-sapping evaluations. More than one university has based layoffs on counts of faculty members’ high-impact papers or high-value grants, a practice that will make researchers sceptical of claims about prioritizing quality. Approaches to improvement need to be open and transparent, and constructive rather than punitive.
Learning from each other. No institution should go it alone. UKRN members are collaborating to ease workloads and encourage standardization, for instance by deploying a common research-culture questionnaire. Creating shared standards is the best way to change norms; otherwise, early-career researchers will be tempted to concentrate on impressing future employers rather than on their current role.
My goal is that institutions should focus on what they can do to increase research integrity, not on the integrity of their researchers.
"Stop" - Google News
November 24, 2021 at 07:09PM
https://ift.tt/3COTiTK
Want research integrity? Stop the blame game - Nature.com
"Stop" - Google News
https://ift.tt/2KQiYae
https://ift.tt/2WhNuz0
Bagikan Berita Ini
0 Response to "Want research integrity? Stop the blame game - Nature.com"
Post a Comment