“Reproducibly bad”: How Imperial’s Beyond Open Research project is reimagining ‘quality’

6 March 2026

Hamid Khan, Open Research Manager, Imperial College London

What really matters in research? What practices and behaviours lead to the highest-quality research? How can we recognise and incentivise those good practices?

These are some of the questions the Beyond Open Research (BOR) project has been exploring at Imperial College London since 2024. Led by Library Services, the project aims to transform the culture in which research is planned, carried out, disseminated and used in academia and society. In doing so, we are also learning valuable lessons about the role and expertise of the academic library.

We may believe Open Research is a public good in its own right, and we know it helps address persistent threats to the integrity, relevance and usefulness of research:

  • the reproducibility crisis
  • research waste
  • publication bias
  • perverse incentives. 

But we also know openness itself doesn’t imbue research with quality. We need to think beyond what open practices alone can achieve. At Imperial, we are doing this by asking how to address gaps in transparency and scrutiny of research data, software and real‑world applications, and how to refocus incentives away from “publish-or-perish” towards what actually matters for conducting and communicating high‑quality research. That’s what it means to go “Beyond” Open Research.

How do we improve the quality of Open Research? The broadest answer is “peer review”, but peer review of research data is underdeveloped, unstandardised and challenging: there are few accepted standards for evaluating data quality. We are going beyond Open Research by engaging researchers and research professionals to answer outstanding questions about the transparency and quality of research data.

To go beyond Open Research, we must identify the values and practices that matter for the conduct and communication of high-quality research. Yet this is difficult in a culture of “publish-or-perish”. We need to refocus incentives towards what actually matters, recognising the challenge laid down by DORA and the urgent need to make its commitments real and practical. We are engaging with researchers, HR, the Research Office and others to explore how Imperial can build a culture of progressive and inclusive research evaluation that recognises high-quality research practices in assessment, hiring, promotion and funding decisions.

Doing this has led to some startling conclusions. For one, perhaps we are thinking about quality all wrong. When we lament the “reproducibility crisis”, what are we really worried about? Is it the ability of one researcher to recreate someone else’s findings using their methods, data and analyses? Surely not. We know this from the way AI is used and misused in society. Models trained on biased or incomplete datasets, built on weak assumptions about difficult-to-measure real-world phenomena and lacking meaningful validation, can give the same useless result consistently. That is to say, they are reproducibly bad. Reproducibility is not enough, and perhaps it’s not even the right focus.

What’s more, professional staff and the public are too often treated simply as “non‑academics” in these discussions, and their voices systemically excluded. We have challenged this exclusion. Our hands-on workshops have brought together researchers at all career stages with professional, technical and operational staff and members of the public to identify the challenges and co-create practical solutions. We have created a collaborative space in which people can contribute experience, insight and expertise that researchers alone do not have. By emphasising parity of esteem and Team Science, we have empowered those traditionally excluded to shape research culture with their perspectives. 

That methodological innovation is crucial, because it demonstrates the shift we need in our thinking about research quality: towards usability, interpretability, contextual grounding, provenance and validity in real‑world decision contexts. In other words, we’re moving from reproducibility to reliability. Our approach gives me the confidence that our solutions possess trustworthiness, legitimacy and, above all, reliability.

Workshop participants agreed: we need to embed reliability in research data – not incidentally but systemically and culturally. Our work has shown us:

  • there is a strong, shared commitment to data quality and public involvement
  • everyone who contributes to, and benefits from, research is motivated to address training and partnership gaps
  • Team Science provides a constructive way to rethink research assessment
  • our workshop format is a proven, scalable model for meaningful co‑creation
  • the academic library has a central role as a motivated and expert convenor of conversations about research quality and culture.

Using insights from the last two years of intensive discussion, consultation and co-creation, we have developed a training programme focused on embedding transparency and reliability in research data. The programme, developed in partnership with the MRC Laboratory of Medical Sciences (LMS) – a semi-autonomous Imperial research institute – and the UK Reproducibility Network, launched in February 2026.

A cohort of LMS postdoctoral researchers and early career fellows will be trained to identify and address the systemic factors affecting transparency and quality in their own research. Through peer learning and a train-the-trainer approach, they will develop the confidence and skills to champion research data transparency and reliability in their groups and departments. This strand will culminate in the participants developing a collaborative, Team Science-based approach to data peer review – something not yet realised or embedded in research culture.

If we truly want to reform research culture, we must address reward and assessment concurrently. That’s where Team Science comes into its own. As defined by workshop participants, Team Science at Imperial is a “collective and collaborative approach to research that is enabled by knowledge-sharing, training, diversity and recognition”. Project participants are using the SCOPE Framework for Research Evaluation to develop ways to recognise and incentivise the work that goes into making research data transparent and reliable. This will lead to a pathway by which Imperial’s commitment to DORA can be made real, practical and beneficial to the research community. That remains a long-term ambition.

This is far from a solo effort. I am grateful to the entire project leadership team, particularly our external collaborators Fiona Booth (University of Bristol) and Lizzie Gadd (Loughborough University), whose work developing the Reproducibility by Design toolkit and the SCOPE Framework, respectively, underpins our programme. BOR is the first attempt to bring the two approaches together.

BOR is a pioneering and relentlessly practical project, involving researchers, professional staff and the public in co‑creating meaningful and lasting change in Imperial’s research culture. We are not merely talking about it. We have a rare opportunity to make a real difference, and I am proud and excited to be leading it.

Dr Hamid Khan is Open Research Manager at Imperial College London. He works with researchers, academics and professional staff to advocate transparency, equity, accessibility and reliability in research. To embed Open Research, Hamid is intensively focused on reforming research culture, encouraging the university community to refocus away from publication volume and towards quality. He also serves as Imperial’s Local Network Lead within the UK Reproducibility Network and, since January 2025, has been Vice‑Chair of SPARC Europe. A “recovering academic”, Hamid holds a PhD in nanomaterials chemistry.