Productive interactions without impact?

Magnus Gulbrandsen and Silje Maria Tellmann

This blog post is based on the Evidence & Policy article, ‘Productive interactions without impact? An empirical investigation of researchers’ struggle to improve elderly’s oral health’, part of the Special Issue: ‘Learning from Failures in Knowledge Exchange’.

Even if researchers do everything that is expected of them – collaborate with stakeholders, target important societal problems, engage in intensive science communication – societal impact may still not happen. What are the possible explanations?

A recurring observation in studies of the societal impacts of research is that substantial change typically involves a great deal of ‘productive interaction’ between stakeholders and researchers. However, not all interactions lead to the desired societal impacts, as our empirical study of a cross-disciplinary research group focused on improving oral health in the elderly shows. In our Evidence & Policy article, we examined the subtleties of productive interactions and the intricate web of stakeholders, to shed light on the gaps that keep research efforts from having the desired societal impact.

We followed a group of researchers for six years, and even though they carried out many of the recommended activities to make impact happen, they were unable to achieve the expected outcomes. Although many events took place that might – from an optimistic perspective – lay the groundwork for future impact, no decisions in policy or practice targeting the elderly’s oral health emerged. To analyse this process, we began by considering the oral health of the elderly as a problem area in which a wide range of stakeholders have a stake, but with varying interest, sense of urgency or capacity to make changes happen.

Continue reading

What can we learn from co-production approaches in voluntary sector evaluation work?


Louise Warwick-Booth, Ruth Cross and James Woodall

This blog post is based on the Evidence & Policy article, ‘Obstacles to co-producing evaluation knowledge: power, control and voluntary sector dynamics’, part of the Special Issue: ‘Learning from Failures in Knowledge Exchange’.

Co-production has been increasingly discussed as a positive and useful approach in health and social care research, based on principles such as partnership working, reciprocity, power sharing and the appreciation of all forms of expertise. We have used co-production values to inform our evaluation work for many years, but in our Evidence & Policy article we reflect upon the challenges that such approaches bring, specifically in relation to sharing findings, known as knowledge exchange. Our article discusses evaluation work across three interventions that constitute perhaps the most challenging of our experiences in over a decade of such work. Conflict in evaluation work remains largely underreported, but we feel our experiences provide a useful contribution for readers.

Continue reading

Evidence & Policy Call for Papers – Special Issue on Learning through Comparison

Special Issue Editors: Katherine Smith, Valerie Pattyn and Niklas Andersen

Evidence & Policy is pleased to invite abstracts for papers that explicitly employ comparative analysis and/or that develop insights about evidence use in policy through comparison. Authors of selected abstracts will be invited to submit a full paper for consideration for inclusion in a special issue that is aiming to demonstrate the conceptual and empirical contribution that comparative research can offer scholarship on evidence and policy.

Continue reading

Why failure isn’t the f-word in knowledge brokering


Stephen MacGregor

This blog post is based on the Evidence & Policy article, ‘Theorising a spectrum of reasons for failure in knowledge brokering: a developmental evaluation’, part of the Special Issue: ‘Learning from Failures in Knowledge Exchange’.

Failure often gets a bad rap, especially in professional settings. It’s usually seen as a waste of time and resources, something to steer clear of. But failure is not just an unfortunate outcome; it can be a crucial learning opportunity.

Particularly in higher education, the pressure is on for academics and universities to show the real-world impact of research. Here, knowledge brokers play a critical role: they are the human force behind efforts to connect research production and use contexts. Yet, the challenges and failures that these professionals face are not often discussed.

My recent Evidence & Policy article aimed to shed light on the spectrum of reasons for failure in the professional practice of knowledge brokering, drawing on a set of semi-structured interviews with a network of knowledge brokers. To understand knowledge brokers’ experiences, two frameworks were integrated: (a) the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework, and (b) Dr. Amy Edmondson’s Spectrum of Reasons for Failure framework.

Continue reading

Learning from failures in knowledge exchange: how hard can it be?


Peter van der Graaf, Ien van de Goor and Amanda Drake Purington

This blog post is based on the Evidence & Policy article, ‘Learning from failures in knowledge exchange and turning them into successes’, which introduces the Special Issue: ‘Learning from Failures in Knowledge Exchange’.

We don’t like talking about failures, as they signal loss of time, resources and reputation, but failures present opportunities for learning in knowledge exchange. However, this requires a ‘failure culture’ in academia and policy, in which discussing failures is no longer avoided but actively encouraged. To learn how to turn failures into successes, we need to share and publish our failures, engage all stakeholders early in the knowledge exchange process, and make more use of boundary spanners.

There are plenty of papers celebrating successes in knowledge exchange, but not many researchers and policy makers talk openly about their failures. However, learning from failures is just as important as celebrating successes, if not more so. Allowing partners to reflect in a safe space on knowledge exchange practices and research projects gone wrong – in which communication broke down, partners did not engage or dropped out, and evidence was not taken up or was ignored – will provide important lessons on how knowledge exchange practices and research can be improved.

At the 5th Fuse conference on knowledge exchange in public health, held in Newcastle, UK on 15–16 June 2022, we created such a space by bringing together over 100 academic researchers, policy makers, practitioners, and community members to share and reflect on their failures and how to turn them into successes. Our special issue brings together selected papers from the conference and papers that were submitted in response to an open call afterwards. From 23 original submissions from 14 different countries (including the UK, USA, Canada, Norway, Switzerland, Kenya, Chile, South Korea and Portugal) and from a range of disciplines and areas of focus (Public Health, Primary Care, Oral Health, Sociology, Anthropology, Public Management, Policy-Making, and Community and Voluntary Sector), we invited four research papers and three practice papers for full submission.

Continue reading

Breaking the Overton Window: on the need for adversarial co-production


Matthew Johnson, Elliott Johnson, Irene Hardill and Daniel Nettle

This blog post is based on the Evidence & Policy article, ‘Breaking the Overton Window: on the need for adversarial co-production’.

Co-production has emerged as one of the key concepts in understanding knowledge–policy interactions and is associated with the involvement of users of public services in their design and delivery. At a time of permacrisis, in which ever-increasing numbers of Britons are exposed to financial insecurity, the need for transformative evidence-based policymaking is urgent. This is particularly so in highly distressed ‘left-behind’ communities targeted by the UK Government for Levelling Up, which constitutes an attempt to improve the infrastructural, economic, social and health environments of less affluent parts of the UK.

Often, policymakers regard the transformative policies capable of addressing these crises as beyond the ‘Overton Window’, which describes a range of policies in the political centre that are acceptable to the public. This window of opportunity can shift to encompass different policies, but movement is slow and policymakers generally believe that significant change lies outside it. This creates an Overton Window-based roadblock in evidence-based policymaking.

Continue reading

Science communication poses barriers in Congress for evidence-based policymaking, but less so for science and engineering fellows


K. L. Akerlof, Maria Carmen Lemos, Emily T. Cloyd, Erin Heath, Selena Nelson, Julia Hathaway and Kristin M. F. Timm

This blog post is based on the Evidence & Policy article, ‘Science communication in Congress: for what use?’

A new model published in Evidence & Policy explains the factors that enable and constrain science communication in the U.S. Congress. We show how scientific information is most often called upon to support established positions rather than to formulate new policies, and how this changes the nature of the barriers to science communication. We studied this in the context of two types of Congressional staff: 1) science and engineering fellows who spend a year serving primarily in the personal offices of members (hereafter referred to as fellows), and 2) the legislative staff with whom they work. We found that fellows serving on the Hill experience fewer barriers to the use of scientific information than legislative staff, which suggests the importance of scientific fluency for building congressional capacity.

Continue reading

Putting meat on the bones of data – how legislators define research evidence


Elizabeth Day

This blog post is based on the Evidence & Policy article, ‘How legislators define research evidence’.

When people ask about my research area, I answer that I study how policymakers use research evidence. Their response always follows a similar thread: ‘That sounds hard’ and ‘Ha! Do they even know what research is?’ These reactions align with a broader opinion in the United States that elected officials are clueless when it comes to using research evidence in the decision-making process.

Yet there are plenty of examples in research, legislation, and regulations where policymakers do use research in their work. My colleague Karen Bogenschneider and I wondered if this mismatch – assuming policymakers don’t use research when there are examples that they do – might have to do with a jingle-jangle problem: Do researchers and legislators actually mean the same thing when they say ‘research evidence’?

Continue reading

How to do knowledge mobilisation? What we know, and what we don’t


Hannah Durrant, Rosie Havers, James Downe and Steve Martin

This blog post is based on the Evidence & Policy article, ‘Improving evidence use: a systematic scoping review of local models of knowledge mobilisation’.

Knowledge mobilisation (KM) describes a process for enabling the use of research evidence in policymaking and public service design and delivery. Approaches to KM have evolved over the last two decades – away from one-directional efforts to push research out to decision makers towards a kaleidoscope of research-policy-practice engagement across overlapping phases of knowledge production and policy action. These processes are generally poorly understood at local levels of decision-making, where the specificities of policy and public service context can undermine generic ‘what works’ claims.

Our recent Evidence & Policy article, ‘Improving evidence use: a systematic scoping review of local models of knowledge mobilisation’, identifies three key features of local KM as well as highlighting the gaps in our understanding of how KM is done and with what effect. 

Our aim was to determine how KM is done ‘on the ground’, which can get obscured in frameworks that emphasise complexity while simplifying process. We argue that more detail is needed on these KM practices to inform and improve the process. Equally, attention is also needed to the demand for, and impact of, evidence on policy and practice decisions.

Continue reading

Engaged scholarship entrepreneurship and policy impact


Kiran Trehan

This blog post is based on the Evidence & Policy article, ‘Compatible bedfellows? Engaged scholarship entrepreneurship and policy impact’.

In a rapidly evolving world, the role of entrepreneurship research and its impact on policy is more critical than ever. In this blog, I expand on my commentary on Johnson (2023) by exploring the intricate relationship between theory and its real-world application, shedding light on the uncertainty that has long surrounded entrepreneurship and small and medium-sized enterprise (SME) research. For years, the debate on how research can truly impact practice has been at the forefront of social science discussions (Beyer and Trice, 1982; Starkey and Tempest, 2005; Rynes, 2007; Trehan et al., 2018, 2022). This debate has emphasized the need for applied research in entrepreneurial scholarship that reflects the actual experiences of businesses.

Recognizing and appreciating the importance of research impact is not just a strategic concern for university business schools; it’s a measure of research’s real-world value. The gap between researchers and practitioners has significantly influenced how research is perceived, with academics focusing on ‘rigor’ and practitioners on ‘relevance’. Striking a balance between these expectations is crucial for both communities (Trehan, 2022). Edwards (2018) asserts that achieving policy impact is not only desirable but feasible, despite persistent challenges such as engaging small business owners and sustaining interaction over time.

Continue reading