
Junket or Gem? Reflections on the ARMS UK Impact Study Tour



In early June, I joined a group of intrepid antipodeans on the long haul to Old Blighty, to find out what impact, and impact measurement, mean to the UK’s research sector. The driver for this trip comes from the increasing push in Australia to place research impact front and centre. This is impact beyond the traditional academic focus on bibliometric measures: designing and conducting research which makes a demonstrable contribution to the economy, society, culture, national security, public policy or services, health, the environment, or quality of life. The clanging arrival of the impact agenda has been increasing in volume for some time (like a red-wine hangover dully thudding in the corner of one’s Sunday morning head, some might say). And the introduction by the Federal Government this year of the Engagement and Impact (E&I) Assessment pilot, with the joy of the ‘full-blown affair’ scheduled to run alongside the Excellence in Research Australia exercise next year, has pushed poor old impact fairly and squarely into the spotlight.

In the UK, the Research Excellence Framework (REF) exercise, which last ran in 2014, included the requirement for impact case studies to be submitted across all fields of research. These were rated between one and four stars (four being quality that is world-leading in terms of originality, significance and rigour). Impact case studies are meant to prove or illustrate the achievement of impact, and importantly, the impact component was weighted at 20% of the total REF assessment – influencing the allocation of around £1.6 billion worth of public funding over the following five years.

In Australia, we are dipping our toes in the water with the E&I Assessment, and the results of the 2018 exercise will not be tied to funding allocation…yet…(watch this space, I predict). So, was there any value garnered from our visits to the eight or so UK institutions on the itinerary? In a word, yes. Two more words: in spades.

Here are the top five lessons from the trip:

  1. Without a culture of impact embedded into the DNA of the institution, there is a very real problem of impact focus being driven by selective reporting (i.e. we only plan for and value impact because we have to report on it, and we cherry-pick the best of the bunch). This is not a new insight; in fact, it has been partly investigated through the Stern Review and commented on by researchers and non-researchers alike (see Richard Watermeyer’s excellent article in The Conversation last year for a summary: https://theconversation.com/stress-put-on-academics-by-the-ref-recognised-in-stern-review-63237). We run the very real risk here in Australia of falling into the trap of producing impact case studies because we have to, instead of ensuring the focus on impact is mandated from the very top and driven from the bottom up. We need believers and facilitators, not box-tickers.

  2. What the heck is impact anyway? As you can well imagine, impact coming from work out of The Rights Lab beacon at the University of Nottingham (http://rightsandjustice.nottingham.ac.uk/) tends to be a tad different to impact coming from the Kettle’s Yard art precinct at Cambridge (http://www.kettlesyard.co.uk/). Defining impact across the disciplines and research effort is tricky and contentious. Proving it is a monumental task too. Here in Australia we must accept the nuance of defining impact, giving it loose, feathery boundaries rather than a rigid box. Not only will this enable researchers to innovate outside fixed confines, but it will inevitably lead to more cross-, inter- and multi-disciplinary teams than ever before. (Notice the emphasis on teams, people.) In the UK there is also a huge drive beyond traditional bibliometrics towards ‘responsible metrics’ (i.e. only measuring those things which are actually measurable, in both E&I and research assessment more generally), which I expect Australia will shift towards as well. We must also remember that not all research can be easily envisioned as having impact – neatly touched on in this recent article: https://theconversation.com/academics-fear-the-value-of-knowledge-for-its-own-sake-is-diminishing-75341 – but it is vital nonetheless to the human endeavour.

  3. Public engagement, as part of the impact agenda, matters more in the UK than it does in Australia. Some of this appears to be driven by the public’s fatigue with ‘experts’ and the divide between the public and universities. There is a battle being waged against the public perception of higher education institutions as irrelevant, out of touch, unaccountable and a waste of taxpayer money; they are also considered secretive, untrustworthy, elitist and reinforcing of inequality. Of course, the Brexit mess hasn’t helped. Our engagement challenge, if you will, is more with industry and government than with the public.

  4. UK universities, supported I must say by the research councils and pots and pots of funding (e.g. Impact Acceleration Accounts, http://www.esrc.ac.uk/funding/funding-opportunities/impact-acceleration-accounts/), are strategically investing in people and resources whose sole purpose is to define, facilitate, elicit, lubricate (too much?), and generally help researchers, industry, government and civil society achieve the best impact possible. Scale aside, in Australia this just isn’t the case. A clever government (I know, oxymoron) will acknowledge that impact is hard to deliver and that it is unrealistic to place the burden on an already stretched sector without targeted support. The National Innovation and Science Agenda is a good start, though.

  5. Leading on from point four is the fact that industry, government, civil society and the research sector in the UK have a much more symbiotic relationship than they do here in Australia. Rather than the mystery and, at times, master/servant relationship which exists here, in the UK there seems to be a history of mutual trust and value (okay, maybe I’m stretching it a bit) – but honestly, the relationships appear deeper than just the next widget and the next election cycle. Our lesson is to stop merely talking about the need to engage and instead teach researchers, government, industry and civil society how to engage. We get the theory – now can we practise, practise, practise, please?

The wrap up

This might all sound as though I’ve returned from the study tour starry-eyed about the UK and secretly investigating how I can emigrate (impossible, as it turns out), but there’s an upside for us in the UK being further down the road than we are. We get to learn all the lessons from the UK, hopefully avoid stepping in the same mess it has in terms of REF-driven activity being at odds with genuine impact-driven research, and use some of their excellent ideas to smooth the path ahead.

In summary: we need an impact culture to be embedded into the DNA of our research institutions, top down and bottom up; we need to be able to define and value impact in all its guises – this means hard conversations on a regular basis; we need to invest in supporting researchers to achieve impact (not just do research), and I mean seriously invest; and we must stop talking about engagement and truly be engaged – this can only be achieved through capability-building and resourcing.

Australia has some of the best researchers and the best research in the world. This is a true fact (as my nine-year-old would say). Let’s make sure they have every possible chance of achieving impact through support, leadership, capacity-building and engagement – doing the research is the easy part (!) – getting it out there and used is the challenge.

* Thanks to ARMA UK, the University of Sheffield, the University of Nottingham, Loughborough University, Cambridge University, Cranfield University, King’s College London – The Policy Institute, the London School of Economics, and the National Centre for Universities and Business for their time, candour and insights.
