Evaluating CEED’s impact
What is CEED’s impact?
If CEED were an academic, its impact might be summarised by its publication record. Over the last six years, CEED research has produced more than 800 journal publications. One in every 28 of these papers is considered ‘highly cited’, meaning it has received enough citations to place it in the top 1% of its field. Collectively, these publications have an H-index of 45 (see more on the H-index below).
But real-world impact (or the knowledge transfer and application of CEED’s many outputs) is much harder to pin down than a simple publication index. CEED has dissected 87 of its most iconic research projects, and 58 of these engaged over 100 end-users and stakeholders based in government or industry positions.
The wide scope and sheer volume of CEED research make evaluating its impact a considerable task. What’s more, the environmental sciences pose unique evaluation challenges, chiefly the lag between results becoming available, those results contributing to a policy change, and the point at which the difference that change has made to society can actually be measured.
In the second half of June, David Pannell from UWA and CEED’s Director Kerrie Wilson convened a workshop in Perth with CEED Chief Investigators and external reviewers to discuss the impact that CEED-funded research has had (and continues to have) on our world.
“There is very little published work on how to evaluate the impact of environmental science research,” says David Pannell. “The existing thinking and evidence on the benefits of research is largely focused on agricultural science. We’ve had to adapt that to some degree to suit environmental research, but some of the key factors are consistent. These include identifying what new knowledge or systems have been generated by the research, what decisions or practices they will change, the extent of that change, the importance of the change, and the time lags involved. Our work evaluating CEED’s impact will, we hope, demonstrate a process and contribute to a better understanding of the various drivers at play.
“One of the things we’ve been trying to thrash out is whether CEED has had an effect that is ‘greater than the sum of its parts’. That is, what extra might have been gained from the ARC funding CEED as a centre, compared to if they had instead individually given funding to each CI.”
Part of the process will involve interviewing a cross section of stakeholders involved in CEED’s research. These professionals will be asked about their perspectives, views, and other forms of evidence to support CEED’s claims of impact.
Ideas being explored in this evaluation include: the adoption of CEED-inspired terminology; a change in mindset within an organisation, in terms of raising awareness of environmental decision-making tools, methods and approaches; and the extent to which CEED research has contributed to policy change, the prioritisation of funding or the establishment of programs.
“We are starting to build evidence that CEED has achieved more impact in environmental decision making than would have been possible by the sum of all the individual, smaller research projects led by a group of individuals working in isolation,” observes Kerrie Wilson.
“This is partly due to the networks and collaborations that have been formed as a result of CEED. As such, the evaluation will also look into how CEED contributed to the effectiveness of postgraduate research training programs, research communication, and the career development of researchers.
“And, on top of this, if CEED can demonstrate a process by which an environmental research network can effectively demonstrate its own impact, well that will be one more important legacy we leave behind.”
More info: Tammie Harold firstname.lastname@example.org
Would you be happy with an H-index of 45?
Does an academic who produces 200 papers have twice the impact of another who produces only 100? What if someone had published just 5 papers but each one was cited 1,000 times in other papers (whereas each of the first academic’s 200 papers had been cited only once)? Citations, of course, are one measure of the influence of your research; the more times you are cited, the greater your influence. It is sometimes suggested that the number of papers you publish is an indicator of quantity, whereas the number of times each paper is cited is an indicator of quality.
Which is more important, quantity or quality? The answer (as it always is in science) is: it depends. It depends on what the papers were about (whether they solved something deemed important), how the research was done and how many people used the results. Impact is a very relative thing.
Measuring the impact of research has been a long-running challenge, and there have been many efforts to produce publication indexes that reflect the impact of individual researchers. Indexes are needed* because raw publication data can be quite misleading. Numbers of papers and numbers of citations per paper are the two key bits of information, but how to combine them in a simple and meaningful way has always been tricky.
One of the most common indexes used these days is the H-index (first suggested by the physicist Jorge Hirsch in 2005). A scholar with an index of h has published h papers, each of which has been cited in other papers at least h times. For example, a researcher who has published 20 papers would need each of them to be cited 20 times to score an H-index of 20. If this researcher then published a 21st paper, their H-index would not rise to 21 until all 21 papers had been cited at least 21 times.
The H-index reflects both the number of publications and the number of citations per publication. And while it is relatively easy for a competent academic to reach an index of 10 (10 papers cited at least 10 times), it becomes increasingly difficult to increase your H-index the higher you get.
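The definition above translates directly into a few lines of code. As a minimal sketch (the function name `h_index` is ours, not anything from the article): sort a researcher’s per-paper citation counts from highest to lowest, then find the largest rank i at which the i-th best paper still has at least i citations.

```python
def h_index(citations):
    """Return the H-index for a list of per-paper citation counts."""
    # Rank papers from most-cited to least-cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        # The i-th best paper must have at least i citations for h = i.
        if c >= i:
            h = i
        else:
            break
    return h

# The two hypothetical academics from the text:
print(h_index([1] * 200))    # 200 papers, 1 citation each  -> H-index of 1
print(h_index([1000] * 5))   # 5 papers, 1,000 citations each -> H-index of 5
print(h_index([20] * 20))    # 20 papers, 20 citations each -> H-index of 20
```

Note how the sketch makes the quantity-versus-quality point concrete: 200 barely-cited papers score a 1, while 5 heavily cited papers can never score above 5, because the index is capped by both paper count and citations per paper.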
An H-index of 45 (45 papers cited at least 45 times) would be excellent for a high achieving mid-career academic. Having said that, it’s impossible to compare an individual’s track record to a research network like CEED. The complexity surrounding the concept of ‘impact’ for a network of researchers is enormous, which helps explain why no universal index exists to reflect such impact.
*As a side note, CEED has done a bit of research on what are the ingredients of an effective index (see Decision Point #56) and the related idea of how to build a strong prioritisation metric (see Decision Point #82).