Embarking on a career in academia is a challenging endeavour. The number of fixed positions is limited and, in many cases, the number of suitable candidates exceeds the number of opportunities. Furthermore, the context and the demands placed on academics are changing, and it is important to have selection processes in place that are based on previously established, transparent criteria which ideally reflect the full potential of early career researchers (ECRs).
Whilst the ‘quality’ or ‘impact’ of research cannot be measured objectively and is always debatable, recent years have seen a trend towards using numerical metrics to evaluate the potential of researchers. Various quantitative indicators have been developed and become popular for this purpose, including the Journal Impact Factor (JIF) and the h-index, which claim to represent the productivity and visibility of a researcher in a comprehensive manner. They depend on how much and where a researcher has published, or how often their work has been cited. And these indicators are widely used: according to a recent study by the European University Association, 53% of responding institutions consider research output, measured by the number of publications and citations, ‘very important’, and another 29% consider it ‘important’. 75% of the universities surveyed use the JIF to evaluate researchers.

This widespread reliance on just a few indicators is quite a recent development. The JIF was first developed in the 1960s to advise librarians on which journals were most popular within a research community and therefore worth subscribing to. It was never intended as a measure of the quality of research or of the performance of an individual researcher, as it is used today. What once measured the popularity of a journal now influences how research is done and communicated.
While such indicators are easy to handle and promise transparency, they entail several problems. When only the journal and the quantity of publications matter, content and impact beyond a given indicator lose relevance. As a consequence, ‘publish or perish’ becomes a dominant factor in career planning. This leads to questionable research practices aimed at producing spectacular results, and to mental health issues among researchers under extreme pressure to publish enough and in the ‘right’ journals.
This creates a situation in which ECRs are forced to orient their publication strategy around quantitative indicators rather than around the question of where their own work fits best. An ECR who decides to publish Open Access may in fact be ‘punished’ for this decision, as Open Access journals typically have lower impact factors. The same applies to publishing books (especially doctoral theses) or transdisciplinary research. ECRs are also pushed to focus their attention on publishing rather than investing time in other research-related activities, even when these are important for their future within or outside academia: for example, collecting and processing data according to the FAIR principles, science communication, interacting with society, or teaching. A more holistic approach to researchers and research careers therefore requires the development of alternatives and a broader assessment system.
This is not easy, as everybody needs to be ‘on board’: universities, funders and senior researchers as much as postdocs and doctoral candidates. It also entails a reflection on institutional evaluations, funding structures and rankings, which likewise focus on quantitative indicators. Alternatives must be developed and discussed with everybody involved, and in this regard a lot has already happened in the last decade. Documents such as the DORA declaration and the Leiden Manifesto suggest ways to proceed. But there is still a long way to go: from declarations of principles to their implementation in an institutional context. Universities are developing and sharing alternative approaches to (academic) career assessment, and some of these were presented in a webinar series EUA organised this spring.
It is, however, also important to be aware of possible ‘side effects’ and how to overcome them. By increasing the number of evaluation criteria, planning a career may become even more complex than it is today. Institutions will develop different policies depending on the context in which they find themselves, as well as on their institutional mission. In the future, researchers may no longer be able to rely solely on their impact factor or publication list to assess their chances on the academic market, and building an individual researcher biography may therefore become more complex than ever before.
In light of these developments, it is particularly important that ECRs inform themselves about assessment procedures and make informed decisions about how and where they publish. Doctoral schools can, and should, play an important and proactive role in supporting ECRs in this endeavour. They can provide training and support supervisors and doctoral candidates in navigating the world of research assessment. The aim is not to ‘play the system’, but to understand where one’s own strengths and interests lie; this will also help in overcoming insecurities and stress about individual career prospects. Open Access is already part of such training in an increasing number of doctoral schools, and it is certainly a good idea to include research assessment in this training, or even to address it separately. Doctoral schools also provide a space for reflection and mutual exchange, and this exchange is particularly valuable: the whole system is changing for everybody, including senior academics. Adapting to these changes requires raising awareness of them, and it is especially important that ECRs share their perspective and provide input. At the end of this process, the academic sector can hopefully offer better prospects for junior researchers while also serving the needs of society.
Dr. Alexander Hasgall is Head of the EUA Council for Doctoral Education, a network of more than 240 European universities from 35 countries, dedicated to the further development and strengthening of doctoral education and research training in Europe.