The Australian Football League (AFL) has some rather curious customs. For example, although most football codes have a provision for sending players off when they engage in dangerous play, the AFL does not. The perpetrator can be reported, face a tribunal later in the week and be rubbed out of future matches. But during the game in which the incident occurred, they remain on the field. Then there’s the Brownlow Medal, the highest individual honour in the game and possibly the most boring television event on earth, but a huge deal for those invested in the sport. Unusually, votes for the Brownlow Medal are cast by the umpires, not by players or fans. Even more unusually, the award citation is for the “fairest and best” player.
Not “best”.
Not “best and fairest”. But “fairest and best”.
Now I’d be the last person to suggest that science or academia should follow the AFL’s lead in its operations. After all, there is even less diversity in appointments to decision-making positions in the AFL than there is in academia in general. On the other hand, perhaps there’s a lesson in linking the criteria of “fairest” and “best”.
Academic research has few if any incentives for “fairness”. Rather, the reverse. A recent commentary on a paper that evaluated the probability of becoming a principal investigator noted that success was associated with being male, selfish, elite and publishing in journals with high impact factors. By now we are all too familiar with the first of these associations: the gendered nature of science. Discrimination, prejudice, implicit bias, stereotype threat, sexism, harassment, lack of role models, poor mentoring and disproportionate caring roles, inter alia, contribute to the lack of progression and poorer pay for women.
But selfish? Why does the system select for selfishness?
Over the past 20 years or so, through the invention and evolution of more and more elaborate bibliometrics, it has become commonplace to assess a scientist’s worth by specific numbers, sometimes just one number, and generally relating to scientific publications. Number of papers. Number of authors. Author position. Journal impact factor. Citation rate. H-index.
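To see just how reductive these numbers are, consider the H-index, arguably the most influential of them: the largest h such that a researcher has h papers each cited at least h times. A minimal sketch in Python (with entirely hypothetical citation counts) shows how much information the compression discards:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break     # every later paper has fewer citations than its rank
    return h

# Two very different careers, identical single-number "worth"
# (citation counts are invented for illustration):
steady_contributor = [6, 6, 5, 5, 4, 4, 4, 3]
one_big_hit = [250, 5, 4, 4, 1, 0, 0, 0]
print(h_index(steady_contributor))  # 4
print(h_index(one_big_hit))         # 4
```

Two researchers with quite different publication histories collapse to the same integer, and nothing about mentoring, teaching or collegiality enters the calculation at all.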
Over this same period, the number of available academic positions has lagged behind the growth in the number of PhDs awarded (see, for example, data from the US). This set of circumstances has made for an unsustainable, hyper-competitive environment in research and academia. To survive, scientists in universities and research institutes are required to maximise the metrics by which they are measured. Specifically, they are driven to generate more papers, more first or senior author papers, and more papers in high impact factor journals.
Although publication outputs contribute to a criterion of “best”, there is no real assessment of “fairest” in this system. Since fairness is not assessed, there is little if any deterrent to crossing the boundaries of reasonableness and fair-mindedness to gain a potential advantage. Those who treat work colleagues as opponents in a competition, those who intimidate junior colleagues, and those who coerce to secure a better author position are often rewarded by the system for these unreasonable and unfair behaviours.
In his wonderfully insightful comment on this issue in 2007, Peter Lawrence of the University of Cambridge wrote:
“…the advantages bestowed on those who are prepared to show off and to exploit others have acted against modest and gentle people of all kinds — yet there is no evidence, presumption or likelihood that less pushy people are less creative. As less aggressive people are predominantly women it should be no surprise that, in spite of an increased proportion of women entering biomedical research as students, there has been little, if any, increase in the representation of women at the top. Gentle people of both sexes vote with their feet and leave a profession that they, correctly, perceive to discriminate against them. Not only do we lose many original researchers, I think science would flourish more in an understanding and empathetic workplace”.
One might think that a broader assessment of track record could overcome a reliance on publication-focused metrics. After all, the Australian Research Council (ARC) and the National Health and Medical Research Council (NHMRC) often suggest a number of other criteria for evaluating grants and fellowships, including, for example, invited lectures, prizes and awards, research translation (including patents and IP licences), consultancies, policy advice, contributions to research training, contributions to professional activities, and industry engagement. So where is the problem?
In my own experience on panels over the past 15 years, metrics relating to publications often override other contributions to a track record score. There may be many reasons for this; I’ll highlight the three I think matter most. First, publication metrics contribute to national and international rankings of higher education institutions: a higher ranking leads to more international students, which brings more money into the university, which supports more research, which leads to higher international rankings. In essence, publications are highly valued globally, and it’s difficult to argue that other outputs are as important when they carry little weight in those international rankings.
Second, an individual’s contributions to community outreach, undergraduate teaching, committee work and policy development are not only valued less than research outputs; they are also more difficult to measure and assess. It’s easier to ignore those contributions in a track record assessment, or to use them simply to discriminate between two closely ranked applicants.
And third, the score for track record is often a single number. That means it’s up to the individual assessor to decide how to weight each of these contributions. How much weight do you give to mentoring compared with publications? To science communication compared with an early career researcher prize? Without specific direction on the weighting of each component, the one that is easiest to measure and compare – bibliometrics – will dominate the scores provided by assessors.
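A toy model makes the point (all weights and scores here are invented, not how any panel actually operates): when the weighting is left implicit, two assessors looking at the same applicant can reach quite different verdicts, and the gap is driven by the component with the most legible numbers.

```python
# Hypothetical component scores for one applicant, each out of 10.
applicant = {"publications": 9, "mentoring": 7, "outreach": 8}

# Each assessor silently applies their own implicit weights.
assessor_weights = {
    "bibliometrics-minded": {"publications": 0.8, "mentoring": 0.1, "outreach": 0.1},
    "holistic":             {"publications": 0.4, "mentoring": 0.3, "outreach": 0.3},
}

for name, weights in assessor_weights.items():
    score = sum(weights[k] * applicant[k] for k in applicant)
    print(f"{name}: {score:.1f}")
# bibliometrics-minded: 8.7
# holistic: 8.1
```

The single number reported to the panel hides the weighting entirely, so the assessor who leans hardest on the easiest-to-count component faces no pressure to justify that choice.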
One way to change the way things are is to change the drivers of behaviour. At present, we value “best” almost exclusively by publication outputs, at least for early career researchers.
In Australia, the ARC and NHMRC could take a lead in changing bibliometrics-dominated scoring of track record – and the behaviours associated with it – by requiring separate scores for each of several track record contributions. Institutions, too, could modify their evaluations of staff – some even use single-number values to assess their employees! – by incorporating measures of, for example, public good and community engagement. But the biggest change might come when international rankings of universities and research institutes include some measure of “fairest” among the ranking criteria. If institutions were ranked not just on publications, grant income and Nobel prizes, but also on a demonstrated supportive environment and inclusive organisational culture – the long-term success of their trainees, their engagement with the public, the diversity of their faculty – what behaviours would that drive? Would it lead to intense competition to be the best at supporting diversity and empathy as well as creativity?
Now that’s one countdown for “fairest and best” that I would definitely stay up late to watch.