4S open panel on Academic Evaluation in an Age of “Post truth”

Together with Mario Biagioli and Steve Woolgar, I am convening a panel titled Academic Evaluation in an Age of “Post truth” at the upcoming 4S conference in Boston next week. It consists of three sessions with 13 presentations in all, followed by Michèle Lamont as discussant.

Program: Academic Evaluation in an Age of “Post truth”

Thursday 31 August 2017, 9:00–10:30; 11:00–12:30; 2:00–3:30
Panel convened by Mario Biagioli, Claes-Fredrik Helgesson and Steve Woolgar

Panel abstract

STS has made major contributions in respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance.

What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (nominated as the Oxford Dictionaries word of 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible or important to retain symmetry, impartiality, and agnosticism with a phenomenon which is so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference?

This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

Session 1: “Issues” – Chair: Claes-Fredrik Helgesson

Thu, August 31, 9:00 to 10:30am, Sheraton Boston, 3, Dalton

1.1 The reproducibility crisis and its critics: Scientists’ evaluations of the stability of their findings • Nicole C Nelson, University of Wisconsin-Madison

1.2 Below the By-line: The Curious Practice of Evaluating Career ‘Trajectories’ in Academic Biomedicine • Björn Hammarfelt, University of Borås; Alex Rushforth, CWTS, Leiden University; Sarah de Rijcke, Centre for Science and Technology Studies (CWTS)

1.3 Peer Review in Mathematics: Degrees of Correctness? • Christian Greiffenhagen, The Chinese University of Hong Kong

1.4 Resistance, Opportunism, or Something Else? Gaming and Manipulation of Academic Metrics Systems • Jo Ann Oravec, University of Wisconsin Whitewater and Madison

1.5 Redefining “Publication” and “Evaluation” • Mario Biagioli, UC Davis STS Program & Law School

1.6 Open Q&A on all papers in session 1

Session 2: “Outlooks” – Chair: Mario Biagioli

Thu, August 31, 11:00am to 12:30pm, Sheraton Boston, 3, Dalton

2.1 Evaluating Valorization: Tensions between Truth and Use in Concepts and Practices of Science Policy • Jorrit Smit, Leiden University

2.2 Beyond Managerialism: Academic Rating and Ranking as a Solidarity Saving Device • Mikhail Sokolov, European University at Saint Petersburg, Russia

2.3 Efficiency as Conditional Truth in Research?: Rules of the Road from a Japanese Perspective • William S Bradley, Ryukoku University

2.4 Legitimacy Crises, Politicisation, and Normative STS: Seeking Sincerity in Public Climate Change Debates • Bernhard Isopp, York University

2.5 Mishaps and mistakes in academic evaluation • Claes-Fredrik Helgesson, Linköping University; Steve Woolgar, Linköping University & University of Oxford

2.6 Open Q&A on all papers in session 2

Session 3: “Fixes” – Chair: Steve Woolgar

Thu, August 31, 2:00 to 3:30pm, Sheraton Boston, 3, Dalton

3.1 Temporalities of Truth: Peer Review’s Futurity in Terms of Credit and Debt, Co-Existence vs. Competition • Alexa Faerber, HafenCity University

3.2 Crafting Transparency and Accountability: Evaluation of Models, Metrics and Platforms of the Gates Foundation • Manjari Mahajan, New School University

3.3 Evaluative Inquiry: Toward Experimental Modes of Assessing the Values of Academic Work • Sarah de Rijcke, Centre for Science and Technology Studies (CWTS); Thomas Franssen, University of Amsterdam; Maximilian Fochler, University of Vienna; Tjitske Holtrop; Thed van Leeuwen, Centre for Science & Technology Studies (CWTS), Leiden University; Alex Rushforth, CWTS, Leiden University; Clifford Tatum, CWTS – Leiden University; Paul Wouters, Centre for Science and Technology Studies, Leiden University

3.4 Open Q&A on all papers in session 3

3.5 Discussant: Michèle Lamont, Harvard University, on all three sessions

3.6 Open discussion on the whole panel

Who do you work with? – A reflection on pedigree and the relational in research

“Who do you work with?” The question was posed to me more than 20 years ago. It was asked just as I had sat down with a prominent male professor at a prestigious US university. I had got the appointment by emailing him, and was more than ready to talk about my PhD project and to get his insights into how I could develop it. I had sent him a page outlining my work, to spare him the trouble of reading through the full thesis proposal I had just completed before leaving for my four-month stay in the US. And then this question. I had no idea what it meant, and answered that I had conceived of the project myself and that it had certainly not been handed down to me by some senior scholar. As a visiting PhD student, I had just begun to sense that there actually might be a difference between being a Swedish PhD student, treated almost as faculty at home, and an American grad student.

It took more than a decade before I understood that I had given the wrong answer, and that I had completely misunderstood the question. He had asked for my pedigree, because that was a way to assess whether I was someone worth spending time on. I had thought he was asking about the provenance of the project. Hence, I had insisted that this was my project and not a project concocted by some professor for whom I worked. I guess that what he heard was that there was no one vouching for me. In hindsight, then, he must have pretty quickly concluded that I was not someone worth spending time on. I do not remember anything else from the meeting. He might actually have given me some suggestions for how to proceed with my work. Yet, what I remember is the feeling that he lost interest the moment I answered his first question.

The pivotal moment for my understanding of this question came when I was in the US for a conference. In the lobby during a coffee break, I overheard the same question. This time it was uttered by a US-based female professor, and at the other end of the question was a PhD student who apparently had an appointment with her. Before the student had had time to answer, the professor added: “I guess what I’m asking is, whose ‘kid’ are you?” This utterance flashed me back to that earlier encounter and finally made clear to me that the question was a not-so-subtle probing into a fledgling scholar’s pedigree.

I guess I must be considered to have been simple-minded. I had, at the very least, been incapable of a sufficient degree of reflexivity. I had, after all, already then been well exposed to ideas of networks and relationships. I was well familiar with notions such as the “strength of weak ties” and “structural holes” before entering this professor’s office. I had moreover read “Science in action” and other actor-network classics. Hence, the idea that actors and agency can be understood in terms of networks and relations was far from strange to me, at least as analytical concepts. Yet, it is clear that I had not been able to make any such connection when asked that simple question. I had not been able to put any of the network paraphernalia to work when I took in and answered the question.

I am probably not much smarter now. Yet, I now have a clearer opinion about this specific question. The short version is that I think it is an inappropriate question and that it directs attention in the wrong direction.

I firmly believe that research is a social activity. All we do as scholars is relational. To be meaningful, our research needs to tie in to previous work by others as well as be related to in subsequent contributions. In short, work needs to be part of ongoing conversations. Research is a relational endeavour. Yet, and this is crucial, I do think that the question “Who do you work with?” is a very poor way to evoke research as a relational endeavour. It is poor because it uses relations for establishing status and worth rather than for engaging with ideas and how these might relate to the ideas of others. To put it bluntly: it performs what we could call an “aristocratic” form of a relational view on research, that is, a form where the worth of ideas and people is seen as determined by their pedigree.

Panel proposal for 4S about academic evaluation

A few days before New Year’s Eve, I submitted together with Steve Woolgar and Mario Biagioli a proposal for an open panel about academic evaluation for the upcoming 4S conference in Boston, Aug 30 – Sep 2. We hope that it, if accepted, will attract a wide variety of submissions about the multifaceted valuation practices within academia. (We will get notice of acceptance within a few weeks.) Here is the proposal text:

Academic evaluation in an age of “post truth”

STS has made major contributions in respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance.

What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (nominated as the Oxford Dictionaries word of 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible or important to retain symmetry, impartiality, and agnosticism with a phenomenon which is so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference?

This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

The communal work we expect of one another

There must be a special hell for people who submit articles to journals, publish in them, but refuse to review for these journals.

Zeynep Arsel on Twitter, 18 Dec 2016

The above recent tweet by Zeynep Arsel, a colleague in Canada, resonated with me, and not only because of the specific annoyance she articulated. It also resonated in how it pointed to the precarious way in which communal work is allocated within academia: some of our work is allocated and done for the benefit of collectives that are not defined by any single organisation or hierarchy. This is highly appealing in an idealist sense. Yet, as the tweet articulates, there are instances where the allocation of tasks does not work as expected. The crux, moreover, is that there is no other sanction than hoping that a special hell has been appropriately prepared to host those who do not play along to maintain this precarious arrangement.

The tweet by Zeynep Arsel provides a great opportunity to reflect on the communal work we expect of one another in academia. Let me first think about the idea that we share a workload within collectives rather than only within organisations. Second, I would like to think about the annoyances that apparently can arise from this and what they might tell us about important aspects of the arrangement. I will stick to the topic of journal publishing in this post, but I think the theme of how we distribute and share communal work in academia is highly relevant to other areas of academic practice as well. I recently wrote an editorial note in Valuation Studies that used the valuation practices entailed in scholarly journal publishing as an example of how different valuation practices may be interrelated in intricate ways. Looking at the peer review process as work to be distributed within a collective provides another angle from which to examine scholarly publishing and academia more broadly.

It is, when you think of it, an interesting aspect of academia that we both recognise and accept the idea that we can hand one another work assignments without being in the same organisation or even knowing one another. When submitting a manuscript for peer review, we expect editors to take on the tasks of assessing it and appointing reviewers, and furthermore to ask those reviewers to take on the work of reading and assessing something they might not otherwise have chosen to read. This assignment of work in peer review would not be considered appropriate if it were done within a closed circuit of friends trading favours. In fact, such a closed setup for performing these tasks would raise the suspicion that the review process was inappropriately executed. Hence, the distribution of the workload has in some sense to be done within a dispersed collective to really work. The absence of hierarchy or bilateral reciprocities is important for it to work. Yet, it is also what makes it a weak arrangement if individuals do not play along.

I’ve heard musings from colleagues to the effect that one ought to contribute twice as many review assignments to the system as one submits manuscripts to it. The rationale is that each submitted manuscript is typically read by a couple of reviewers, so reviewing at this rate roughly makes you participate in the review chore in proportion to how much you ask others to do it for your manuscripts. I think it is a reasonable rule of thumb, especially for more established scholars. The notion of “twice as many review assignments as submissions” is also nice in that it makes clear how much work we are actually asking of one another to keep the academic system working. If we think of academia as dispersed work collectives, it is clear that it cannot operate without the sharing of the workload in some fashion.

On to the annoyances, annoyances related to how people participate (or not) in the communal work of journal publishing. What behaviour can be seen as so irritating that we, in the heat of the moment, would like a special hell for certain people? Here are five suggestions in addition to the one caused by someone refusing to review for a journal despite publishing in it.

  • When you, as a reviewer, think that an editor has failed to pre-screen a manuscript before asking you to review it: “Why should I, as a reviewer, read this half-baked manuscript, if the editor clearly hasn’t bothered to give it a proper look?”
  • When you, as an author, are expected to re-write your paper so that it becomes the paper the reviewer would have written: “Your job was to perform the task of assessing the manuscript, not to enroll and transform it so as to fit your own particular research agenda! The task of reviewing is a service to the journal, editor, and author, not a service to your own ego!”
  • When you, again as an author, are expected to align diverging reviewer comments without any guidance from the editor: “How am I supposed to respond to all their contradictory concerns and simultaneously improve the manuscript? The editor needs to give me a break! – Or, at least, some indication of his or her own opinion on critical issues!”
  • When you, as an editor, are expected to be happy to consider a modestly re-edited version of a manuscript as a substantially revised version: “Are you seriously thinking that I should ask the reviewers to have a new look at this version? If you as an author are strapped for time, don’t you think the same is true for me and the reviewers?”
  • When you, as an author, editor or reviewer, realise exactly how much a journal charges for a digital copy of an article you all have worked to develop for free: “Is there no limit to how much these guys think they can financially profit from our collegially performed work?”

Here we already have the contours of more special hells. One special hell for editors failing to pre-screen manuscripts, maybe time-shared with editors not taking on the task of arbitrating between contradictory reviews. Another hell would be for reviewers hi-jacking review assignments for their own agenda, and yet another for authors not responding to requests for substantive revisions of their manuscripts.

If anything, these annoyances point to the precariousness with which this kind of work is distributed and performed within academia. The ease with which I could identify such annoyances suggests not only that everyone is expected to participate and do their fair share. It also suggests that there are expectations as to how we go about doing the work so as to maintain these dispersed collectives.

These academic practices can easily be understood as expressions of a moral economy of journal peer review in academia (as per Lorraine Daston and, not least, Robert Kohler). Yet, I’m not certain the most interesting question is what norms, if any, hold it all together. Maybe it is more interesting to ask with what means we, as individuals and as collectives, can work to sustain and develop good practices. And no, I do not think measures that “incentivise” certain practices, like making a metric out of everyone’s review-to-submission ratio, would do the trick. Better, then, to let off steam when you see what you take as foul play. If we do not try to cultivate some beautiful ideals, who should?


Thomson Reuters Giveth and Thomson Reuters Taketh Away

“Dear Claes-Fredrik,” Last Friday afternoon I was notified in an email from Thomson Reuters that I had been awarded the distinction of being a “Highly Cited Researcher.” I was selected since my work had “been identified as being among the most valuable and significant in the field.” The email further stated that very few earn this distinction and that the process of identifying me involved something called “Essential Science Indicators” and a ranking of the top 1% most cited works for a given subject field.

The award included a downloadable badge for use on my website, LinkedIn profile, and email signature. (Their suggested uses.) The email further provided a link where I could request a physical personalised letter and certificate for display. Finally, the email suggested that I could join the conversation on social media about this award using the hashtag #HighlyCited.

Not bad news for a Friday afternoon when I was desperately crunching tasks so as not to make too many of them reappear on next week’s to-do list. The nice email ended with a really warming sentence in the direct voice of the signer, Vin Caraher:

“I applaud your contributions to the advancement of scientific discovery and innovation and wish you continued success.”

Yet, what Thomson Reuters can give, it can as easily take away. Three and a half hours later I received a second email from Thomson Reuters. This time it was not addressed to me personally, but to “Dear Researcher,” and it was moreover signed by the more anonymous “Clarivate Analytics” rather than by Vin Caraher. Anyhow, the gist of the email was to inform me that the previous one had been sent in error. Here is the full email:

“Dear Researcher,

We recently sent you an email about being named a Highly Cited Researcher. This was sent in error. Please accept our sincere apologies.

We’ve identified the error in our system that caused this and were able to resolve it quickly, ensuring it won’t be repeated.

Highly Cited Researchers derive from papers that are defined as those in the top 1% by citations for their field and publication year in the Web of Science. As leaders in the field of bibliometrics we appreciate the effort required to reach this achievement and celebrate those who have done so this year.

Sincerely, Clarivate Analytics”

No mention of downloadable badges, personalised certificates, or indeed whether I should join (or refrain from joining) the conversation on social media. Furthermore, the final paragraph is a bit hurtful since it really underlines the need to celebrate those who truly are “Highly Cited Researchers”. All of us now anonymous recipients of this retraction email do not belong to this category. Ouch! A quick search online further indicated that I was not alone in having received this award, only to have it retracted a few hours later.

What to think and what to do about this small incident? To me, it underlines how academia is completely submerged in a variety of valuation practices. (Not surprising, given my interests.) This particular assessment is also interesting since it is performed by a firm that has taken it upon itself to perform this assessment annually as a kind of service to the scholarly community. Another example of a private entity performing such assessments in the area of higher education is the ranking of U.S. law schools done by U.S. News and World Report (as studied in the book Engines of Anxiety by Wendy Espeland and Michael Sauder). The incident also reminds us that it is interesting to examine not only how assessments are made, but also what it is like to be an object of assessment. (Valuation Studies will in a few weeks publish an article by Henrik Fürst where he examines how aspiring authors deal with rejection letters.) There is, finally, the theme of valuations going wrong. A blog post at the site Retraction Watch about this incident further noted that the previous highly cited list had included a scholar who has since had 18 retractions to his name. I have received an apology, and there are apologies circulating on Twitter. Yet, one can but wonder how else Thomson Reuters and the like take responsibility when their assessments go wrong. What if I had bought (and consumed) a bottle of champagne to celebrate “my award”?

What to do? I decided to keep the badge, only with a slight modification to indicate that I was a proud holder of the award for only a few hours.

Three reflections on being an editor

It is hard to capture what it means to be an editor in an academic endeavour beyond saying that it is hard and rewarding work. Yet experiences, like the volume on “value practices” I co-edited with Isabelle Dussauge and Francis Lee that came out last year (2015) and the work as one of the founding editors-in-chief of Valuation Studies, have generated some reflections I would like to share.

What is an editor? What does an editor do? Looking up the word editor in a dictionary, you find talk about the editor being someone who edits, or who is in charge of “the running and contents” of, for instance, a periodical. The word can also denote someone who is responsible for the content. In other words, according to the dictionary an editor is someone who edits, is in charge, and takes responsibility for the content. While succinct, such definitions hardly give any real flesh to the editorial role. In short, they do not really describe the practice of being in charge of “the running and contents.”

Drawing on my experiences of doing editorial work I sort my reflections using three questions: What is it like? Why do it? How to do it?

What is it like? I would favour a situational way of describing an editorial venture, highlighting that it includes working with texts, people, ideas and an endless list of mundane and unpredictable aspects that come with working with such matters. Editorial work is extremely interactive. It involves a large number of other parties (authors, reviewers, commissioning editors, copy-editors, co-editors, and so on). There are slices of solitary writing, reading and editing, but the work more often involves interacting with others in meetings, through correspondence and so on. It further includes a fair share of plain administration and planning, the resolving of technical questions, and the like. It appears relational and multifaceted through and through.

Why do it? Being an editor has struck me as committing to a varied and unpredictable workload. (Yet, it is highly predictable that it means more work than predicted.) The broad contours of the end result are known. We all know, for instance, roughly what a book should look like. Yet, the shape of the end result is not known in any detail. The contingencies of the path to get there have in my experience been uncovered as part of the process, and this is actually a large part of the reward. What you sign up for with certainty is not a given end result, but rather an opportunity to make and shape something together with others. This reason lies close to the point of being an academic in the first place.

What can happen, then, is the creation of a process that generates new ideas. In such instances, being an editor is like taking part in a considerably extended brainstorm. For me, this is where the answer lies as to why one should do it: the reason to be an editor is for the moments where the process becomes generative for all involved parties.

How to do it? At the top of my incomplete list would be not to go about it alone. It is helpful to be able to divide certain editorial chores, but even more so to have one or a few others with whom to share the caring for the overall development of the venture and the others involved. There are many instances where you will benefit from having arguments with, and drawing on the judgement of, several people. It further means treating the effort as a form of joint venture that develops a collective property.

Another helpful thing is to have good questions that can guide the work as well as coordinate efforts as the project moves forward. It can be really generative to ask questions such as “What would a good book look like?” and “How can the whole and the parts be adapted to strengthen one another?” Involving contributors and the editorial team in working through such questions may not only improve the answers, but can also be part of realising them.

A third helpful thing is to cultivate your patience. Slowness is not necessarily a sign of failure. It might instead mean that serious work is being done that will show in the end result. Yet, you have at the same time to keep at it, to ensure progression and that the abundance of other commitments does not just pull everything apart. All this means planning, communication, listening in, discussing, and so on. Over and over again.

A brief essay about quality in research

What is quality in research? What is good research? How can we know how to practice it and how to assess it? These questions are almost impossible to answer, and precisely for this reason they are all the more important to talk about.

I wrote a brief essay on the topic of research quality, to be gathered together with similar essays written by my professorial colleagues at tema T. They were intended as conversation pieces for discussions about this both impossible and important topic. My own contribution is titled “On being part of a conversation: An essay to aid in talk about quality in research” and is available to download here.

I’ve been busy

I’ve kept myself busy with other stuff since my previous post last September (and others have lent their helping hands in this). Apart from performing the administrative duties that come with being head of the unit Technology and Social Change, I have worked on a few different research and writing projects, some of which are now done or near completion:

  1. The value practices volume I have edited together with Isabelle Dussauge and Francis Lee is finished in almost every sense now save for the actual printing. It is scheduled to be out at the end of January 2015. The making of this volume has been hard work and a collaborative labour of love. I’m really pleased with how it turned out.
  2. Together with Johan Nilsson, I have just published an article in the Journal of Marketing Management where we do an epistemography of market research, that is, we investigate the indigenous epistemology of this particular field. (Link to PDF for download.) It was a terrific experience where the guidance of reviewers and special issue editors helped us, in a rather short time, to significantly develop and improve the article.

The work to develop the journal Valuation Studies has also continued. Two more issues have been published since this time last year, and another issue, 2(2), will be out before the holidays.

One of the projects going forward will be to write something accessible about the practices of academia, and I aim to publish on such matters on this blog somewhat more regularly than I have been able to in the past year.

On the distinction between disagreeing and practicing asshole discord

Criticism and disagreement are essential to keep scholarly endeavours alive. We need them to develop or revise our ideas and arguments. Yet, not all forms of attack are equal in aiding that development. When I look back at the times when I have been served the most unhelpful criticism, it always seems to have been developed precisely to hit me hard while not engaging with the idea or position I have aimed to articulate. It has been given as a truly condescending treatment, felt as intended to belittle rather than to engage.

While this still happens now and again, I remember in particular one such time when I was a PhD student. It was at an internal workshop where I was scheduled to receive comments from a commentator who was a full professor. The long and the short of his comment was that he ‘totally disagreed with everything in this paper.’ No help there on how to develop the argument, if you for the sake of argument momentarily accepted its basic premise. No suggestion on how to develop or revise the premise, supposing it was broken. The problem with such critique is not primarily that it takes a conflicting position, since that is a necessary part of any disagreement. The problem is that it is articulated in a way that totally blocks any further conversation and learning. What else can you reply than ‘I’m sorry to hear that’ or the less polite remark that you are ‘impressed to encounter such a senior colleague who wears his ignorance with such pride.’ (At the time, I do not think I had the presence of mind to reply at all.)

Asshole critique could thus be defined as the non-stick, non-engaging comments aimed to denigrate and produce discord rather than disagreement. The only difference from the silent treatment is that it aims to maim. Isaac Newton apparently stated that “tact is the art of making your point without making an enemy.” The opposite, I guess, would be practicing asshole discord: the art of denigrating without making a point. Disagreement is far too important to be soiled by such practices.

Reflections on TASP and the passing of academic judgement

I have long been fascinated by how much academic practice centres on the passing of judgement on students and peers. We review articles, and our articles get reviewed. We send in grant applications, and review those of others. We comment on each other’s work in seminars, and we make evaluative comments (sometimes in a low voice) about conference presentations.

Given how important this is for academic life, it is striking how unsystematically we talk about these things. There are scattered resources on topics like article reviewing available on the net, but I have found little systematic literature to help in developing one’s skill in making and communicating academic judgements. I love Michèle Lamont’s “How Professors Think”, where she among other things displays the intricacies of passing judgment on grant applications. It is a rewarding read. Yet, one could wonder why there is not more talk and more resources about the skills of passing judgment.

A while ago I came across three fascinating expert judgements regarding applicants for a professorship at a Swedish university (not Linköping!). The reason these were fascinating was their markedly different styles in communicating their judgement of each applicant. One of the experts made extensive use of tables for presenting the achievements of the top candidates. In these we learnt the number of monographs and articles published by each candidate, as well as the number of citations according to ISI and Google Scholar.

Another expert was more condensed in his communication, but indicated in his introduction that the assessment had taken many sources into account (such as a ranking of journals for assessing the publications of the various candidates).

It was, however, the statement of the third expert that really caught my eye. She used a set of abbreviations for summarising her judgement of each candidate. These abbreviations were presented at the beginning of her statement and included: “TASP (there are some problems)” and “HOML (high on my list).” In all, five such abbreviations were defined.

One take on this would be that such a varied and unstandardised way of making and communicating an academic judgement as important as this one is terrible. Another would be that it is precisely because these judgements are of such importance that we as academics need quite some room in how we make them. Maybe, then, the lack of more advice is all for the better in the end.