4S open panel on Academic Evaluation in an Age of “Post truth”

Together with Mario Biagioli and Steve Woolgar, I am convening a panel titled Academic Evaluation in an Age of “Post truth” at the upcoming 4S conference in Boston next week. It consists of three sessions with 13 presentations in all, followed by Michèle Lamont as discussant.

Program: Academic Evaluation in an Age of “Post truth”

Thursday 31 August 2017, 9:00-10:30; 11:00-12:30; 2:00-3:30
Panel convened by Mario Biagioli, Claes-Fredrik Helgesson and Steve Woolgar

Panel abstract

STS has made major contributions in respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance.

What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (named Oxford Dictionaries’ word of the year 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible or important to retain symmetry, impartiality, and agnosticism with a phenomenon which is so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference?

This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

Session 1: “Issues” – Chair: Claes-Fredrik Helgesson

Thu, August 31, 9:00 to 10:30am, Sheraton Boston, 3, Dalton

1.1 The reproducibility crisis and its critics: Scientists’ evaluations of the stability of their findings • Nicole C Nelson, University of Wisconsin-Madison

1.2 Below the By-line: The Curious Practice of Evaluating Career ‘Trajectories’ in Academic Biomedicine • Björn Hammarfelt, University of Borås; Alex Rushforth, CWTS, Leiden University; Sarah de Rijcke, Centre for Science and Technology Studies (CWTS)

1.3 Peer Review in Mathematics: Degrees of Correctness? • Christian Greiffenhagen, The Chinese University of Hong Kong

1.4 Resistance, Opportunism, or Something Else? Gaming and Manipulation of Academic Metrics Systems • Jo Ann Oravec, University of Wisconsin Whitewater and Madison

1.5 Redefining “Publication” and “Evaluation” • Mario Biagioli, UC Davis STS Program & Law School

1.6 Open Q&A on all papers in session 1

Session 2: “Outlooks” – Chair: Mario Biagioli

Thu, August 31, 11:00am to 12:30pm, Sheraton Boston, 3, Dalton

2.1 Evaluating Valorization: Tensions between Truth and Use in Concepts and Practices of Science Policy • Jorrit Smit, Leiden University

2.2 Beyond Managerialism: Academic Rating and Ranking as a Solidarity Saving Device • Mikhail Sokolov, European University at Saint Petersburg, Russia

2.3 Efficiency as Conditional Truth in Research?: Rules of the Road from a Japanese Perspective • William S Bradley, Ryukoku University

2.4 Legitimacy Crises, Politicisation, and Normative STS: Seeking Sincerity in Public Climate Change Debates • Bernhard Isopp, York University

2.5 Mishaps and mistakes in academic evaluation • Claes-Fredrik Helgesson, Linköping University; Steve Woolgar, Linköping University & University of Oxford

2.6 Open Q&A on all papers in session 2

Session 3: “Fixes” – Chair: Steve Woolgar

Thu, August 31, 2:00 to 3:30pm, Sheraton Boston, 3, Dalton

3.1 Temporalities of Truth: Peer Review’s Futurity in Terms of Credit and Debt, Co-Existence vs. Competition • Alexa Faerber, HafenCity University

3.2 Crafting Transparency and Accountability: Evaluation of Models, Metrics and Platforms of the Gates Foundation • Manjari Mahajan, New School University

3.3 Evaluative Inquiry: Toward Experimental Modes of Assessing the Values of Academic Work • Sarah de Rijcke, Centre for Science and Technology Studies (CWTS); Thomas Franssen, University of Amsterdam; Maximilian Fochler, University of Vienna; Tjitske Holtrop; Thed van Leeuwen, Centre for Science & Technology Studies (CWTS), Leiden University; Alex Rushforth, CWTS, Leiden University; Clifford Tatum, CWTS, Leiden University; Paul Wouters, Centre for Science and Technology Studies, Leiden University

3.4 Open Q&A on all papers in session 3

3.5 Discussant: Michèle Lamont, Harvard University, on all three sessions

3.6 Open discussion on the whole panel

Panel proposal for 4S about academic evaluation

A few days before New Year’s Eve, I submitted, together with Steve Woolgar and Mario Biagioli, a proposal for an open panel about academic valuation for the upcoming 4S conference in Boston, Aug 30 – Sep 2. We hope that, if accepted, it will attract a wide variety of submissions about the multifaceted valuation practices within academia. (We will get notice of acceptance within a few weeks.) Here is the proposal text:

Academic evaluation in an age of “post truth”

STS has made major contributions in respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance.

What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (named Oxford Dictionaries’ word of the year 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible or important to retain symmetry, impartiality, and agnosticism with a phenomenon which is so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference?

This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

The communal work we expect of one another

There must be a special hell for people who submit articles to journals, publish in them, but refuse to review for these journals.

Zeynep Arsel on Twitter, 18 Dec 2016

The above tweet by Zeynep Arsel, a colleague in Canada, resonated with me, and not only because of the specific annoyance she articulated. It also pointed to the precarious way in which communal work is allocated within academia: some work is allocated and done for the benefit of collectives that are not defined by any single organisation or hierarchy. This is highly appealing in an idealist sense. Yet, as the tweet articulates, there are instances where the allocation of tasks does not work as expected. The crux, moreover, is that there is no sanction other than hoping that a special hell has been appropriately prepared to host those who do not play along to maintain this precarious arrangement.

The tweet by Zeynep Arsel provides a great opportunity to reflect on the communal work we expect of one another in academia. Let me first consider the idea that we share a workload within collectives rather than only within organisations. Second, I would like to consider the annoyances that apparently can arise from this arrangement and what they might tell us about its important aspects. I will stick to the topic of journal publishing in this post, but I think the theme of how we distribute and share communal work in academia is highly relevant to other areas of academic practice as well. I recently wrote an editorial note in Valuation Studies that used the valuation practices entailed in scholarly journal publishing as an example of how different valuation practices may be interrelated in intricate ways. Looking at the peer review process as work to be distributed within a collective provides another angle from which to examine scholarly publishing and academia more broadly.

It is, when you think about it, an interesting aspect of academia that we both recognise and accept the idea that we can hand one another work assignments without being in the same organisation or even knowing one another. When submitting a manuscript for peer review, we expect editors to take on the tasks of assessing it and appointing reviewers, and furthermore to ask those reviewers to take on the work of reading and assessing something they might not otherwise have chosen to read. Moreover, this assignment of work would not be considered appropriate if it were done within a closed circuit of friends trading favours. In fact, such a closed setup for performing these tasks would raise the suspicion that the review process was inappropriately executed. Hence, the distribution of the workload has, in some sense, to be done within a dispersed collective to really work. The absence of hierarchy or bilateral reciprocities is important for the arrangement to work. Yet it is also what makes it a weak arrangement if individuals do not play along.

I’ve heard musings from colleagues to the effect that one ought to contribute twice as many review assignments to the system as one submits manuscripts to it. The rationale is that each manuscript you submit typically asks two or more reviewers for their time, so reviewing at this rate makes you participate in the review chore roughly in proportion to how much you ask others to do it for your manuscripts. I think it is a reasonable rule of thumb, especially for more established scholars. The notion of “twice as many review assignments as submissions” is also nice in that it makes clear how much work we are actually asking of one another to have a working academic system. If we think of academia as dispersed work collectives, it is clear that academia cannot operate without the sharing of the workload in some fashion.
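If one were, tongue in cheek, to turn this rule of thumb into a small calculation (a hypothetical sketch of my own; the function name and the default of two reviewers per submitted manuscript are my assumptions, not anyone’s policy), it might look like this:

```python
def reviews_owed(reviews_done: int, manuscripts_submitted: int,
                 reviewers_per_manuscript: int = 2) -> int:
    """Rule-of-thumb ledger: each submission asks roughly two reviewers
    for their time, so doing about two reviews per submission balances
    what one takes from the collective against what one gives back."""
    return manuscripts_submitted * reviewers_per_manuscript - reviews_done

# Example: three submissions and four completed reviews this year
print(reviews_owed(reviews_done=4, manuscripts_submitted=3))  # prints 2
```

A positive result would mean one still “owes” the collective some reviews; fortunately, no journal keeps such a ledger.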

On to the annoyances, then: annoyances related to how people participate (or not) in the communal work of journal publishing. What behaviour is so irritating that we, in the heat of the moment, would wish a special hell for certain people? Here are five suggestions in addition to the one caused by someone refusing to review for a journal despite publishing in it.

  • When you, as a reviewer, think that an editor has failed to pre-screen a manuscript before asking you to review it: “Why should I, as a reviewer, read this half-baked manuscript, if the editor clearly hasn’t bothered to give it a proper look?”
  • When you, as an author, are expected to re-write your paper so that it becomes the paper the reviewer would have written: “Your job was to perform the task of assessing the manuscript, not to enroll and transform it so as to fit your own particular research agenda! The task of reviewing is a service to the journal, editor, and author, not a service to your own ego!”
  • When you, again as an author, are expected to align diverging reviewer comments without any guidance from the editor: “How am I supposed to respond to all their contradictory concerns and simultaneously improve the manuscript? The editor needs to give me a break! – Or, at least, some indication of his or her own opinion on critical issues!”
  • When you, as an editor, are expected to be happy to consider a modestly re-edited version of a manuscript as a substantially revised version: “Are you seriously suggesting that I should ask the reviewers to have a new look at this version? If you as an author are strapped for time, don’t you think the same is true for me and the reviewers?”
  • When you, as an author, editor or reviewer, realise exactly how much a journal charges for a digital copy of an article you all have worked to develop for free: “Is there no limit to how much these guys think they can financially profit from our collegially performed work?”

Here, then, we already have the contours of several more special hells. One special hell for editors failing to pre-screen manuscripts, maybe time-shared with editors not taking on the task of arbitrating between contradictory reviews. Another hell would be for reviewers hi-jacking review assignments for their own agendas, and yet another for authors not responding to requests for substantive revisions of their manuscripts.

If anything, these annoyances point to the precariousness with which this kind of work is distributed and performed within academia. The ease with which I could identify them suggests that there are not only expectations that everyone participate and do their fair share. There are also expectations as to how we go about doing the work so as to maintain these dispersed collectives.

These academic practices can easily be understood as expressions of a moral economy of journal peer review in academia (as per Lorraine Daston and, not least, Robert Kohler). Yet I’m not certain that the most interesting question is what norms, if any, hold it all together. Maybe it is more interesting to ask by what means we, as individuals and as collectives, can work to sustain and develop good practices. And no, I do not think measures that “incentivise” certain practices, like making a metric out of everyone’s review-to-submission ratio, would do the trick. Better, then, to let off steam when you see what you take to be foul play. If we do not try to cultivate some beautiful ideals, who should?


Thomson Reuters Giveth and Thomson Reuters Taketh Away

“Dear Claes-Fredrik,” began an email from Thomson Reuters last Friday afternoon, notifying me that I had been awarded the distinction of being a “Highly Cited Researcher.” I was selected since my work had “been identified as being among the most valuable and significant in the field.” The email further stated that very few earn this distinction and that the process of identifying me involved something called “Essential Science Indicators” and a ranking of the top 1% most cited works for a given subject field.

The award included a downloadable badge for use on my website, LinkedIn profile, and email signature. (Their suggested uses.) The email further provided a link where I could request a physical personalised letter and certificate for display. Finally, the email suggested that I could join the conversation on social media about this award using the hashtag #HighlyCited.

Not bad news for a Friday afternoon when I was desperately crunching through tasks so that not too many of them would reappear on next week’s to-do list. The nice email ended with a genuinely warming sentence in the direct voice of the signer, Vin Caraher:

“I applaud your contributions to the advancement of scientific discovery and innovation and wish you continued success.”

Yet what Thomson Reuters can give, it can as easily take away. Three and a half hours later I received a second email from Thomson Reuters. This time it was not addressed to me personally, but to “Dear Researcher,” and it was moreover signed by the more anonymous “Clarivate Analytics” rather than by Vin Caraher. Anyhow, the gist of the email was to inform me that the previous one had been sent in error. Here is the full email:

“Dear Researcher,

We recently sent you an email about being named a Highly Cited Researcher. This was sent in error. Please accept our sincere apologies.

We’ve identified the error in our system that caused this and were able to resolve it quickly, ensuring it won’t be repeated.

Highly Cited Researchers derive from papers that are defined as those in the top 1% by citations for their field and publication year in the Web of Science. As leaders in the field of bibliometrics we appreciate the effort required to reach this achievement and celebrate those who have done so this year.

Sincerely, Clarivate Analytics”

No mention of downloadable badges, personalised certificates, or indeed of whether I should join (or refrain from joining) the conversation on social media. Furthermore, the final paragraph is a bit hurtful, since it really underlines the need to celebrate those who truly are “Highly Cited Researchers”. All of us now anonymous recipients of this retraction email do not belong to that category. Ouch! A quick search online further indicated that I was not alone in having received this award, only to have it retracted a few hours later.

What to think and what to do about this small incident? To me it underlines how academia is completely submerged in a variety of valuation practices. (Not surprising, given my interests.) This particular assessment is also interesting since it is performed by a firm that has taken it upon itself to perform the assessment annually as a kind of service to the scholarly community. Another example of a private entity performing such assessments in the area of higher education is the ranking of U.S. law schools done by U.S. News and World Report (as studied in the book Engines of Anxiety by Wendy Espeland and Michael Sauder). The incident is also a reminder that it is interesting to examine not only how assessments are made, but also what it is like to be an object of assessment. (Valuation Studies will in a few weeks publish an article by Henrik Fürst in which he examines how aspiring authors deal with rejection letters.) There is, finally, the theme of valuations going wrong. A blog post at the site Retraction Watch about this incident noted that a previous highly cited list had included a scholar who has since had 18 retractions to his name. I have received an apology, and there are apologies circulating on Twitter. Yet one can but wonder how else Thomson Reuters and the like take responsibility when their assessments go wrong. What if I had bought (and consumed) a bottle of champagne to celebrate “my award”?

What to do? I decided to keep the badge, only with a slight modification to indicate that I was a proud holder of the award for a few hours only.