4S open panel on Academic Evaluation in an Age of “Post truth”

Together with Mario Biagioli and Steve Woolgar, I am convening a panel titled Academic Evaluation in an Age of “Post truth” at the upcoming 4S conference in Boston next week. It consists of three sessions with 13 presentations in all, followed by Michèle Lamont as discussant.

Program: Academic Evaluation in an Age of “Post truth”

Thursday 31 August 2017, 9:00–10:30; 11:00–12:30; 2:00–3:30
Panel convened by Mario Biagioli, Claes-Fredrik Helgesson and Steve Woolgar

Panel abstract

STS has made major contributions to respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance. What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (named Oxford Dictionaries’ word of the year 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible, or important, to retain symmetry, impartiality, and agnosticism towards a phenomenon so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference? This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

Session 1: “Issues” – Chair: Claes-Fredrik Helgesson

Thu, August 31, 9:00 to 10:30am, Sheraton Boston, 3, Dalton

1.1 The reproducibility crisis and its critics: Scientists’ evaluations of the stability of their findings • Nicole C Nelson, University of Wisconsin-Madison

1.2 Below the By-line: The Curious Practice of Evaluating Career ‘Trajectories’ in Academic Biomedicine • Björn Hammarfelt, University of Borås; Alex Rushforth, CWTS, Leiden University; Sarah de Rijcke, Centre for Science and Technology Studies (CWTS)

1.3 Peer Review in Mathematics: Degrees of Correctness? • Christian Greiffenhagen, The Chinese University of Hong Kong

1.4 Resistance, Opportunism, or Something Else? Gaming and Manipulation of Academic Metrics Systems • Jo Ann Oravec, University of Wisconsin Whitewater and Madison

1.5 Redefining “Publication” and “Evaluation” • Mario Biagioli, UC Davis STS Program & Law School

1.6 Open Q&A on all papers in session 1

Session 2: “Outlooks” – Chair: Mario Biagioli

Thu, August 31, 11:00am to 12:30pm, Sheraton Boston, 3, Dalton

2.1 Evaluating Valorization: Tensions between Truth and Use in Concepts and Practices of Science Policy • Jorrit Smit, Leiden University

2.2 Beyond Managerialism: Academic Rating and Ranking as a Solidarity Saving Device • Mikhail Sokolov, European University at Saint Petersburg, Russia

2.3 Efficiency as Conditional Truth in Research?: Rules of the Road from a Japanese Perspective • William S Bradley, Ryukoku University

2.4 Legitimacy Crises, Politicisation, and Normative STS: Seeking Sincerity in Public Climate Change Debates • Bernhard Isopp, York University

2.5 Mishaps and mistakes in academic evaluation • Claes-Fredrik Helgesson, Linköping University; Steve Woolgar, Linköping University & University of Oxford

2.6 Open Q&A on all papers in session 2

Session 3: “Fixes” – Chair: Steve Woolgar

Thu, August 31, 2:00 to 3:30pm, Sheraton Boston, 3, Dalton

3.1 Temporalities of Truth: Peer Review’s Futurity in Terms of Credit and Debt, Co-Existence vs. Competition • Alexa Faerber, HafenCity University

3.2 Crafting Transparency and Accountability: Evaluation of Models, Metrics and Platforms of the Gates Foundation • Manjari Mahajan, New School University

3.3 Evaluative Inquiry: Toward Experimental Modes of Assessing the Values of Academic Work • Sarah de Rijcke, Centre for Science and Technology Studies (CWTS); Thomas Franssen, University of Amsterdam; Maximilian Fochler, University of Vienna; Tjitske Holtrop; Thed Leeuwen, Centre for Science & Technology Studies (CWTS), Leiden University; Alex Rushforth, CWTS, Leiden University; Clifford Tatum, CWTS, Leiden University; Paul Wouters, Centre for Science and Technology Studies, Leiden University

3.4 Open Q&A on all papers in session 3

3.5 Discussant: Michèle Lamont, Harvard University, on all three sessions

3.6 Open discussion on the whole panel

The communal work we expect of one another

There must be a special hell for people who submit articles to journals, publish in them, but refuse to review for these journals.

Zeynep Arsel on Twitter, 18 Dec 2016

The above tweet by Zeynep Arsel, a colleague in Canada, resonated with me, and not only because of the specific annoyance she articulated. It also pointed to the precarious way in which communal work is allocated within academia: some of the work is allocated and done for the benefit of collectives that are not defined by any single organisation or hierarchy. This is highly appealing in an idealist sense. Yet, as the tweet makes clear, there are instances where the allocation of tasks does not work as expected. The crux, moreover, is that there is no sanction beyond hoping that a special hell has been appropriately prepared for those who do not play along to maintain this precarious arrangement.

The tweet by Zeynep Arsel provides a great opportunity to reflect on the communal work we expect of one another in academia. Let me first consider the idea that we share a workload within collectives rather than only within organisations. Second, I would like to consider the annoyances that can apparently arise from this and what they might tell us about important aspects of the arrangement. I will stick to the topic of journal publishing in this post, but I think the theme of how we distribute and share communal work in academia is highly relevant to other areas of academic practice as well. I recently wrote an editorial note in Valuation Studies that used the valuation practices entailed in scholarly journal publishing as an example of how different valuation practices may be interrelated in intricate ways. Looking at the peer review process as work to be distributed within a collective provides another angle from which to examine scholarly publishing and academia more broadly.

It is, when you think of it, an interesting aspect of academia that we both recognise and accept the idea that we can hand one another work assignments without being in the same organisation or even knowing one another. When submitting a manuscript for peer review, we expect editors to take on the tasks of assessing it and appointing reviewers, and we expect those reviewers to take on the work of reading and assessing something they might not otherwise have chosen to read. Moreover, this assignment of work in peer review would not be considered appropriate if it were done within a closed circuit of friends trading favours. In fact, such a closed setup would raise the suspicion that the review process was inappropriately executed. Hence, the distribution of the workload has, in some sense, to be done within a dispersed collective to really work. The absence of hierarchy and bilateral reciprocities is important for the arrangement to work. Yet, it is also what makes it a weak arrangement when individuals do not play along.

I have heard musings from colleagues to the effect that one ought to contribute twice as many review assignments to the system as one submits manuscripts to it. The rationale is that this would make you participate in the reviewing chore roughly in proportion to how much you ask others to do it for your manuscripts. I think it is a reasonable rule of thumb, especially for more established scholars. The notion of “twice as many review assignments as submissions” is also nice in that it makes clear how much work we are actually asking of one another to keep the academic system working, as the small sketch below illustrates. If we think of academia as dispersed work collectives, it is clear that it cannot operate without the workload being shared in some fashion.
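To make the arithmetic behind the rule of thumb concrete, here is a minimal sketch. The figure of two reviewers per manuscript and the function name are my own illustrative assumptions, not anything prescribed by the rule itself.

```python
# A back-of-the-envelope illustration of the "twice as many reviews as
# submissions" rule of thumb, assuming (hypothetically) that each
# submitted manuscript is sent to two reviewers.

REVIEWS_PER_SUBMISSION = 2  # assumed average number of reviewers per manuscript


def review_balance(manuscripts_submitted: int, reviews_completed: int) -> int:
    """Net balance of communal reviewing work.

    Positive: you have reviewed more than your submissions asked of others.
    Negative: you are drawing on the collective more than you contribute.
    """
    reviews_asked_of_others = manuscripts_submitted * REVIEWS_PER_SUBMISSION
    return reviews_completed - reviews_asked_of_others


# Example: three submissions in a year ask roughly six reviews of others,
# so six completed review assignments would settle the account.
print(review_balance(manuscripts_submitted=3, reviews_completed=6))  # -> 0
```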

On to the annoyances: annoyances related to how people participate (or not) in the communal work of journal publishing. What behaviour is so irritating that we, in the heat of the moment, would wish a special hell upon certain people? Here are five suggestions in addition to the one prompted by someone refusing to review for a journal despite publishing in it.

  • When you, as a reviewer, think that an editor has failed to pre-screen a manuscript before asking you to review it: “Why should I, as a reviewer, read this half-baked manuscript, if the editor clearly hasn’t bothered to give it a proper look?”
  • When you, as an author, are expected to re-write your paper so that it becomes the paper the reviewer would have written: “Your job was to assess the manuscript, not to enroll and transform it so as to fit your own particular research agenda! The review is a service to the journal, editor, and author, not a service to your own ego!”
  • When you, again as an author, are expected to align diverging reviewer comments without any guidance from the editor: “How am I supposed to respond to all their contradictory concerns and simultaneously improve the manuscript? The editor needs to give me a break! – Or, at least, some indication of his or her own opinion on the critical issues!”
  • When you, as an editor, are expected to happily consider a modestly re-edited version of a manuscript as a substantially revised version: “Are you seriously suggesting that I should ask the reviewers to have a new look at this version? If you as an author are strapped for time, don’t you think the same is true for me and the reviewers?”
  • When you, as an author, editor or reviewer, realise exactly how much a journal charges for a digital copy of an article you have all worked to develop for free: “Is there no limit to how much these guys think they can financially profit from our collegially performed work?”

Here, then, we already have the contours of several more special hells. One special hell for editors who fail to pre-screen manuscripts, perhaps time-shared with editors who do not take on the task of arbitrating between contradictory reviews. Another hell for reviewers who hijack review assignments for their own agendas, and yet another for authors who do not respond to requests for substantive revisions of their manuscripts.

If anything, these annoyances point to the precariousness with which this kind of work is distributed and performed within academia. The ease with which I could identify them suggests that there are expectations not only that everyone participates and does their fair share, but also about how we go about doing the work so as to maintain these dispersed collectives.

These academic practices can easily be understood as expressions of a moral economy of journal peer review in academia (as per Lorraine Daston and, not least, Robert Kohler). Yet, I am not certain that the most interesting question is what norms, if any, hold it all together. Maybe it is more interesting to ask by what means we, as individuals and as collectives, can work to sustain and develop good practices. And no, I do not think measures that “incentivise” certain practices, like making a metric out of everyone’s review-to-submission ratio, would do the trick. Better, then, to let off steam when you see what you take to be foul play. If we do not try to cultivate some beautiful ideals, who should?
