4S open panel on Academic Evaluation in an Age of “Post truth”

Together with Mario Biagioli and Steve Woolgar, I am convening a panel titled Academic Evaluation in an Age of “Post truth” at the upcoming 4S conference in Boston next week. It consists of three sessions with 13 presentations in all, followed by Michèle Lamont as discussant.

Program: Academic Evaluation in an Age of “Post truth”

Thursday 31 August 2017, 9:00–10:30; 11:00–12:30; 2:00–3:30
Panel convened by Mario Biagioli, Claes-Fredrik Helgesson and Steve Woolgar

Panel abstract

STS has made major contributions to respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance. What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (named Oxford Dictionaries’ Word of the Year 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible or important to retain symmetry, impartiality, and agnosticism towards a phenomenon which is so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference? This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

Session 1: “Issues” – Chair: Claes-Fredrik Helgesson

Thu, August 31, 9:00 to 10:30am, Sheraton Boston, 3, Dalton

1.1 The reproducibility crisis and its critics: Scientists’ evaluations of the stability of their findings • Nicole C Nelson, University of Wisconsin-Madison

1.2 Below the By-line: The Curious Practice of Evaluating Career ‘Trajectories’ in Academic Biomedicine • Björn Hammarfelt, University of Borås; Alex Rushforth, CWTS, Leiden University; Sarah de Rijcke, Centre for Science and Technology Studies (CWTS)

1.3 Peer Review in Mathematics: Degrees of Correctness? • Christian Greiffenhagen, The Chinese University of Hong Kong

1.4 Resistance, Opportunism, or Something Else? Gaming and Manipulation of Academic Metrics Systems • Jo Ann Oravec, University of Wisconsin Whitewater and Madison

1.5 Redefining “Publication” and “Evaluation” • Mario Biagioli, UC Davis STS Program & Law School

1.6 Open Q&A on all papers in session 1

Session 2: “Outlooks” – Chair: Mario Biagioli

Thu, August 31, 11:00am to 12:30pm, Sheraton Boston, 3, Dalton

2.1 Evaluating Valorization: Tensions between Truth and Use in Concepts and Practices of Science Policy • Jorrit Smit, Leiden University

2.2 Beyond Managerialism: Academic Rating and Ranking as a Solidarity Saving Device • Mikhail Sokolov, European University at Saint Petersburg, Russia

2.3 Efficiency as Conditional Truth in Research?: Rules of the Road from a Japanese Perspective • William S Bradley, Ryukoku University

2.4 Legitimacy Crises, Politicisation, and Normative STS: Seeking Sincerity in Public Climate Change Debates • Bernhard Isopp, York University

2.5 Mishaps and mistakes in academic evaluation • Claes-Fredrik Helgesson, Linköping University; Steve Woolgar, Linköping University & University of Oxford

2.6 Open Q&A on all papers in session 2

Session 3: “Fixes” – Chair: Steve Woolgar

Thu, August 31, 2:00 to 3:30pm, Sheraton Boston, 3, Dalton

3.1 Temporalities of Truth: Peer Review’s Futurity in Terms of Credit and Debt, Co-Existence vs. Competition • Alexa Faerber, HafenCity University

3.2 Crafting Transparency and Accountability: Evaluation of Models, Metrics and Platforms of the Gates Foundation • Manjari Mahajan, New School University

3.3 Evaluative Inquiry: Toward Experimental Modes of Assessing the Values of Academic Work • Sarah de Rijcke, Centre for Science and Technology Studies (CWTS); Thomas Franssen, University of Amsterdam; Maximilian Fochler, University of Vienna; Tjitske Holtrop; Thed van Leeuwen, Centre for Science & Technology Studies (CWTS), Leiden University; Alex Rushforth, CWTS, Leiden University; Clifford Tatum, CWTS – Leiden University; Paul Wouters, Centre for Science and Technology Studies, Leiden University

3.4 Open Q&A on all papers in session 3

3.5 Discussant: Michèle Lamont, Harvard University, on all three sessions

3.6 Open discussion on the whole panel

Reflections on TASP and the passing of academic judgement

I have long been fascinated by how much academic practice centres on the passing of judgement on students and peers. We review articles, and our articles get reviewed. We send in grant applications, and review those of others. We comment on each other’s work in seminars, and we make evaluative comments (sometimes in a low voice) about conference presentations.

Given how important this is for academic life, it is striking how unsystematically we talk about these things. There are scattered resources on topics like article reviewing available on the net, but I have found little systematic literature to help in developing one’s skill in making and communicating academic judgements. I love Michèle Lamont’s *How Professors Think*, where she, among other things, displays the intricacies of passing judgement on grant applications. It is a rewarding read. Yet one could wonder why there are not more discussions and resources about the skills of passing judgement.

A while ago I came across three fascinating expert judgements regarding applicants for a professorship at a Swedish university (not Linköping!). What made them fascinating was the markedly different styles in which the experts communicated their judgement of each applicant. One of the experts made extensive use of tables for presenting the achievements of the top candidates. From these we learnt the number of monographs and articles published by each candidate as well as the number of citations according to ISI and Google Scholar.

Another expert was more condensed in his communication, but indicated in his introduction that the assessment had taken many sources into account (such as a ranking of journals for assessing the publications of the various candidates).

It was, however, the statement of the third expert that really caught my eye. She used a set of abbreviations for summarising her judgement of each candidate. These abbreviations were presented at the beginning of her statement and included: “TASP (there are some problems)” and “HOML (high on my list).” In all, five such abbreviations were defined.

One take on this would be that such a varied and unstandardised way of making and communicating an academic judgement as important as this one is terrible. Another would be that it is precisely because these judgements are so important that we as academics must have considerable room in how we make them. Maybe, then, the lack of more advice is all for the better in the end.