4S open panel on Academic Evaluation in an Age of “Post truth”

Together with Mario Biagioli and Steve Woolgar, I am convening a panel titled Academic Evaluation in an Age of “Post truth” at the upcoming 4S conference in Boston next week. It consists of three sessions with 13 presentations in all, followed by Michèle Lamont as discussant.

Program: Academic Evaluation in an Age of “Post truth”

Thursday 31 August 2017, 9:00–10:30; 11:00–12:30; 2:00–3:30
Panel convened by Mario Biagioli, Claes-Fredrik Helgesson and Steve Woolgar

Panel abstract

STS has made major contributions in respecifying the key concept of “values”. We can no longer take for granted that values are given or that they straightforwardly determine action. We know instead how much is involved in making, articulating, enacting and manipulating values. In academic work, such practices abound: we know that determinations of academic value involve contingent practices of evaluating, rating and ranking performance. What are the implications of this understanding of academic evaluation in the contemporary situation, where standards of truth are allegedly undergoing significant modification? In a situation of “post truth” (Oxford Dictionaries’ word of the year for 2016), what contributions can our pragmatist orientation to evaluation make, and how? Is it possible or important to retain symmetry, impartiality, and agnosticism with a phenomenon which is so close to home? Is this simply to replay the contention that critique has run out of steam, or are we witnessing the emergence of practices of evaluation that are inherently external to regimes of truth and thus of critique? Can STS make interventions that make a difference? This panel invites papers which address the practices and transformations of academic evaluation in the age of post truth. These practices include, but extend considerably beyond, the use of diverse metrics and indicators. For example, the panel invites discussion of peer reviewing, grant proposal assessments, paper grading, appointments and promotions, awards and prizes, book endorsements and other professional practices. We welcome papers which discuss more (or less) appropriate future modes of academic evaluation.

Session 1: “Issues” – Chair: Claes-Fredrik Helgesson

Thu, August 31, 9:00 to 10:30am, Sheraton Boston, 3, Dalton

1.1 The reproducibility crisis and its critics: Scientists’ evaluations of the stability of their findings • Nicole C Nelson, University of Wisconsin-Madison

1.2 Below the By-line: The Curious Practice of Evaluating Career ‘Trajectories’ in Academic Biomedicine • Björn Hammarfelt, University of Borås; Alex Rushforth, CWTS, Leiden University; Sarah de Rijcke, Centre for Science and Technology Studies (CWTS)

1.3 Peer Review in Mathematics: Degrees of Correctness? • Christian Greiffenhagen, The Chinese University of Hong Kong

1.4 Resistance, Opportunism, or Something Else? Gaming and Manipulation of Academic Metrics Systems • Jo Ann Oravec, University of Wisconsin Whitewater and Madison

1.5 Redefining “Publication” and “Evaluation” • Mario Biagioli, UC Davis STS Program & Law School

1.6 Open Q&A on all papers in session

Session 2: “Outlooks” – Chair: Mario Biagioli

Thu, August 31, 11:00am to 12:30pm, Sheraton Boston, 3, Dalton

2.1 Evaluating Valorization: Tensions between Truth and Use in Concepts and Practices of Science Policy • Jorrit Smit, Leiden University

2.2 Beyond Managerialism: Academic Rating and Ranking as a Solidarity Saving Device • Mikhail Sokolov, European University at Saint Petersburg, Russia

2.3 Efficiency as Conditional Truth in Research?: Rules of the Road from a Japanese Perspective • William S Bradley, Ryukoku University

2.4 Legitimacy Crises, Politicisation, and Normative STS: Seeking Sincerity in Public Climate Change Debates • Bernhard Isopp, York University

2.5 Mishaps and mistakes in academic evaluation • Claes-Fredrik Helgesson, Linköping University and Steve Woolgar, Linköping University & University of Oxford

2.6 Open Q&A on all papers in session 2

Session 3: “Fixes” – Chair: Steve Woolgar

Thu, August 31, 2:00 to 3:30pm, Sheraton Boston, 3, Dalton

3.1 Temporalities of Truth: Peer Review’s Futurity in Terms of Credit and Debt, Co-Existence vs. Competition • Alexa Faerber, HafenCity University

3.2 Crafting Transparency and Accountability: Evaluation of Models, Metrics and Platforms of the Gates Foundation • Manjari Mahajan, New School University

3.3 Evaluative Inquiry: Toward Experimental Modes of Assessing the Values of Academic Work • Sarah de Rijcke, Centre for Science and Technology Studies (CWTS); Thomas Franssen, University of Amsterdam; Maximilian Fochler, University of Vienna; Tjitske Holtrop; Thed van Leeuwen, Centre for Science & Technology Studies (CWTS), Leiden University; Alex Rushforth, CWTS, Leiden University; Clifford Tatum, CWTS, Leiden University; Paul Wouters, Centre for Science and Technology Studies, Leiden University

3.4 Open Q&A on all papers in session 3

3.5 Discussant: Michèle Lamont, Harvard University, on all three sessions

3.6 Open discussion on the whole panel



Who do you work with? – A reflection on pedigree and the relational in research

“Who do you work with?” The question was posed to me more than 20 years ago. It was asked just as I had sat down with a prominent male professor at a prestigious US university. I had got the appointment by emailing him, and was more than ready to talk about my PhD project and to get his insights into how I could develop it. I had sent him a page outlining my work, to spare him the trouble of reading through the full thesis proposal I had just completed before leaving for my four-month stay in the US. And then this question. I had no idea what it meant, and answered that I had conceived of the project myself and that it had certainly not been handed down to me by some senior scholar. As a visiting PhD student I had just begun to sense that there actually might be a difference between being a Swedish PhD student, treated almost as faculty at home, and an American grad student.

It took more than a decade before I understood that I had given the wrong answer, and that I had completely misunderstood the question. He had asked for my pedigree, because that was a way to assess whether I was someone worth spending time on. I had thought he was asking about the provenance of the project. Hence, I had insisted that this was my project and not a project concocted by some professor for whom I worked. I guess what he heard was that there was no one vouching for me. In hindsight, he must have pretty quickly concluded that I was not someone worth spending time on. I do not remember anything else from the meeting. He might actually have given me some suggestions for how to proceed with my work. Yet, what I remember is the feeling that he lost interest the moment I answered his first question.

The pivotal moment for my understanding of this question came when I was in the US for a conference. In the lobby during a coffee break I overheard the same question. This time it was uttered by a US-based female professor, and at the other end of the question was a PhD student who apparently had an appointment with her. Before the student had had time to answer, the professor added: “I guess what I’m asking is, whose ‘kid’ are you?” This utterance took me straight back to that earlier encounter and finally made clear to me that the question was a not-so-subtle probing into a fledgling scholar’s pedigree.

I guess I must be considered to have been simple-minded. I had, at the very least, been incapable of a sufficient degree of reflexivity. I had, after all, already been well exposed to ideas of networks and relationships. I was well familiar with notions such as the “strength of weak ties” and “structural holes” before entering this professor’s office. I had moreover read “Science in Action” and other actor-network classics. Hence, the idea that actors and agency can be understood in terms of networks and relations was far from strange to me, at least as analytical concepts. Yet, it is clear that I had not been able to make any such connections when asked that simple question. I had not been able to put any of the network paraphernalia to work when I took in and answered the question.

I am probably not much smarter now. Yet, I do now have a clearer opinion on what I think about this specific question. The short version is that I think it is an inappropriate question and that it directs attention in the wrong direction.

I firmly believe that research is a social activity. All we do as scholars is relational. To be meaningful, our research needs to tie in to previous work by others as well as be taken up in subsequent contributions. In short, work needs to be part of ongoing conversations. Research is a relational endeavour. Yet, and this is crucial, I do think that the question “Who do you work with?” is a very poor way to evoke research as a relational endeavour. It is poor because it uses relations for establishing status and worth rather than for engaging with ideas and how these might relate to the ideas of others. To put it bluntly: it performs what we could call an “aristocratic” form of a relational view on research, that is, a form where the worth of ideas and people is seen as determined by their pedigree.

A brief essay about quality in research

What is quality in research? What is good research? How can we know how to practise it and how to assess it? These questions are almost impossible to answer, and precisely for this reason they are all the more important to talk about.

I wrote a brief essay on the topic of research quality, to be gathered together with similar essays written by my professorial colleagues at tema T. The essays were intended as conversation pieces for discussions about this both impossible and important topic. My own contribution was titled “On being part of a conversation: An essay to aid in talk about quality in research” and is available to download here.

Reflections on TASP and the passing of academic judgement

I have long been fascinated by how much academic practice centres on the passing of judgement on students and peers. We review articles, and our articles get reviewed. We send in grant applications, and review those of others. We comment on each other’s work in seminars, and we make evaluative comments (sometimes in a low voice) about conference presentations.

Given how important this is for academic life, it is striking how unsystematically we talk about these things. There are scattered resources on topics like article reviewing available on the net, but I have found little systematic literature to help in developing one’s skill in making and communicating academic judgements. I love Michèle Lamont’s *How Professors Think*, where, among other things, she displays the intricacies of passing judgement on grant applications. It is a rewarding read. Yet, one could wonder why there is not more talk about, and more resources for, the skills of passing judgement.

A while ago I came across three fascinating expert judgements regarding applicants for a professorship at a Swedish university (not Linköping!). What made them fascinating were the markedly different styles in which the experts communicated their judgement of each applicant. One of the experts made extensive use of tables for presenting the achievements of the top candidates. From these we learnt the number of monographs and articles published by each candidate as well as the number of citations according to ISI and Google Scholar.

Another expert was more condensed in his communication, but indicated in his introduction that the assessment had taken many sources into account (such as a ranking of journals for assessing the publications of the various candidates).

It was, however, the statement of the third expert that really caught my eye. She used a set of abbreviations for summarising her judgement of each candidate. These abbreviations were presented at the beginning of her statement and included “TASP (there are some problems)” and “HOML (high on my list)”. In all, five such abbreviations were defined.

One take on this would be that it is terrible to have such a varied and unstandardized way of making and communicating an academic judgement as important as this one. Another would be that it is precisely because these judgements are so important that we as academics need quite some room in how we make them. Maybe, then, the lack of more advice is all for the better in the end.