Mon, 15 Mar 2004 20:27:08 GMT

Deans can count. Erik Duval, on measuring the quality of academic communications:

What we really ought to try and measure includes more subtle things,
like

  • how useful was this publication for others?
  • how much effect did it actually have on the field?
  • etc.

Of course, this is much harder to measure – though the
answers to both questions above would be “not at all” and “none whatsoever” for
the great majority of publications, I am afraid. Questions like those above hint
at much more relevant issues, I believe, but it seems like we prefer ease of
measurement over relevancy…

Well, maybe not all researchers have that preference, but the people
who administer academia sure do. William Arms, in his article “Quality Control in Scholarly Publishing on the Web”, quoted the saying “Our dean can't read, but he sure can count”…

[Seb's Open Research]

——-

I doubt anyone really wants to measure relevancy… why? People have, and the curve isn't pretty at all: it looks like a conic section, where a few things form the canon and then it slopes away steeply in short order; most papers are never cited again. Only some books are frequently used. In short, there is a lot of irrelevance, and in part that is how it should be when there are specializations with fewer than 20 practitioners worldwide.
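
To make the shape of that curve concrete, here is a minimal sketch of my own (not from Duval's post or the Arms article) that simulates citations under an assumed “rich get richer” model and reports how many papers end up uncited. The paper count, citation count, and preferential-attachment probability are arbitrary assumptions chosen only to show the skew, not measurements of any real field.

```python
import random
from collections import Counter

# Toy cumulative-advantage ("rich get richer") citation model.
# All numbers are arbitrary assumptions for illustration only.
N_PAPERS = 10_000       # papers in the field
N_CITATIONS = 30_000    # citations handed out in total
P_PREFERENTIAL = 0.9    # chance a citation follows existing citations

random.seed(42)
cited = []              # multiset of already-cited paper ids
counts = Counter()

for _ in range(N_CITATIONS):
    if cited and random.random() < P_PREFERENTIAL:
        target = random.choice(cited)        # proportional to prior citations
    else:
        target = random.randrange(N_PAPERS)  # a paper picked uniformly at random
    counts[target] += 1
    cited.append(target)

never_cited = N_PAPERS - len(counts)
top_10 = sum(c for _, c in counts.most_common(10))

print(f"papers never cited: {never_cited} of {N_PAPERS}")
print(f"share of citations going to the top 10 papers: {top_10 / N_CITATIONS:.1%}")
```

Running it shows the pattern described above: a handful of papers collect a large share of the citations while most papers are never cited at all, even though every parameter in the model is made up.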