It also allows us to introduce a new publication metric based on the frequency with which works are taught, which we call the “teaching score.” The score is derived from the ranking order of the text, not the raw number of syllabus appearances, such that a book or article that is used in four or five classes gets a score of 1, while “The Republic,” which is assigned 3,500 times, gets a score of 100.
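The idea can be sketched in a few lines of code. This is a minimal illustration only, assuming a linear rank-to-percentile mapping (the exact formula is not specified here), and the titles and counts below are hypothetical:

```python
def teaching_scores(counts):
    """Map raw assignment counts to 1-100 scores by rank order.

    `counts` maps each title to the number of syllabuses assigning it.
    Titles are ranked by count; the most-assigned title scores 100,
    the least-assigned scores 1, interpolated linearly by rank.
    (Assumed mapping; the actual scoring formula may differ.)
    """
    ranked = sorted(counts, key=counts.get)  # ascending by count
    n = len(ranked)
    if n == 1:
        return {ranked[0]: 100}
    return {
        title: round(1 + 99 * i / (n - 1))
        for i, title in enumerate(ranked)
    }

# Hypothetical counts for illustration:
scores = teaching_scores({
    "The Republic": 3500,
    "Frankenstein": 2000,
    "Obscure Article": 5,
})
```

Because the score depends only on rank, a work assigned 3,500 times and one assigned 350 times would score similarly if their rank positions were close; this is what insulates the metric from raw-count skew.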

Many academics are uncomfortable with this sort of numerical reduction of intellectual work. Taken in isolation, we share this concern. But there is a broader context here: At present, publication metrics largely amount to counting the citations of a given academic work in other academic publications. And since it can take years for an influential work to accumulate citations, shortcuts have become popular, such as the “journal impact factor,” which scores journal articles based on journal rankings that are determined by the journals’ own frequency of citation in other journals.

Such metrics are controversial, and with good reason: They capture only a narrow range of the things academics do and value. Journals, in particular, exist mostly to advance the state of research in specialized fields, and therefore they privilege work that is often by necessity (though also sometimes just by ingrained habit) esoteric. This is not a bad thing, but when used to derive performance metrics, it creates a clear set of signals to academics about how they should be spending their time and whom they should be writing for.

If you like the idea of a more publicly engaged academy, you need to look elsewhere for incentives. And that’s where we think our “teaching score” metric could be useful. Teaching captures a very different set of judgments about what is important than publication does. In particular, it accords more value to qualities that are useful in the classroom, like accessibility and clarity. A widely taught but infrequently cited article is an important achievement, but an invisible one to current impact metrics.

We don’t think either approach to measuring the influence of a work is better than the other. But we do think that the academy is better off when it has multiple methods for valuing the wide range of work academics do.

An important caveat about the Syllabus Explorer results: They reflect the collection of syllabuses that we have gathered so far, which is large enough to give interesting results but far from complete. It is a work in progress on many levels, and one that depends on a culture of open bibliographic data-sharing in the academy.

Because of a complex mix of privacy and copyright issues concerning syllabuses, the Open Syllabus Project publishes only metadata, not the underlying documents or any personally identifying material (even though these documents can be viewed on university websites). But we think that it is important for schools to move toward a more open approach to curriculums. As universities face growing pressure to justify their teaching and research missions, we doubt that curricular obscurity is helpful.

We think that the Syllabus Explorer demonstrates how more open strategies can support teaching, diversify evaluation practices and offer new perspectives on publishing, scholarship and intellectual traditions. But as with any newly published work, that judgment now passes out of our hands and into yours.