This article is part of Future Tense, which is a partnership of Slate, the New America Foundation, and Arizona State University. On Wednesday, April 30, Future Tense will host an event in Washington, D.C., on technology and the future of higher education. For more information and to RSVP, visit the New America Foundation website.

With characteristically good timing, I started preparing the lectures for my first-ever MOOC in early December of last year—a few days before the Washington Post ran a piece titled “Are MOOCs Already Over?”

Here is what the Post reported:

New data from a University of Pennsylvania Graduate School of Education study raises big questions about the future of MOOCs. The study, which looked at the MOOC behavior of 1 million people who signed up for courses offered by the university on the Coursera platform from June 2012 to June 2013, found that only 4 percent completed the classes and that “engagement” of students falls dramatically in the first few weeks of a course.

Now that my MOOC, titled Buddhism and Modern Psychology, is wrapping up, I can report that the impending death of MOOCs—massive open online courses—is greatly exaggerated.

Not that I’m predicting they’ll revolutionize education. I’m not qualified to opine on that, in part because no one expects a course like Buddhism and Modern Psychology to revolutionize education in the first place. The great expectations are mainly about courses that impart knowledge of greater vocational value than, say, the Buddhist idea that the self doesn’t exist. You know: courses in computer science or math or accounting—courses that give poor kids in Africa and South Asia a chance to become anecdotes in a Thomas Friedman column.

No, my aim here is just to make one simple point: that the much-lamented and undeniably high “attrition rates” of MOOCs don’t really matter at all.

I don’t deny that, for the first few weeks of my six-week course, looking at my stats was depressing. Each lecture consisted of three or four segments, and the viewership for each segment was lower than that for the previous segment. So the bar graph I was seeing at midterm looked like this: down, down, down, down, down, down … and so on.

But then I realized: One reason it’s depressing to see a graph go down is that we like our graphs to go up—and this particular graph can’t go up. After all, lectures are viewed sequentially. A student starts with Segment 1 of Lecture 1 and goes from there. So even if there were zero attrition, the most you could hope for would be a flat line. And since some attrition is bound to happen, a downward slope is inevitable.

The fact that sequentially presented content pretty much always sees a declining participation rate is a grim truth that we’re in some contexts shielded from. I love to reflect on the fact that some of my books have sold in six figures, but one thing I’ve never seen is a chapter-by-chapter graph of actual readership. And I think I’d rather not, thanks. In 1985 Mike Kinsley, who later founded Slate, did an experiment to test the hypothesis that “much-discussed” books in Washington, D.C., don’t actually get read. He had an assistant visit local bookstores and insert, about three-fourths of the way through various books, a card with Kinsley’s phone number and the promise of a cash reward to anyone who called him. No money changed hands.

Of course, with an academic course there are ways you can limit the drop in participation. Here’s one: Make people pay a ton of money to attend your college and tell them they have to complete a given number of courses to graduate—which means that if they drop out of a course after five weeks, that’s five weeks of extra work they’ll have to do at some point before graduating. This is an effective incentive, and most college professors are familiar with one result: students in your class who would rather not be in your class. Looking at a downward-sloping bar graph, disconcerting though it is, is no worse than looking at such students.

With MOOCs, of course, the incentive structure is different. For starters, “enrolling” means you’ve clicked on a button that means, basically, “Sure, what the hell, send me an email when this course starts.” So it’s no surprise that, on average, nearly half of “enrolled” students don’t show up for class at all.

But even after that major culling, the downward slope continues to be pretty steep. So how steep is too steep? What’s an unacceptably high attrition rate? I maintain that there’s no such thing.

Here is what matters: How many students wind up absorbing how much material in your course? In my case the jury is still out, because the final lecture was posted a few days ago, and viewership for the lectures keeps growing for weeks. But it looks like, in the end, well over 10,000 people will have watched all the lectures and about 20,000 will have watched half of them.

How many will complete the final writing assignment? Those numbers aren’t in yet. But more than 2,000 finished the midterm assignment, and in a sense that number is amazingly high. These students not only had to write an 800-word essay; because these essays are “peer-assessed,” each student who decided to write the essay was agreeing to evaluate the essays of five other students. That’s a lot of work—which explains why courses that assign peer-assessed essays have lower completion rates than the average MOOC.

And in exchange for all that work, those 2,000-plus students will get no diploma, no course credit, not even a “certificate of completion.” (Whether Coursera courses offer a certificate depends on the policies of the school where the course originates.) These students just wanted to do the assignment. If you’re a professor at a “real” college, the preceding sentence may not be a very familiar one.

Regardless of which number you want to focus on—students who watched all the lectures, or students who completed all the course work—and regardless of whether you consider the numbers for my course impressive, my point is just that, if you’re assessing the viability of MOOCs, this is the variable that matters: number of students still participating at the end, not what percentage of those who enrolled are participating at the end.

Why is “number still participating” the key variable? Because it says so much about the future supply of and demand for MOOCs.

First, on the demand side: The demand for MOOCs will depend on whether people see themselves benefiting from taking them. And the number of students who stick around for the whole course roughly captures the aggregate perceived benefit. After all, unlike at a “real” college, there’s no reason to finish a given course other than perceiving real, specific benefit from it.

As for the supply side: Though the downward-sloping participation curve is at first glance a downer, what most professors will, upon reflection, really care about is how many students they wound up reaching in a pretty thorough way. So the number of students still participating at the end is a good predictor of how many professors will consider it worthwhile to keep teaching these courses. Of course, professors may have various specific motivations for teaching an online course—they may assign books they’ve written, etc.—but the strength of all the specific motivations I can think of will correlate with number of students reached, not percentage of enrollees reached.

Here’s another way to look at why number of students reached is the metric that matters to professors. Suppose Coursera came to me and said: We’re thinking about expanding our marketing campaign, reaching out to vast numbers of people even though many of them are unlikely to wind up finishing your course; this will lead to an additional 1,000 students who love your course and watch all the lectures, but it will also net 10,000 who sign up and never show up for class, and 5,000 who sign up and watch the first lecture but don’t watch all of them.

How should I react? Should I say, “No, please don’t deliver another 1,000 satisfied students, because that’s going to make my downward curve even steeper, and then if anyone at a cocktail party ever asks about my MOOC slope I’ll feel embarrassed?”

Or suppose Coursera decided that students shouldn’t sign up for courses too casually and started saying students could sign up for only one course per month. That would weed out a lot of students who tried my course on a lark and bailed out during the first lecture. But it would also weed out some students who wound up loving my course. Faced with that prospect—making my downward curve less steep, but also making its endpoint lower—I’d say no thanks.

Lots of factors will determine whether MOOCs wind up being important—and MOOCs will in any event evolve, maybe to the point of being barely recognizable descendants of their current selves. But in the near term their viability will depend very heavily on whether students want to take them and whether capable professors want to teach them. And the best predictors of both of those are raw numbers, not percentages. (But if you must know: My initial enrollment—or, rather, my initial “enrollment”—was 59,000.)