Update: Because this post has gotten a bunch of views already, and I definitely don't want to spread any misinformation here, I've updated my conclusion (jump to the bottom if you've already read the protip).

The `this` keyword in JavaScript can be really confusing. That I won't dispute. And so a lot of developers I really respect actually advocate for avoiding it altogether, which is definitely possible. You can sort of ignore the existence of JavaScript's prototype system and follow your own object-building approach:

```javascript
function createValueObject(value) {
  return {
    get: function() {
      return value;
    }
  };
}
```

Or you can embrace the prototype system:

```javascript
function ValueObject(value) {
  this.value = value;
}

ValueObject.prototype.get = function() {
  return this.value;
};
```

Obviously this is a contrived example; it's just meant to concisely illustrate the two approaches I'm talking about.

The readability of these two examples is a debatable issue. There are certainly many valid reasons for preferring the former, including its avoidance of `this`. However, if performance is a serious concern, you should consider going with the second approach. Using a prototype to define the methods of an object is faster pretty much across the board, though how much faster depends on the browser.
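If you want to get a rough feel for the difference yourself outside of jsPerf, here's a crude timing sketch (the `time` helper and the iteration count are my own additions, not part of the original comparison; a real benchmark harness controls for JIT warm-up and this does not):

```javascript
// The two object-building approaches under comparison.
function createValueObject(value) {
  return { get: function() { return value; } };
}

function ValueObject(value) {
  this.value = value;
}
ValueObject.prototype.get = function() { return this.value; };

// Crude helper: run fn a million times and report wall-clock time.
function time(label, fn) {
  var start = Date.now();
  for (var i = 0; i < 1e6; i++) fn(i);
  console.log(label + ": " + (Date.now() - start) + "ms");
}

time("factory  ", function(i) { return createValueObject(i); });
time("prototype", function(i) { return new ValueObject(i); });
```

Absolute numbers will vary wildly by engine and machine, which is exactly why something like jsPerf is the better tool for the actual comparison.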

What is the big difference here?

A Reddit user pointed out that my closing paragraph (below) implies the big difference between these two approaches has to do with the efficiency of method invocation. Re-reading the paragraph, I have to agree that it does seem like that's what I'm saying. But that is wrong. If you compare just the method invocations in both examples--calling both the factory method and the constructor beforehand--you'll see that they show basically the same performance.

The real difference between the factory approach and the prototype approach is that using a prototype speeds up object creation a lot. Which, when you think about it, is really not so surprising: it's simply the difference between defining methods one time and re-defining them over and over.
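You can see that re-definition directly by comparing method identity across instances. In this sketch (reusing the two constructors from above), the factory produces a brand-new `get` closure for every object, while the prototype version shares a single function:

```javascript
function createValueObject(value) {
  return { get: function() { return value; } };
}

function ValueObject(value) {
  this.value = value;
}
ValueObject.prototype.get = function() { return this.value; };

var a = createValueObject(1);
var b = createValueObject(2);
// Each factory call defines a fresh function object.
console.log(a.get === b.get); // false

var c = new ValueObject(1);
var d = new ValueObject(2);
// Both instances delegate to the one function on the prototype.
console.log(c.get === d.get); // true
```

That per-object function allocation is the work the prototype approach avoids at creation time.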

The connection I was trying to make with the link to Eric Lippert's post is really based on the second installment in the series, in which he implements virtual methods by creating a delegate field for every method of a class. In the final installment he makes this much more efficient by using a vtable instead.

To be fair, the connection--even now that I've clarified it--isn't perfect. In particular, what makes a vtable so preferable to delegate fields is that it is much more memory efficient, and memory efficiency is obviously a different animal from execution speed (which is what jsPerf directly measures). But the comparison was apples to oranges to begin with, since C# as a statically compiled language does not give you the ability to dynamically define methods in quite the same way JavaScript does (lambdas seem like that, but in reality they get compiled to generated classes that lift local variables into instance fields... but that's a whole other discussion!).

My original conclusion

The why is a question for another post. I can't speak with authority on that, but I have a strong suspicion it relates to JS engines' use of hidden classes internally and the efficiency of vtables. (To get an idea of what I'm talking about, I recommend reading Eric Lippert's series on implementing the virtual method pattern in C#, which goes over the efficiency considerations in designing a method lookup system. Clearly C# is not JavaScript, but I think similar principles may be at play here.)