A few days ago I wrote about the claims that Windows 7 was a memory hog, and that Windows 7 systems tended to be short on memory. The claims were made by "Craig Barth," CTO of Devil Mountain Software, a Florida-based company that has a small utility that collects Windows performance data and sends it to DMS's servers, where it is then collated and interpreted.

For its part, DMS was unimpressed with our coverage. "Company representatives" made a blog post "rebutting" my original coverage. Though that post has largely been redacted, it included a number of graphs based on my system data and confirmed that, after "reviewing" my data, the reason their software claimed my system was low on memory is that it is. This seemed shady, to say the least: publishing data specific to my system in an attempt to score points. Clearly, the entire XPnet system offers vanishingly little privacy or security. Nonetheless, the "researchers" at DMS were adamant that I was low on memory.

The truly dishonest nature of DMS became apparent this weekend. Larry Dignan at ZDNet uncovered that Craig Barth simply doesn't exist. Devil Mountain Software is actually operated by tech blogger Randall C. Kennedy who was, until the weekend, a paid writer for InfoWorld. "Was" because he's been let go after his failure to disclose his relationship with DMS.

When writing for InfoWorld, Kennedy regularly promoted DMS's XPnet. His writing routinely parroted DMS's scandalous memory-usage "findings" in what turned out to be shameless, but secretive, acts of self-promotion. InfoWorld too promoted DMS's software agent, under the branding Windows Sentinel.

For his part, Kennedy claims that InfoWorld knew all along about his dual identities—the implication being that at InfoWorld, it's perfectly cool to pimp your own company's software under the guise of "journalism"—and further, and far more excitingly, that this was a deliberately orchestrated, Microsoft-organized smear campaign. The multibillion dollar software giant is, apparently, so extraordinarily damaged by Kennedy's harmful allegations that it called in favors to get ZDNet to reveal that Randall C. Kennedy has lied about who he is (when Craig Barth was directly asked if he was Kennedy, the response was an outright denial) and misrepresented his relationship with companies he has written about. Given that the influence of his coverage appears minimal and that ZDNet has only now revealed the truth, this is more than a little outlandish. For its part, InfoWorld explicitly denies the claim that its editors were in on the deception (see Eric Knorr's comment in the comments section of the post).

The fact that Randall C. Kennedy was caught in a falsehood does not mean that he is wrong on technical issues. One can create personas, shill for one's own company when working as a paid blogger, and claim persecution by Microsoft, but these things do not in and of themselves mean he is wrong about the Windows 7 memory usage. No, he's wrong because he is misinterpreting and misunderstanding the statistics collected by Windows and using them to draw misleading conclusions.

As an added bonus, Kennedy now claims that the "low memory" warning that his software gave my system was a configuration error of some unspecified kind, and that after analyzing my data (making it the second time that DMS has examined my data), my system isn't low on memory at all!

There's no way I'm low on memory. Truly.

Unfortunately, after verifying that the configuration is indeed what Kennedy says it should be, the software still claims my system is low on memory, even though it isn't.

Clarifying the role of SuperFetch

There was one error in my original article. I still stand by the conclusions I made, but I was wrong to suggest SuperFetch was involved. The DMS software uses three counters to gauge memory usage: "committed bytes," "page in," and "pagefile usage" (which is a percentage). Windows has two pools of "memory" available to it: physical RAM and pagefile space. The sum total of this storage is measured by a performance counter called "commit limit."

Whenever a program asks Windows to allocate some memory, the allocation is added onto the count of committed bytes. It's an accounting thing; Windows will only allocate (commit) as much memory as it can back with physical memory and the pagefile. In other words, committed bytes can never be larger than the commit limit. This is in contrast to some other OSes like Linux, which can "overcommit": they can allocate more memory to applications than they actually have available to them.

The reason that this is safe is that in both Windows and other OSes, an allocation does not actually use any memory. The allocation only uses memory when the application tries to read or write from the allocated memory. Until that point, it's in a kind of limbo. Windows still accounts for it (because it could use memory if the application tries to use it), but it doesn't reduce the amount of physical memory that's in use, and it doesn't occupy any space in the pagefile.
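To make the accounting concrete, here's a toy sketch of strict commit accounting in Python. The names (CommitTracker, commit, touch) are invented for illustration; this is a simplified model of the bookkeeping, not a real Windows API.

```python
# A toy model of strict commit accounting, as Windows does it.
# CommitTracker, commit, and touch are invented names, not a real API.

class CommitTracker:
    def __init__(self, ram_bytes, pagefile_bytes):
        # The commit limit is the sum of the two backing pools.
        self.commit_limit = ram_bytes + pagefile_bytes
        self.committed_bytes = 0   # promised to applications
        self.used_bytes = 0        # actually backed by RAM or pagefile

    def commit(self, size):
        # Strict commit: refuse up front any allocation that could not
        # be honored, so committed bytes never exceeds the commit limit.
        if self.committed_bytes + size > self.commit_limit:
            raise MemoryError("commit limit exceeded")
        self.committed_bytes += size

    def touch(self, size):
        # Memory is only "used" once the program reads or writes it.
        self.used_bytes += size


mem = CommitTracker(ram_bytes=2 * 2**30, pagefile_bytes=2 * 2**30)
mem.commit(3 * 2**30)   # succeeds: 3 GiB promised, nothing used yet
print(mem.committed_bytes, mem.used_bytes)
```

Note the gap between the two numbers: a program can be charged 3 GiB of commit while using nothing at all, which is exactly why committed bytes overstates real memory pressure.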

Linux uses overcommit because it assumes that a reasonable proportion of allocated memory is never actually used, and so it would be wasteful to ensure that every allocated piece of memory has somewhere to store it. This is actually not a bad assumption, in general. For various reasons, it's quite common for programs to demand more memory than they use. With that in mind, it's normally safe to allocate more memory than the OS can get its hands on, because the OS will rarely be in the situation that it has to make good on all of its allocations. On those rare occasions where this does happen—where Linux has to tell the application, "You know that memory I said you could have? Turns out I was lying; it doesn't exist"—Linux is forced to terminate processes to free up some memory.
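The overcommit policy can be sketched in the same toy style. Again, OvercommitTracker is an invented name, and real Linux behavior (the vm.overcommit_memory settings, the OOM killer's victim selection) is considerably more nuanced than this.

```python
# The same toy style, but overcommitting, Linux-fashion.
# OvercommitTracker is an invented name for illustration only.

class OvercommitTracker:
    def __init__(self, ram_bytes, swap_bytes):
        self.capacity = ram_bytes + swap_bytes
        self.committed_bytes = 0
        self.used_bytes = 0

    def commit(self, size):
        # Allocation always succeeds, even beyond real capacity, on the
        # bet that much of what was promised will never be touched.
        self.committed_bytes += size

    def touch(self, size):
        # The reckoning comes at use time: when the bet fails, Linux has
        # to kill a process to free memory. Model that as an error here.
        if self.used_bytes + size > self.capacity:
            raise MemoryError("no memory left; the OOM killer runs")
        self.used_bytes += size


mem = OvercommitTracker(ram_bytes=2 * 2**30, swap_bytes=2 * 2**30)
mem.commit(6 * 2**30)   # fine: more promised than could ever be backed
mem.touch(3 * 2**30)    # fine: actual use is still within capacity
```

The contrast with the strict model is in where the failure lands: a strict-commit OS fails the allocation call, while an overcommitting OS fails later, at the moment of use, when no graceful error path exists.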

This can't happen on Windows, but the tradeoff is that Windows will tend to need a larger pagefile: big enough to ensure that every memory allocation can be made real, just in case it has to do so. Given that disk space is normally plentiful and that Windows has long used flexible pagefiles rather than static swap partitions, this too is a reasonable tradeoff to make.

All this should explain in part why committed bytes is not a counter worth using. Committed bytes includes all those allocations made by applications that Windows has to account for (to prevent overcommitting) but which aren't actually being used.

My mistake was in believing that committed bytes included cached data (after all, physical memory being used as cache is indeed in-use, so it seemed reasonable to charge it against committed bytes). This was wrong; cached data lives in a special state. It isn't quite the same as free memory (since it does contain some information), but it isn't exactly in-use, either (since it can be instantly discarded if necessary).

The other two counters that DMS used are similarly lacking in relevance. The Page Ins counter indicates the number of pages of memory that have to be read from disk. Pages are one of the ways in which memory is organized by the OS; the OS can only allocate memory in page-size chunks (usually 4096 bytes), so any allocation is a whole number of pages, any read or write to the pagefile is a whole number of pages, and so on. It's true that a page in occurs when data has to be read from the pagefile after previously being discarded from physical memory, and this is certainly an event you don't want to occur regularly.
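The whole-pages rule is easy to illustrate: even a 1-byte request costs a full page. A quick sketch of the rounding:

```python
# Allocations are rounded up to whole page-sized chunks; 4096 bytes is
# the usual page size mentioned above.

PAGE_SIZE = 4096

def pages_for(nbytes):
    # Round up to a whole number of pages.
    return (nbytes + PAGE_SIZE - 1) // PAGE_SIZE

print(pages_for(1))      # 1 -- a single byte still costs a full page
print(pages_for(4096))   # 1 -- exactly one page
print(pages_for(4097))   # 2 -- one byte over spills into a second page
```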

However, that's not the only thing that can cause a page in. I was quite deliberate when I said "read from disk"; I did not mean "read from pagefile." The pagefile is one of the things on the disk that can fulfill a page in operation, but it is not the main thing. Executables and libraries are also paged in. When you run a program (or when that program loads a DLL), Windows does not load the entire program into memory. Instead, only those portions of the executable that are actually needed are loaded into memory. Everything else is left residing on-disk, to reduce the pressure on the machine's physical memory. Each time a new part of the executable needs to be read from disk (due, for example, to choosing a menu item in the program that hadn't previously been used), this incurs a page in.
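Demand-loading of executables can be modeled the same way: each part of the file stays on disk until first use, and each first use counts as a page in, with no pagefile involved at any point. The LazyExecutable class below is a made-up illustration, not how the Windows loader is actually structured.

```python
# A made-up model of demand-loading an executable: sections remain on
# disk until first touched, and each first touch counts as a page in.

class LazyExecutable:
    def __init__(self, sections):
        self.on_disk = dict(sections)  # section name -> size in bytes
        self.resident = set()          # sections loaded into memory
        self.page_ins = 0

    def use(self, section):
        if section not in self.resident:
            # First touch: read this part of the file from disk.
            self.resident.add(section)
            self.page_ins += 1


exe = LazyExecutable({"main": 8192, "print_dialog": 4096, "help": 4096})
exe.use("main")            # program starts: page in
exe.use("main")            # already resident: no page in
exe.use("print_dialog")    # user picks Print for the first time: page in
print(exe.page_ins)        # 2
```

The "help" section never costs anything because it was never touched, which is the whole point: page ins track disk reads of any kind, not pagefile distress.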

If you regularly start programs, you will regularly experience page ins. Not because the system is low on memory or having to read from the pagefile a lot, but because that's how programs are loaded by Windows (and every other modern OS, in fact). Starting programs is a regular occurrence on my computer; Web browsers like Google Chrome and Internet Explorer 8 start a new process for every tab you create, for example, and I create tabs often. Page Ins can be indicative of problems, but that is far from guaranteed.

So anyway, I continue to stand by my claim that DMS's software cannot diagnose the situation it is claiming to diagnose (low memory) and hence the claim that a large proportion of Windows 7 machines are low on memory cannot be justified on the basis of DMS's software. The claims look more like self-serving hype. It's a common technique; Microsoft is a big target, and any news that's bad for Microsoft tends to be good for pageviews.