Ars Technica published a piece on how Microsoft has dragged its development practices into the 21st century. What about the testing-specific activities within that stack?

Consider this snippet from the piece:

"The agile approach of combining development and testing, under the name "combined engineering" (first used in the Bing team), is also spreading. At Bing, the task of creating programmatic tests was moved onto developers, instead of dedicated testers. QA still exists and is still important, but it performs end-user style "real world" testing, not programmatic automated testing. This testing has been successful for Bing, improving the team's ability to ship changes without harming overall software quality."

When "How we test software at Microsoft" was published a few years back, I believe programmatic testers were a key component of that philosophy. If that task has been moved onto developers, I hope the work is still being done diligently by some other member of the team. But why move that work to developers instead of dedicated testers? Is this an attempt to save costs by eliminating an entire role? That does not make much sense, considering the work is still there and needs to be done by someone. Unless we are talking about going back to the good old days of buggy software. The whole idea of programmatic testers, in my experience, was to make it possible to develop low-level tests concurrently while components are being developed. Testing earlier in the cycle was key to the success of this philosophy.
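To make concrete what "low-level tests developed concurrently with components" means, here is a minimal sketch: a small hypothetical component and the programmatic unit tests written alongside it. The component and its names are illustrative assumptions, not anything from the article.

```python
import unittest

# Hypothetical component under development (illustrative, not from the article):
# a tiny parser whose tests are written at the same time as the code itself.
def parse_version(text):
    """Parse a 'major.minor' version string into a tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTests(unittest.TestCase):
    # Low-level programmatic tests: fast, automated, runnable on every build,
    # which is what lets defects surface early in the cycle.
    def test_parses_simple_version(self):
        self.assertEqual(parse_version("3.11"), (3, 11))

    def test_rejects_malformed_input(self):
        # No '.' separator means unpacking fails with ValueError.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
```

Whether a dedicated tester or a developer writes tests like these, the point is the same: they exist from the first build, not after a separate debugging phase.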

Elsewhere, the article notes:

"...but one victim group appears to have been the dedicated programmatic testers in the Operating Systems Group (OSG), as OSG is following Bing's lead and moving to a combined engineering approach. Prior to these cuts, Testing/QA staff was in some parts of the company outnumbering developers by about two to one. Afterward, the ratio was closer to one to one. As a precursor to these layoffs and the shifting roles of development and testing, the OSG renamed its test team to "Quality.""

Talking about developer-to-tester ratios across the board is, in my opinion, utter nonsense. Context is important. If the ratio is 2:1 or 1:1 and there is a real need, why not? Here is an example from my experience. If the project involves lots of client-side testing with technologies like HTML, JavaScript, and CSS that must be rendered across multiple browsers, screen sizes, and operating systems, the number of testers needed on such a project will be very high. That is a reflection of a lack of tools, poor standardization, and the rapid release cycles of operating systems and devices. If the defect rate is really high, that would also probably drive demand for more testers. Simply making the ratio 1:1 without addressing these other realities is just crazy.
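A back-of-the-envelope sketch shows why the client-side matrix drives tester demand: configurations multiply across dimensions. The specific browsers, platforms, and counts below are illustrative assumptions, not figures from the article.

```python
from itertools import product

# Illustrative test-matrix dimensions (assumed for the example).
browsers = ["Chrome", "Firefox", "IE", "Safari"]
operating_systems = ["Windows", "macOS", "Linux", "Android", "iOS"]
screen_sizes = ["phone", "tablet", "desktop"]

# Every combination is a configuration someone may need to test.
configurations = list(product(browsers, operating_systems, screen_sizes))
print(len(configurations))  # 4 * 5 * 3 = 60 configurations
```

Even with a handful of values per dimension the matrix grows multiplicatively, which is why tooling and standardization, not a fixed staffing ratio, determine how many testers a project actually needs.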

I suspect that during the debugging and fixing phase described in the article, programmatic testers probably played a big role, which makes it all the more striking that they are seen not as creators but as an added cost. There is potentially a fundamental belief at work here: that testers do not really create anything and are therefore just a cost.