Automated Testing of ASP.NET MVC Applications

Artem Smirnov, Developer of Ivonna for Typemock Isolator, www.typemock.com

Introduction

For many years, developers who practiced unit testing were frustrated by the numerous problems they faced when trying to apply automated testing to ASP.NET sites, in particular those built with the WebForms framework (which was, for many, a synonym of ASP.NET). Not so long ago, Microsoft developed a new ASP.NET framework, called ASP.NET MVC (Model-View-Controller). One of the selling points of this framework was its testability. In this article, we discuss what the problems are with testing ASP.NET sites in general, and how the ASP.NET MVC framework tries to solve them.

The Model-View-Controller approach (source: microsoft.com)

What's different about testing ASP.NET applications?

When you are developing a library or an application, you "own" the code. This means that, provided you follow certain guidelines, you can automate testing of all the features of your product, or at least all those worth testing. While you typically use third-party libraries, or system resources like a database or an Internet connection, these can be considered "enhancements" sitting on top of your code.

With ASP.NET, the situation is reversed. What you develop are, essentially, enhancements to IIS (or another hosting process). This is not unique to ASP.NET, of course - the same holds true for any plugin development. Your code is a "guest" in the host system, and should behave accordingly. You can add as many classes as you wish, and make them testable, but there is always a boundary layer that communicates with the framework. The host system creates these boundary classes and calls certain methods that you provide, so that you have a brief moment of control. In WebForms, this layer is made of ASPX pages and code-behind files. In ASP.NET MVC, it is Controllers and Views.

For testing, a developer can take two very distinct paths: either test the main system together with the custom code (integration testing), or test the custom code in isolation, somehow dealing with the absence of the main system (unit testing).

Integration Testing

End-to-end, or integration, tests verify the behavior of the system as a whole. One can start, for example, with a certain URL, and inspect the HTML output plus the side effects (a DB record inserted, an email sent, some money written off our account, etc.). Based on such a test, we can conclude that everything works as expected, or that something is wrong. Given the HTML response, for example, all we can do is search for a certain string and hope that if we find it, everything else is as expected as well. If we don't find it, we know that something went wrong, but we don't know what. This is known as "black box" testing - we provide an input and inspect the output, but we don't know what's inside.
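The "search for a string" style of black-box assertion can be sketched as follows (the page markup and order number are invented for illustration; a real test would fetch the HTML over HTTP from the running site):

```csharp
using System;

public class BlackBoxCheck
{
    // In a real integration test this HTML would come from an HTTP request
    // to the deployed application; it is inlined here only to show the
    // shape of the assertion.
    public static string FetchOrderConfirmationPage()
    {
        return "<html><body><h1>Thank you!</h1>" +
               "<p>Order #1042 has been placed.</p></body></html>";
    }

    public static void Main()
    {
        string html = FetchOrderConfirmationPage();

        // All a black-box test can do is look for an expected fragment;
        // if it is present, we *hope* everything else worked as well.
        if (!html.Contains("Order #1042 has been placed"))
            throw new Exception("Confirmation text not found");

        Console.WriteLine("OK");
    }
}
```

Note how little the test actually proves: a matching string tells us nothing about the DB record or the email, which is exactly the limitation described above.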

We can push this even further, and test our client code together with the server code. While the former tests mimic a browser (set up an HTTP request, inspect the response), the latter mimic a user ("click" a button, inspect the content of a certain HTML element). Typically, we write code that "drives" a browser, automating all the clicking for us. Several important qualities characterize such tests:

We are testing the system behavior as it is presented to the end user, so if the tests pass, the system is expected to work correctly.

If a test is broken, we don't know whether it is a problem with the client or the server code, or the incompatibility between these two.

Such tests are really slow. The clicking happens faster than a human would do it, but still much slower than the code processing. A complex system can take several hours to test.

Complicated setup and cleanup: Often we have to perform several steps before we even get to the page we want to test. For example, testing a password protected page would require visiting the login page first.

Tests are hard to maintain, as there is a lot of test code involved that is not directly related to the system.

Whether we test the client code or not, there are some general problems with such tests, and these problems do not depend on a particular framework or programming language:

If something's wrong, we are not sure about what needs to be fixed.

More often than not, some tests start to fail because of a non-functional change (such as changing an ID of an HTML tag).

It is hard to control the external dependencies. For example, if we test an e-commerce application, we don't want to bill somebody's account each time we run a test, yet we need to get a positive response from the billing system, so that we can feed it into our system.

Integration tests alone do not drive your system to a better design. After all, they don't care about the implementation.

So, while integration tests are very important, and should be a part of any test suite in order to ensure that your system works properly as a whole, it is essential to use unit testing in your development process.

Unit testing

As we saw, when working with a framework that hosts your code, you have to create a number of "boundary" classes that interface with that framework. It is these classes that are sometimes hard to test, and they have to be designed carefully in order to keep both your hosting framework and your test runner happy. One of the biggest challenges is testing a boundary class without the "real" hosting system up and running. For example, the WebForms framework is responsible for parsing the ASPX pages, assembling the controls into a web page, and firing certain events now and then. In order to write a sensible test for a WebForms page, you need at least the page itself built up properly, meaning that you have to do a lot of work that is normally done for you by the framework. Another problem is that often the tested code depends on certain objects that only the framework can provide - for example, the infamous HttpContext object, which cannot be created manually, and its "children": HttpRequest, HttpResponse, etc.

Typically, the only way out of this situation is to make the boundary classes serve as mere adapters, and put the real "meat" into other classes - those that do not interact directly with the framework, or at least are easy to set up manually and test in isolation. This supposedly leads to better code as well, since, for example, moving the business and DAL logic away from the code-behind leads to more maintainable applications. The boundary classes themselves are left untested. One example is when, rather than putting your data access in the code-behind, you move it to DataObject classes that are used by ObjectDataSource controls. These classes, although created and used by the framework, are relatively easy to create and test in isolation.

This is where it becomes important whether the framework was designed with testability in mind. A lot of people (me included) tried to follow this path for WebForms applications by applying the MVP pattern. The result was a lot of plumbing code that required an effort far exceeding the testability benefits, and made the code harder to maintain.

ASP.NET MVC: Testability Paradise

After listening to developers criticize the WebForms design and testability issues for several years, Microsoft gave us the ASP.NET MVC framework. It has been designed to solve many shortcomings of WebForms, including the testability problems. The main "boundary" classes are now Controllers. Although they are created by the framework, and have to conform to some rules, they are quite easy to set up in tests, and they can happily exist without the ASP.NET request pipeline. What's important, we are not limited to default constructors: we can inject dependencies into our controllers, making it possible to mock our services and test controllers in isolation. Another major benefit is that controllers are totally independent of the UI. It is widely agreed that the UI is hard to test, and code-behind classes suffered from this "burden". Now we've got Views, which are still hard to test (although some effort has been made in that direction - see, for example, a blog post from an architect on the ASP.NET team), but this does not affect testing our Controllers.
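A minimal sketch of this idea, using a hypothetical ProductController and IProductService, with a hand-rolled stand-in for the framework's ViewResult so the sample is self-contained and needs no System.Web.Mvc reference:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical service behind the controller.
public interface IProductService
{
    IList<string> GetProductNames();
}

// Simplified stand-in for the framework's ViewResult type.
public class ViewResult
{
    public object Model;
}

public class ProductController
{
    private readonly IProductService _service;

    // The dependency is injected, so a test can pass in a fake
    // instead of a real database-backed service.
    public ProductController(IProductService service)
    {
        _service = service;
    }

    public ViewResult Index()
    {
        return new ViewResult { Model = _service.GetProductNames() };
    }
}

// A hand-rolled fake; a mocking library would do the same job.
public class FakeProductService : IProductService
{
    public IList<string> GetProductNames()
    {
        return new List<string> { "Tea", "Coffee" };
    }
}

public class ControllerDemo
{
    public static void Main()
    {
        // No request pipeline, no HttpContext: just construct and call.
        var controller = new ProductController(new FakeProductService());
        var model = (IList<string>)controller.Index().Model;
        Console.WriteLine(model.Count);
    }
}
```

The test never touches the web server: it constructs the controller directly, which is exactly what was impossible to do cleanly with a WebForms page.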

Another step toward testability is addressing the HttpContext nightmare. Microsoft took the "thin wrapper" approach, which, admittedly, is the only way when you deal with the vast amount of existing code that should be easily upgradeable to the new ASP.NET version. In terms of testability, I'd say it moves us from impossible to very hard. And if we recall the popular belief that testability equals good design, this case is a great example of the opposite. So, what we gained here is the possibility of writing unreadable tests with chained partial mocks, plus having to use the Http* wrapper classes in your production code without any real benefit to the application design.
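The wrapper idea in miniature, with invented RequestBase/StubRequest types standing in for the real HttpRequestBase hierarchy (the real wrappers have far larger surfaces, which is what makes mocking them so painful):

```csharp
using System;

// The framework's real request object cannot be instantiated in a test,
// so an abstract wrapper is introduced and production code is written
// against the wrapper instead.
public abstract class RequestBase
{
    public abstract string QueryString(string key);
}

// Test double: no web server, no request pipeline.
public class StubRequest : RequestBase
{
    public override string QueryString(string key)
    {
        return key == "page" ? "3" : null;
    }
}

// Production code depends only on the abstract wrapper.
public static class Paging
{
    public static int CurrentPage(RequestBase request)
    {
        int page;
        return int.TryParse(request.QueryString("page"), out page) ? page : 1;
    }
}

public class WrapperDemo
{
    public static void Main()
    {
        Console.WriteLine(Paging.CurrentPage(new StubRequest()));
    }
}
```

With a one-method wrapper this looks harmless; the pain described above comes from the real Http* wrappers exposing dozens of members, so a test has to chain several partial mocks just to satisfy one code path.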

Most developers tend towards one of two extremes. One, the "Big Ball of Mud" approach (which MVC does not save us from), is to put all the request handling code in the Action method of the controller. Writing tests for such methods is essential, since the code can become very complicated. These would be, of course, integration tests: each would test everything that happens during the request. However, the size of the "black box" is now much smaller, and we can even peek at what's inside it, since our test runs in the same process as the code being tested.

The other tendency, dubbed "Slim Controller", follows the idea that the sole responsibility of a controller is to wire up various services and model classes. Each Action method looks like a piece of configuration in an executable form. Typically, a test for such a method involves lots of mocking, and essentially duplicates the method itself (although in an unreadable form). Such tests should generally be avoided, since they don't add any value to the system and are only a maintenance problem. The rest of the classes involved in the Action method are either business layer services (which are testable or not regardless of the framework), or infrastructure classes like custom Action Filters, Model Binders, etc., which are mostly testable, but with some considerable effort (see, for example, Scott Hanselman's post about unit testing ASP.NET MVC custom Model Binders).

So, there are two extremes: bad design with integration tests, or good design with brittle and useless unit tests. This is why many developers who favor good design tend not to write tests for controllers at all, concentrating their efforts instead on testing the underlying classes.

How about test-driving our design?

Controller testability can be a good encouragement for a better Action method design. For example, it's a bad idea to use HttpContext.Current in your code. And while you can use HttpContextBase as a mockable dependency (see the discussion of the Http* wrappers above), the mere fact that your tests are hard to write and maintain should tell you that you shouldn't do it either. Fortunately, you can use Model Binders, Action Filters, and other infrastructure classes to shield your code from the framework. That still leaves you with a piece not covered by tests, but at least this piece is now part of the infrastructure, and is not supposed to change often. Such classes are very simple, since their only responsibility is to extract information from the framework-specific classes and present it in a framework-agnostic form.
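A sketch of such a shielding class, with a hypothetical Money type and binder. The real IModelBinder interface receives ControllerContext and ModelBindingContext arguments; here the raw form values are modeled as a plain dictionary to keep the sample framework-agnostic:

```csharp
using System;
using System.Collections.Generic;

// The framework-agnostic object the rest of the application works with.
public class Money
{
    public decimal Amount;
    public string Currency;
}

public class MoneyBinder
{
    // The only piece that knows how the request data is shaped; actions
    // receive a ready-made Money and never touch the request themselves.
    public Money Bind(IDictionary<string, string> form)
    {
        return new Money
        {
            Amount = decimal.Parse(form["amount"],
                System.Globalization.CultureInfo.InvariantCulture),
            Currency = form["currency"]
        };
    }
}

public class BinderDemo
{
    public static void Main()
    {
        var form = new Dictionary<string, string>
        {
            { "amount", "19.95" }, { "currency", "EUR" }
        };
        var money = new MoneyBinder().Bind(form);
        Console.WriteLine(money.Currency);
    }
}
```

The binder itself is trivial to test with an in-memory dictionary, and the Action methods that consume Money need no framework objects at all - which is the whole point of the shielding.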

Is it perfect?

As with most applications, unit testing alone is not enough. This is especially true when your code interfaces with a framework that does not participate in your unit tests. Since the boundary part is mostly untested, and can become rather wide (especially if you start using infrastructure classes like custom Route Providers and Model Binders), it is essential that you write a number of tests that make sure all your parts work together as expected.

Another potential problem is testing the Views. While this problem should discourage you from putting considerable amounts of code in your Views, sometimes, in order to make your Views maintainable, you have to use Helpers, Partials, and even render Child Actions, and the result can become much more complicated than a simple piece of data-bound HTML. Add to the mix a lot of "magic" that the framework conveniently does for you, and you'll find yourself in a situation where a simple change suddenly breaks things in an unpredictable way. Having a safety net of at least simple smoke tests can save you from numerous regression bugs.

I guess the correct question would be not "is it perfect?", but rather "could they have made it better?". My answer is, given the circumstances, not much. On one hand, looking at the FubuMVC framework, which I started to use recently, I see that it can be better. On the other hand, the general idea of making a Ruby on Rails clone for ASP.NET must have forced a number of API decisions (e.g. returning an ActionResult from the Action method). If we go deeper and try to test some advanced infrastructure code (e.g. a custom convention for View location), we discover that Microsoft's infamous preference for inheritance over composition makes writing such tests hard.

However, I should admit that for 99% of development efforts these issues are not a big problem. It is definitely good that Microsoft realized the importance of automated testing, and made testability one of the cornerstone principles of the ASP.NET MVC framework. Thanks to that decision, and its proper implementation, developers can make Web sites that are fully tested and have far fewer bugs. Because of that, the Web has become a better place.


This article was originally published in the Spring 2012 issue of Methods & Tools