When I encounter an API in Unity that I am unfamiliar with, the first thing I (and most of us) do is go to the Unity Scripting API Manual to see how it works via one of the examples. If that example will not compile when I try it, I assume that I must be doing something wrong. The example couldn’t possibly be broken, could it…?

This is how I discovered that we do indeed have examples in our scripting docs that do not compile, as a result of API changes over time and the odd case of an example that never compiled to start with. At Unity we have a lot of freedom in how we work; if we see a problem we can report it to the relevant team or fix it ourselves. At one of our recent Sustained Engineering team weeks we decided to do our own hackweek and picked several issues we wanted to tackle. Some of us chose to look into a solution for the broken examples in the scripting docs.

There are about 15,000 scripting docs pages. Not all of them contain examples (a different problem which we are working to improve); however a large portion do. Going through each example and testing it manually would be unachievable in a week, and it would not solve the problem of API changes or of broken examples being written in the future either.

Last year as part of the Unity 5.3 release we included a new feature called the Editor Test Runner. This is a unit test framework that can be run from within Unity. We have been using the Editor Test Runner internally for our own automated tests since its introduction. I decided to tackle the problem using an editor test. All our scripting docs are stored in XML files which we edit through an internal Unity project.

The code to parse all these files is already available in this project so it made sense to add the editor test into the same project so we could reuse it.

In our editor test framework (which is using NUnit) there is an attribute that can be applied to a test called TestCaseSource. This lets a test be run multiple times with different source data. In this case the source data would be our list of script examples.

public class ScriptVerification
{
    public static IEnumerable TestFiles
    {
        get
        {
            // Get all the xml files
            var files = Directory.GetFiles("OurDocsApiPath", "*.mem.xml", SearchOption.AllDirectories);

            // Each file is a separate test.
            foreach (var file in files)
            {
                string testName = Path.GetFileName(file).Replace(k_FileExtension, "");
                yield return new TestCaseData(file).SetName(testName);
            }
        }
    }

    [Test]
    [TestCaseSource("TestFiles")]
    public void TestDocumentationExampleScripts(string docXmlFile)
    {
        // Do the test
    }
}

Using this method now shows a list of all the tests that will be run in the test runner. Each test can be run individually or they can all be run using the Run All option.

To compile the examples we use CodeDomProvider. It allows us to pass in one or more strings that represent a script, and it will compile and return information on errors and warnings.

This is a cut-down version (XML parsing removed) of the first iteration of the test:

using UnityEngine;
using NUnit.Framework;
using System.CodeDom.Compiler;
using System.Collections;
using System.Reflection;
using System.Xml;
using System.IO;
using UnityEditor;

public class ScriptVerification
{
    public static IEnumerable TestFiles
    {
        get
        {
            // Get all the xml files
            var files = Directory.GetFiles("OurDocsApiPath", "*.mem.xml", SearchOption.AllDirectories);

            // Each file is a separate test
            foreach (var file in files)
            {
                string testName = Path.GetFileName(file).Replace(k_FileExtension, "");
                yield return new TestCaseData(file).SetName(testName);
            }
        }
    }

    CodeDomProvider m_DomProvider;
    CompilerParameters m_CompilerParams;

    [SetUp]
    public void InitScriptCompiler()
    {
        m_DomProvider = CodeDomProvider.CreateProvider("CSharp");
        m_CompilerParams = new CompilerParameters
        {
            GenerateExecutable = false,
            GenerateInMemory = false,
            TreatWarningsAsErrors = false,
        };

        Assembly unityEngineAssembly = Assembly.GetAssembly(typeof(MonoBehaviour));
        Assembly unityEditorAssembly = Assembly.GetAssembly(typeof(Editor));
        m_CompilerParams.ReferencedAssemblies.Add(unityEngineAssembly.Location);
        m_CompilerParams.ReferencedAssemblies.Add(unityEditorAssembly.Location);
    }

    [Test]
    [TestCaseSource("TestFiles")]
    public void TestDocumentationExampleScripts(string docXmlFile)
    {
        // Parse the xml and extract the scripts
        // foreach script example in our doc call TestCsharpScript
    }

    void TestCsharpScript(string scriptText)
    {
        // Check for errors
        CompilerResults compilerResults = m_DomProvider.CompileAssemblyFromSource(m_CompilerParams, scriptText);
        string errors = "";
        if (compilerResults.Errors.HasErrors)
        {
            foreach (CompilerError compilerError in compilerResults.Errors)
            {
                errors += compilerError.ToString() + "\n\n";
            }
        }
        Assert.IsFalse(compilerResults.Errors.HasErrors, errors);
    }
}

And it worked! We needed to make some small changes in how we compile the examples, though, as some scripts are designed to go together as a larger example. To check for this we compiled them separately; if we found an error, we then compiled them again combined to see if that worked.
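That two-pass approach can be sketched roughly like this. This is a simplified illustration rather than the actual test code (the method name and parameters here are hypothetical); it relies on the fact that CompileAssemblyFromSource accepts multiple source strings, which makes the combined pass straightforward:

```csharp
using System.CodeDom.Compiler;

static class ExampleCompileCheck
{
    // Hypothetical sketch: compile each example on its own first; if any fail,
    // retry with all the examples from the page compiled as one assembly, in
    // case they are designed to work together (e.g. one script defines a class
    // that another uses).
    public static bool ExamplesCompile(CodeDomProvider provider, CompilerParameters options, string[] scripts)
    {
        bool allCompiledIndividually = true;
        foreach (var script in scripts)
        {
            CompilerResults results = provider.CompileAssemblyFromSource(options, script);
            if (results.Errors.HasErrors)
            {
                allCompiledIndividually = false;
                break;
            }
        }

        if (allCompiledIndividually)
            return true;

        // Second pass: all scripts as a single compilation unit.
        CompilerResults combined = provider.CompileAssemblyFromSource(options, scripts);
        return !combined.Errors.HasErrors;
    }
}
```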

Some examples are written as single lines of code which are not wrapped in a class or function. We could fix this by wrapping them in our test, but we have a rule that all examples should compile standalone (i.e. if a user copies and pastes it into a new file it should compile and work), so we count those examples as test failures.

The test was now in a state where it could be run as part of our build verification on the path to trunk. However there was one small problem: the test took 30 minutes to run. This is far too long for a test running in build verification, considering we run around 7000 builds a day.

The test was running sequentially, one script after another, but there was no reason we could not run them in parallel: the tests were independent of each other, did not need to make any calls to the Unity API, and we are only testing that they compile, not their behaviour. Introducing ThreadPool, a .NET API that can be used to execute tasks in parallel. We push the tests as individual tasks into the ThreadPool and they are executed as soon as a thread becomes available. This needs to be driven from a single function, meaning that we can’t have individual NUnit test cases for specific examples from the docs. As a result we lose the ability to run any one of the tests individually, but we gain the ability to run them all quickly.

[Test]
public void ScriptVerificationCSharp()
{
    // Setup. Start all tests running on multiple threads.
    s_ThreadEvents = new ManualResetEvent[s_DocInfo.Count];
    for (int i = 0; i < s_DocInfo.Count; ++i)
    {
        // Queue this example up for testing
        s_ThreadEvents[i] = new ManualResetEvent(false);
        ThreadPool.QueueUserWorkItem(TestDocumentationExampleScriptsThreaded, i);
    }

    // Check for errors and build the error output if required.
    bool testFailed = false;
    StringBuilder results = new StringBuilder();
    for (int i = 0; i < s_ThreadEvents.Length; ++i)
    {
        // Wait for the test to finish.
        s_ThreadEvents[i].WaitOne();
        if (s_DocInfo[i].status == TestStatus.Failed)
        {
            testFailed = true;
            GenerateFailureMessage(results, s_DocInfo[i]);
        }
    }

    // If a single item has failed then the test is considered a failure.
    Assert.IsFalse(testFailed, results.ToString());
}

public static void TestDocumentationExampleScriptsThreaded(object o)
{
    var infoIdx = (int)o;
    var info = s_DocInfo[infoIdx];
    try
    {
        TestScriptsCompile(info);
    }
    catch (Exception e)
    {
        info.status = TestStatus.Failed;
        info.testRunnerFailure = e.ToString();
    }
    finally
    {
        s_ThreadEvents[infoIdx].Set();
    }
}

This took the test time from 30 minutes to 2, which is fine for running as part of our build verification.

Since we couldn’t test individual examples with NUnit any more, we added a button to the scripting doc editor to allow developers to test the examples as they write them. The script with an error is now colored red when the test is run and error messages are displayed beneath.

When the test was first run we had 326 failures, which I whitelisted (so they could be fixed at a later date). We now have that down to 32, most of which are failures in the test runner itself, mainly due to not having access to some specific assemblies. There have been no new issues introduced, and we can rest assured that when we deprecate parts of the API the test will fail and we can then update the example to use the new API.

Overall I thought this was an interesting use of the Editor Test Runner. It does have some limitations: we only test C# examples, and I have not managed to get JS compilation working, although that won’t be an issue in the future.

Here is the full test.