Writing a simple C# desktop app has a pretty convenient workflow. After installing Visual Studio, you create a project from the readily available templates. You add some NuGet package dependencies. What comes next is writing C# code and running your app by pressing F5 every now and then. You don’t even have to touch the command line or set up pre-build or post-build tasks.

Working on ever-growing applications alters the scene. You have to get your hands dirty. Not every dependency is a shiny NuGet package, and you also have dependencies written in totally different languages. As brilliant ideas and new requirements surface on and off, you have to be flexible. Let’s say you have to ship all the templates and fonts in the installer, but you don’t want to commit that extra 200 MB into your GitHub repository, do you?

Let’s take Prezi for Windows as an example. It’s a cloud-based desktop application and a mix of WPF and web technologies. It utilizes the Chromium Embedded Framework to embed a browser into the WPF application, which the prezi editor runs in. Why do we need all the hassle? The prezi editor is written in AS3/Flash. The application itself is a great mix of different technology stacks. We also support offline prezi document editing, which is why we ship fonts and many other assets in the installer.

We kept declaring more and more pre- and post-build steps in the project. Copy this, move that, execute {place-your-favourite-exe-here} as post-processing. We incorporated automated tests and an auto-updater framework. Other teams were knocking on our door asking for a way to have a nightly build against their dependency.

The lesson is that customers, dependent teams, and product teams always have big dreams and expectations for applications, so change is inevitable. Of the many challenges this imposes, I would like to deal with the following two:

How long does it take to build your application on a clean machine?

How many manual steps are required to build and maintain your application?

How long does it take to build your application on a clean machine?

Let me be Captain Obvious here. The longer the build takes, the less you can get done. You will be more tempted to check up on {place-your-favourite-time-drainer-website} while the app is building.

Adding more and more pre- and post-build steps to your build will enormously slow down your regular press-F5 kind of workflow. For example, it’s not worth checking and pulling down the new fonts on every local build. Neither is it worth building your application against the latest version of its dependencies every time. You don’t want to download the internet over and over again.

By and large, these operations should be part of setting up or updating your workspace and shouldn’t run on every “F5”.

How many manual steps are required to build and maintain your application?

Boring manual steps are the easiest to screw up, and sometimes the mistakes go unnoticed, causing headaches later on. For example, if you have an x86 application, you can’t just tick a checkbox to allow your application to use more than 2 GB of memory. You have to run the EditBin tool to work around the problem. If you do this by hand for every release, you’ll inevitably forget it every now and then.

```powershell
# Running the EditBin tool
$TargetPath = "C:\path\to\the\app.exe"
EditBin "$TargetPath" /LARGEADDRESSAWARE
```

Scripts to the rescue

Scripts can be the answer to both questions. You can create scripts for initializing your workspace every now and then, so you don’t do anything time-consuming while you’re building your app. Usually, these scripts download files from various sources and organize them in some way. Furthermore, scripts allow you to automate boring manual tasks like digitally signing the executables or releasing a new version.

When I first started fiddling with scripting, I felt way out of my comfort zone. The lack of IntelliSense and the limited syntax highlighting in different editors are just the tip of the iceberg. The biggest problem for me was the absence of a type system, or a proper compiler, which could point out my mistakes in time. I felt alone, in the corner of a dark room.

Over time, I adjusted and started to respect the scripts, and I felt quite accomplished after automating something boring. However, the more scripts I wrote, the more I drifted towards mayhem. My scripts were coming back to bite me.

When all hell breaks loose

It’s Friday afternoon and you’ve been working hard all week to push out a release. You have a long week behind you. You’re exhausted. The application is polished and ready for release. You have some scripts responsible for the process. You hit the big red button and start packing. Time to go home. The only problem is that the script failed. As you can’t leave it like that, you check the console, and all you see are errors that make no sense at all.

Your heart rate starts increasing. You can’t identify the exact problem, and as a hopeless move, you start adding WriteLines to the release script. After modifying and re-running the script a few times, you manage to release the application, although it’s 8 PM and your kids are crying for you at home.

This happened to me several times. My team and I had to suffer because of my lame scripts over and over again. I solved one problem and created two in return. Poorly written scripts are daunting and can cause serious problems. Just like duct tape, flaky scripts provide temporary redemption followed by unexpected failures.

All I can say is that you should treat your scripts as your code.

How to live a long and “scriptful” life

I can thank the late-night debugging sessions for at least one thing. Over time I’ve picked up some practices that help me produce reasonably well-behaved scripts. Let’s go through them.

Assert your hypotheses

What if a command returns JSON in different formats?

Is it acceptable if a command fails in a script, but the script continues executing the next commands? Did it really copy the files to the server? Are the items in the returned list in the right order? Are you sure you know how the given command-line tools operate?

In languages like C#, most of your hypotheses are asserted by the compiler. When it comes to scripts, you have to validate the results of commands thoroughly.

For example, during the release process, I check whether, by any chance, some files would be overwritten.

```powershell
if (ExistsOnS3 "releases" "Setup_1.0.0.0.exe") {
    throw "You cannot overwrite already released version=1.0.0.0"
}
MoveInS3 "rtm/Setup_1.0.0.0.exe" "releases/Setup_1.0.0.0.exe"
```

I’m also in favor of checking the input parameters of scripts. It can be really helpful when you start tweaking your script files and accidentally call a given script with the wrong parameters.

```powershell
AssertNotNullOrEmpty $version "Version is null or empty"
AssertNotNullOrEmpty $gitSha "Git sha is null or empty"
AssertNotNullOrEmpty $awsConfigPath "awsConfigPath is null or empty"
AssertFileExists $awsConfigPath
```

My point is that it’s better to fail early than to let the scripts run and do harm. Assertions also help you identify the source of the problem.
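In case you’re wondering what such assertion helpers might look like, here’s a minimal sketch. Only the names come from the snippet above; the bodies are illustrative, and your own versions may differ:

```powershell
# Illustrative implementations of the assertion helpers
function AssertNotNullOrEmpty($value, $message)
{
    if ([string]::IsNullOrEmpty($value)) {
        throw "ASSERTION FAILED: $message"
    }
}

function AssertFileExists($filePath)
{
    if (!(Test-Path $filePath)) {
        throw "ASSERTION FAILED: file does not exist: $filePath"
    }
}
```

A `throw` stops the script right there (unless caught), which is exactly the fail-early behavior we want.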

Try to break it, before you release it

Which situation would you prefer to be in? Trying to break the script before committing it to your repository, or walking away from thorough testing and risking a potential debugging session, let’s say, half a year later?

Sometimes you are urged to get things done _ASAP_. For example, you have to release a critical security patch, but you’re held back because your scripts are failing to run for whatever reason. How much effort should be put into testing a given script? You should answer one simple question.

Is it critical if I can’t run the XYZ script at any given time?

It’s always problematic if you can’t release a security fix. However, it’s not a nightmare if the translated resource strings are not auto-magically committed into your repository on weekends.

Having false assumptions about commands is a recipe for disaster. Let me tell you the story of how the introduction of version 6.10.0.0 broke our release process. After every release, we tag our repository with the released version number. During our build process, we obtain the latest released version from the git tags and use that version to calculate the next version number. This is how we obtained the latest released version number.

```powershell
$tags = git tag | Where { $_ -match '(\d+)\.(\d+)\.(\d+)\.(\d+)' }
$latestVersion = $tags[-1]
```

After releasing 6.10.0.0, our installer scripts started to fail. We realized that the git tag command doesn’t work as we expected. When I issued the git tag command, this is what I saw, and I laughed (some versions left out):

```
PS D:\repos\awesome-project> git tag
6.0.0.0
6.1.0.0
6.10.0.0
6.2.0.0
6.9.0.0
```

We wrongly assumed that the tags were sorted by the date added. It turned out they are sorted alphabetically. The fix was an easy one. I used the built-in System.Version type to get the list sorted the right way.

```powershell
$tags = git tag
$versions = $tags | Where { $_ -match '(\d+)\.(\d+)\.(\d+)\.(\d+)' }
$sortedVersions = $versions | % { [System.Version]$_ } | sort
$latestVersion = $sortedVersions[-1]
```

Breadcrumbs

You will experience errors, I’m sure. When everything gets out of hand, you should at least be able to point to the exact problem. I sprinkle breadcrumbs in my scripts in the form of WriteLines and assertions, so when the shit hits the fan, I know where to start digging. Some examples:

```powershell
# Instead of being general
echo "Uploading the installer..."

# Prefer to be more contextual
echo "Uploading the installer"
echo "To https://cdn.somewhere.com/Setup_5.0.0.0.exe"
echo "Installer hash $installerHash"
echo "Installer size $installerSize"
echo "Installer file path $installerFilePath"
```

```powershell
# Assertions are your buddies
# It's better to let your scripts fail than to risk them doing some damage
# My favorite assertion is for checking the last exit code of a command
$workingDirectory = (Get-Item .).FullName

function ExitIfLastExitCodeIsNotZero
{
    # $LastExitCode is populated by PowerShell
    # If (!$?) ... could be used as well, but it looks odd
    If ($LastExitCode -ne 0)
    {
        Get-PSCallStack # provide some context for this call
        cd $workingDirectory
        throw "Last exit code was not 0 - I need human intervention"
    }
}

# This is how it's used
doSomeSeriousStuff.exe
ExitIfLastExitCodeIsNotZero
```

Isolate your scripts

Have small LEGO parts. Not only does this allow you to test them easily, but you can also try them out one by one. It takes much more time to run the whole set of your scripts than to test the isolated parts. It’s also worth mentioning that big files tend to grow over time. Law of physics, or whatnot.

Follow the DRY principle

Don’t (always) repeat yourself. Even though you can leverage the power of REGEX when you want to replace something across your scripts, I’d rather suggest you follow the DRY principle to some degree.

For example, when you have scripts responsible for generating installers, you should have a separate <pick one: JSON/INI/PROPERTIES/PS1> file containing the name of the product, the name of the company, the current version, etc.
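Such a shared file can be as simple as a PS1 that the installer scripts dot-source. The file name and values here are made up for illustration:

```powershell
# config.ps1 - shared product metadata (illustrative values)
$ProductName = "Awesome Project"
$CompanyName = "Awesome Company Ltd."
$CurrentVersion = "6.10.0.0"
```

Every script that needs the metadata can then pull it in with `. .\config.ps1`, so a rename or version bump happens in exactly one place.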

I also tend to keep a biggish common.ps1 script around, which contains utilities used by different scripts. Let’s see a few examples:

```powershell
function CleanUpFolder($folderPath)
{
    if (Test-Path $folderPath) {
        rm -r $folderPath
    }
    mkdir $folderPath
}

function GetS3FileMetaData($path, $bucket)
{
    WriteLine "Getting metadata for bucket: $bucket path: $path"

    $metaDataRaw = ((aws s3api head-object --bucket "$bucket" --key "$path") | Out-String)
    ExitIfLastExitCodeIsNotZero
    $metaDataObject = $metaDataRaw | ConvertFrom-Json
    return $metaDataObject
}
```

Avoid having hierarchical dependencies

When someone depends on your piece of code in any way, it’s more challenging for you and other fellow developers to modify it. If you work at a big company, you might rely on other teams’ work. The more popular your code is, the higher the chances of stepping on each other’s toes. My problem with cross-team dependencies is that even small changes can cause havoc at other teams’ ends. This is when copy-paste comes to the rescue. #copypastepattern

Have full control over the external tools you use

It’s important to have the same tools both on your developer machine and on your production machine / Continuous Integration slave machine. In one of the projects, I started centralizing the tools we use in our build and release scripts. It’s good for several reasons. For example, if something changes in your CI infrastructure, you can instantly point out that your scripts are failing because some tools haven’t been installed on the new CI machines. Different software versions can cause trouble as well.

```powershell
$NUGET = "C:\Program Files (x86)\NuGet\nuget.exe"

if (!(Test-Path $NUGET)) {
    throw "NUGET IS NOT INSTALLED: $NUGET"
}

# Usage
&$NUGET restore
```

Write tests

“If wishes were fishes, we’d all cast nets.” — Frank Herbert

Learn about tools and the scripting environment

Some tools are extremely easy to start with, which makes you productive and also gives you a false illusion of experience and expertise. Every tool and environment has its own quirks. Even if you have to learn them the hard way, invest in mastering a skill if you feel it’ll be used extensively.

PowerShell is my language of choice, although it required some getting used to. It certainly had some quirks. The most painful one for me was learning PowerShell’s return semantics with my C# mindset. It actually caused some weird bugs in my scripts. Not once, not twice, …
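To illustrate the quirk (this snippet is mine, not one of the original bugs): a PowerShell function returns everything it writes to the output stream, not just the argument of `return`.

```powershell
function Get-BuildNumber
{
    # With a C# mindset you'd expect this function to return 42.
    # But the echoed string also lands in the output stream...
    echo "Calculating build number"
    return 42
}

$result = Get-BuildNumber
# $result is now a 2-element array: "Calculating build number" and 42
```

The usual fixes are to pipe unwanted output to `Out-Null` or to use `Write-Host` for purely informational messages.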

Scripts should be able to run both locally and on your CI

For example, in Jenkins you can create jobs which run arbitrary scripts. In these cases, the scripts are not stored in your Git repository but somewhere else. I like to keep my scripts in my application’s repository. That way, changes made to the scripts will likely be reviewed by others, as they will see the pull request. In addition, as you have the script files in your repository, you can easily try them out in your local development environment.

Make it fast — take advantage of caching

The more time your job requires to run to completion, the higher the chances that some CI error will happen (node disconnected, timeout, …). You should also not download the same resources over and over again.
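A hypothetical helper sketches the idea: skip the download entirely when a cached copy already exists. The function name and logic are illustrative, not from our actual scripts:

```powershell
# Illustrative caching helper - download a resource only once
function Get-CachedFile($url, $cachePath)
{
    if (!(Test-Path $cachePath)) {
        Invoke-WebRequest -Uri $url -OutFile $cachePath
    }
    return $cachePath
}
```

You can go further and compare a hash against an expected value, so a corrupted or outdated cache entry triggers a fresh download instead of a broken build.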

Write scripts in a proper text editor

For example, PowerShell support is incredible in VS Code or in the PowerShell ISE, as they provide basic IntelliSense-like support. Notepad is awesome, but you can get rid of some very basic typos by using a decent editor.

Follow the Unix philosophy

Probably I should have started with the Unix philosophy, but people might learn more from their mistakes than from books, just like me. Keep iterating.

I’m confused, should I use scripts or not?

Yes. No. Read the small print.

By using scripts, you can get rid of manual, boring, and repetitive work. I wasn’t used to scripting, but I felt productive pretty much immediately after I started writing scripts.

Although every now and then I had to face hard debugging sessions, it felt entertaining and I was the hero who fixed the script. I replaced the boring work with more entertaining work. However, I had to come to the realization that I wasn’t able to do less by scripting overall. Actually, in some cases, it generated an extra amount of shit to take care of.

Hopefully, I managed to identify a few key practices over time, which now allow me to write painless scripts.