Unusual Unit testing (part 2) — Powershell scripts with Pester

Marck Oemar
5 min read · Aug 19, 2020

Windows system administration and unit testing?

This is part 2 of the series ‘Unusual Unit testing’ (to me at least).

In part 1 we created unit tests for Bash scripts with Bats; in this article I’m going to attempt to achieve the same for Windows and PowerShell.

If you don’t have a Windows environment, don’t worry: we don’t actually need a Windows system to unit test PowerShell scripts. We just need Docker Engine installed so we can run our unit tests in a Docker container.

Pester

Pester is an awesome test and mock framework for PowerShell, with a bunch of testing features like assertions and mocking. Pester can test functions, cmdlets, modules and scripts.

When it comes to unit testing, we’re trying to isolate code and make it testable. Often there is interaction between internal and external code parts, and we’ll need to set up a test double to control that behaviour.

Let’s get started — preparation

The example code for this exercise can be found here.

We’re using a Docker container as the testing environment, specifically the microsoft/powershell image, which is a great way to interact with PowerShell.

Download the image, execute the container and activate Pester:

docker run --rm -it -w /code -v "${PWD}:/code" microsoft/powershell
Install-Module Pester -Force
Import-Module Pester
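Once the module is imported, a quick sanity check confirms Pester is actually loaded before you write any tests (the version number will differ depending on when you pull the image):

```powershell
# Show the loaded Pester module and its version
Get-Module Pester | Select-Object Name, Version
```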

Now, let’s write some tests!

Test 1 — assert the output of a function

Let’s create an example Powershell file with a basic function:

# example1.ps1
function Func1 {
param()
Write-Output "Func1 is working!"
}

We want to test that this function writes the output Func1 is working!, so we’ll create a test file that follows Pester’s naming convention: the test file’s name ends in .Tests.ps1:

# example1.Tests.ps1
Describe 'Testing Func1' {
BeforeAll {
. $PSCommandPath.Replace('.Tests.ps1','.ps1')
}
It 'outputs a string' {
Func1 | Should -Be "Func1 is working!"
}
}

The first thing we do is define a Describe block, a construct that describes what we want to do: test the function Func1.

In the Describe block we can then define multiple tests, but first we set up the test using BeforeAll. Our test needs the function Func1 so that we can execute and test it, so we dot-source the source file example1.ps1.

Now we can define the test, using the It construct.

It validates the results of a test inside of a Describe or Context block. The convention is to assert a single expectation for each It block. The code inside of the It block should throw a terminating error if the expectation of the test is not met and thus cause the test to fail.

Our It statement asserts that Func1 outputs a string. We do that by simply executing Func1 and piping its output to the Pester assertion command Should with the -Be operator.

Let’s run this test with the command ‘Invoke-Pester’:
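If you want Pester to target just this file and print every assertion as it runs, you can pass a path and ask for more verbose output (the -Output parameter and its Detailed value are a Pester 5 feature; adjust if you’re on an older version):

```powershell
# Run only example1.Tests.ps1 and show each It block's result
Invoke-Pester -Path ./example1.Tests.ps1 -Output Detailed
```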

Great! But what if the test fails? We can add a negative test assertion:

It 'outputs an unexpected string' {
Func1 | Should -Be "Func1 is workinggg!"
}

and run both tests:

We now get a beautiful assertion error with a detailed reason why the second assertion failed.

Now, let’s make things a bit more interesting.

Test 2 — test exception handling with some mocking

Consider the following function:

# example2.ps1
function CreateEC2KeyPair($KeyName,$Ec2KeyPairBase64,$Region)
{
try
{
Import-EC2KeyPair -KeyName $KeyName `
-PublicKeyMaterial $Ec2KeyPairBase64 `
-Region $Region
Write-Output "Import Success"
}
catch
{
throw "Import-EC2KeyPair gave ERROR"
}
}

The function attempts to create an AWS EC2 key pair using the cmdlet Import-EC2KeyPair from the AWS SDK, with try-catch to customise the exception.

We have a challenge now, because we don’t want the test to actually execute Import-EC2KeyPair, as this might make a real call to the AWS API, causing a side effect. We can solve this by using a test double, specifically a Mock.

Let’s define a new test file:

# example2.Tests.ps1
Describe 'Test function CreateEC2KeyPair' {
BeforeAll {
. $PSCommandPath.Replace('.Tests.ps1','.ps1')
function Import-EC2KeyPair {
Write-Output "Import-EC2KeyPair Dummy cmdlet"
}
}

It 'test try when AWS SDK Import-EC2KeyPair is successful' {
Mock 'Import-EC2KeyPair' {}
CreateEC2KeyPair | Should -Be "Import Success"
Assert-MockCalled Import-EC2KeyPair
}
}

Looking at the It block, we now have a Mock defined:

Pester provides a set of Mocking functions making it easy to fake dependencies and also to verify behavior. Using these mocking functions can allow you to “shim” a data layer or mock other complex functions that already have their own tests.

So we’re mocking the behaviour of Import-EC2KeyPair: it simply won’t do anything, but it also won’t throw an exception. One caveat though: we can only mock existing cmdlets or commands, which is a problem because we don’t want the AWS SDK to even be loaded in the test. A simple work-around is to define a dummy function with the same name, as you can see in BeforeAll.
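This dummy-then-mock pattern works for any external cmdlet you want to keep out of the test run. A minimal, self-contained sketch (Get-ExternalThing is an illustrative name, not a real cmdlet):

```powershell
# Stand-in so Pester has a command to mock, without loading the real module
function Get-ExternalThing { }

Describe 'Mocking a cmdlet that is not installed' {
    It 'uses the mock instead of the dummy' {
        Mock 'Get-ExternalThing' { 'mocked' }
        Get-ExternalThing | Should -Be 'mocked'
        Assert-MockCalled Get-ExternalThing
    }
}
```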

Now we have two assertions: the first executes the CreateEC2KeyPair function and asserts its output, as we did in the previous test; the second makes sure we actually hit the Mock.

Let’s run the test:

Yay! But what would happen if we forgot to mock Import-EC2KeyPair?

Now this is interesting, because we’re hitting the dummy function. The output now consists of two strings instead of the one we expected, and the test fails.

Let’s add a final It assertion that tests the catch block. To force that behaviour in our code, we make the Mock of Import-EC2KeyPair throw an exception, which should be caught by the catch block.

It 'test catch when AWS SDK Import-EC2KeyPair has error' {
Mock 'Import-EC2KeyPair' {
Throw 'I am throwing AWS SDK exception'
}
{ CreateEC2KeyPair } | Should -Throw 'Import-EC2KeyPair gave ERROR'
Assert-MockCalled Import-EC2KeyPair
}

This time we’re adding curly braces around the execution of CreateEC2KeyPair , which is required by the Should -Throw assertion.
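The braces turn the call into a script block, so Should -Throw can invoke it itself and catch the terminating error; without them, the exception would be thrown while building the pipeline, before the assertion ever runs. A minimal illustration:

```powershell
# Without braces the Throw happens before Should -Throw can observe it.
# With braces, Should -Throw invokes the script block and catches the error:
{ Throw 'boom' } | Should -Throw 'boom'
```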

Now both It tests should pass:

Conclusion

I think unit testing PowerShell scripts is quite valuable, as they tend to grow and become complex. We can run the unit tests in a CI pipeline and release a new version only when all tests pass.
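As a sketch, a CI job could run the whole suite non-interactively in the same container used throughout this article (the -CI switch is a Pester 5 convenience that sets the exit code and writes test results, which is what a pipeline needs to fail the build):

```powershell
# One-shot, non-interactive test run suitable for a CI step
docker run --rm -v "${PWD}:/code" -w /code microsoft/powershell `
  pwsh -Command "Install-Module Pester -Force; Invoke-Pester -CI"
```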


Marck Oemar

DevOps coach, AWS Cloud consultant. People > Processes > Technology.