applicationHost.Config Backups and Updates

on Friday, May 30, 2014

IIS’s applicationHost.config is almost never stored in version control. Yet, it’s often updated to add new sites, ARR rules, and special configurations. There are a variety of ways to update the config: IIS Manager, appcmd.exe, PowerShell/WebAdministration, and editing the file by hand.

In general, editing the .config file is pretty safe, with a low risk that an update to one website will affect a different website. But, it’s always nice to

  • have a backup
  • have an audit trail of updates

This PowerShell function makes a quick backup of the applicationHost.config file, recording who ran the backup and when it was run.

$global:WebAdministrationExt = @{}

# Used by unit tests. We run the unit tests so often, the backups can really grow in size. Most
# unit tests should turn off the backups by default; but the unit tests which actually test the
# backup functions turn it back on.
$global:WebAdministrationExt.AlwaysBackupHostConfig = $true;

<#
.SYNOPSIS
Saves a backup of an xml config file. The purpose is to ensure a backup gets
made before each update to an applicationHost.config or web.config.

Used by the Save-WebConfig function to ensure a backup gets made.
#>
Function Backup-WebConfig {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $true)]
        [string]$ConfigPath
    )
    Process {
        if($global:WebAdministrationExt.AlwaysBackupHostConfig -eq $false) {
            Write-Warning ("Global:WebAdministrationExt.AlwaysBackupHostConfig has been set to false. " +
                "Skipping backup of $ConfigPath");
            return $null;
        }

        if([System.IO.File]::Exists($ConfigPath) -eq $false) {
            throw "Backup-WebConfig: No file to read from, at path $ConfigPath, could be found."
        }

        $fileInfo = (Get-ChildItem $ConfigPath)[0];
        $basePath = Split-Path $fileInfo.FullName;
        $filename = $fileInfo.Name;
        $timestamp = Get-TimeStamp;
        $appendString = "." + $env:UserName + "." + $timestamp;

        $i = 0;
        $backupPath = Join-Path $basePath ($filename + $appendString + ".bak");
        while(Test-Path $backupPath) {
            $i++;
            $backupPath = Join-Path $basePath ($filename + $appendString + "-" + $i + ".bak");
        }

        Write-Warning "Backing up $ConfigPath to $backupPath"
        Copy-Item $ConfigPath $backupPath
        Write-Host "Backed up $ConfigPath to $backupPath"

        return $backupPath
    }
}
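The Get-TimeStamp call above refers to a helper that isn’t shown in this post. A minimal stand-in might look like this (the format string is an assumption; any filesystem-safe timestamp works):

Function Get-TimeStamp {
    # Hypothetical stand-in for the helper referenced by Backup-WebConfig.
    # Produces a filesystem-safe timestamp, e.g. 20140530-134502
    return (Get-Date).ToString("yyyyMMdd-HHmmss");
}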

Now that there is a backup function in PowerShell, we might as well round out the suite with Read and Save functions for applicationHost.config.

In the Read function I’ve chosen not to preserve whitespace. Initially I was preserving whitespace to ensure the file stayed 'readable'. But once I started adding new xml elements using PowerShell, those new elements were very unreadable and caused weird new-line issues wherever they were added. PowerShell’s xml functionality will preserve comments, and if you set the XmlWriter’s IndentChars property to something better than 2 spaces (like a tab), then the applicationHost.config file will stay very readable while avoiding the new-xml-element problem.


<#
.SYNOPSIS
Saves an xml config file. The purpose is to keep the logic for setting
up formatting to be the same way all the time.

Used to store applicationHost.config file updates.
#>
Function Save-WebConfig {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $true)]
        [xml]$WebConfig,
        [Parameter(Mandatory = $true)]
        [string]$ConfigPath
    )
    Process {
        # First backup the current config
        Backup-WebConfig $ConfigPath

        # Set up formatting
        $xwSettings = New-Object System.Xml.XmlWriterSettings;
        $xwSettings.Indent = $true;
        $xwSettings.IndentChars = " "; # could use `t
        $xwSettings.NewLineOnAttributes = $false;

        # Create an XmlWriter and save the modified XML document
        $xmlWriter = [Xml.XmlWriter]::Create($ConfigPath, $xwSettings);
        $WebConfig.Save($xmlWriter);
        $xmlWriter.Close();
    }
}

<#
.SYNOPSIS
Load an xml config file. The purpose is to keep the logic for setting
up formatting to be the same way all the time.

Used to load applicationHost.config files.
#>
Function Read-WebConfig {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $true)]
        [string]$ConfigPath
    )
    Process {
        [xml]$appHost = New-Object xml
        #$appHost.psbase.PreserveWhitespace = $true
        $appHost.Load($ConfigPath)
        return $appHost;
    }
}
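A typical round trip with the pair looks something like this (the path is illustrative):

$configPath = "C:\Windows\System32\inetsrv\config\applicationHost.config";
$appHost = Read-WebConfig $configPath;
# ... modify $appHost ...
Save-WebConfig -WebConfig $appHost -ConfigPath $configPath;  # backs up first, then writes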

There is almost always a reason to have PowerShell enter new xml elements into the applicationHost.config. Both IIS Manager and appcmd.exe have limitations that can only be overcome by hand editing the file or scripting the update in PowerShell (or .NET).

But when creating PowerShell functions to add new elements, it’s always nice to be able to unit test the code before using it on your servers. So, you can make a function which gets the location of an applicationHost.config file, and that function can be overridden during a unit test to use a dummy file. The snippet below shows an example of how the unit tests could work with serviceAutoStartProviders.


# Used by unit tests. If a value is supplied here, then it will always be returned by Get-WebConfigPath
$global:WebAdministrationExt.ApplicationHostConfigPath = "";

<#
.SYNOPSIS
Contains the logic to get the path to an applicationHost.config file.

Used to allow for a config file path to be overloaded during unit tests.
#>
Function Get-WebConfigPath {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $true)]
        [string]$ServerName
    )
    Process {
        if($global:WebAdministrationExt.ApplicationHostConfigPath -ne "") {
            return $global:WebAdministrationExt.ApplicationHostConfigPath;
        }

        $configDir = "\\$ServerName\C$\Windows\System32\inetsrv\config"
        $configPath = "$configDir\applicationHost.config"
        return $configPath;
    }
}

# Example usage
$global:WebAdministrationExt.ApplicationHostConfigPath = "C:\Modules\WebAdministrationExt\applicationHost.UnitTest.config";
$global:WebAdministrationExt.AlwaysBackupHostConfig = $false;

$providerName = Get-Random
Set-AutoStartProvider -SiteName "unittest.local.frabikam.com" -AutoStartProvider $providerName;
Get-AutoStartProvider -SiteName "unittest.local.frabikam.com" | Should Be $providerName;

$global:WebAdministrationExt.ApplicationHostConfigPath = "";
$global:WebAdministrationExt.AlwaysBackupHostConfig = $true;
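Set-AutoStartProvider and Get-AutoStartProvider themselves aren’t shown in this post. As a rough sketch (not the original implementation), the setter could compose the functions above like this; the XPath and attribute names follow the applicationHost.config schema:

Function Set-AutoStartProvider {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $true)]
        [string]$SiteName,
        [Parameter(Mandatory = $true)]
        [string]$AutoStartProvider,
        [string]$ServerName = "localhost"
    )
    Process {
        $configPath = Get-WebConfigPath $ServerName;
        $appHost = Read-WebConfig $configPath;

        # Find the site's application element and point it at the provider
        $xpath = "/configuration/system.applicationHost/sites/site[@name='$SiteName']/application";
        $app = $appHost.SelectSingleNode($xpath);
        if($app -eq $null) { throw "Site $SiteName could not be found in $configPath."; }

        $app.SetAttribute("serviceAutoStartEnabled", "true");
        $app.SetAttribute("serviceAutoStartProvider", $AutoStartProvider);

        # Save-WebConfig runs Backup-WebConfig before writing
        Save-WebConfig -WebConfig $appHost -ConfigPath $configPath;
    }
}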

PowerShellGet with Multiple Source Repositories

on Monday, May 26, 2014

The PowerShell Team added the PowerShellGet module in the May 2014 update to the v5.0 Preview. This created an official Microsoft repository for pulling in the latest modules. But it also allowed companies and teams to set up their own internal repositories. The PowerShell Team put together a post on how to set up a MyGet repository with just a few keystrokes.

The ability to have a private repository for your team is great. What would make it even better is if you could have multiple repositories that are all searched when finding and installing modules (Find-Module and Install-Module).

Internally, PowerShellGet uses NuGet to handle package management, which is a wonderful thing. It’s a great product and has been used by other projects like MyGet and Chocolatey without any problems.

However, there was one little problem. The PowerShell Team didn’t want conflicts with updates or end-user configurations of normal NuGet installations. Because of this, PowerShellGet downloads a separate installation of NuGet.exe and only allows one repository to be used at a time. That repository is defined by the variable $PSGallerySourceUri. How could it be updated to work more like ‘normal’ NuGet and handle multiple repositories?

With a little bit of updating to the internal PSGallery.psm1 file, you can now get an updated version of PowerShellGet which can handle both a NuGet.Config file and multiple repositories defined within the $PSGallerySourceUri variable.

The module can be found with:

$PSGallerySourceUri = "https://www.myget.org/F/smaglio81-psmodule/api/v2"
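After pointing $PSGallerySourceUri at that feed, the standard cmdlets should resolve against it; something like this (assuming the feed exposes the patched module under the same name):

Find-Module PowerShellGet     # locate the patched module on the feed
Install-Module PowerShellGet  # assumes the feed exposes it under this name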

I think you should be cautious about using it, as I set it up only to ask the PowerShell Team to add this functionality. I don’t really have long-term plans of maintaining it.

Source Code: https://github.com/smaglio81/powershellget


Using PowerShellGet on Win7

on Friday, May 23, 2014

The Windows Management Framework 5.0 Preview May 2014 contains a new module, PowerShellGet. The management framework requires at least Windows 8.1, but the module itself only has a PowerShell requirement of 3.0. So, it can run successfully on Windows 7.

If you have access to Windows Azure, then you have a quick and easy way to get the module without the problem of upgrading to Windows 8.1. Using Windows Azure, you can create a quick Windows 8.1 machine using the Visual Studio 2013 Update 2 image.


Once the machine is up and running, you can connect with Remote Desktop. After installing WMF 5.0, the virtual machine will have the module under C:\Windows\system32\WindowsPowerShell\v1.0\Modules\PowerShellGet. Copy that folder down to your local Windows 7 installation and you should be ready to use it.
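If you’d rather script the copy than drag the folder across a Remote Desktop session, it could look like this (the VM name is a placeholder):

# Run on the Windows 7 machine; replace azure-vm with your virtual machine's name.
$source = "\\azure-vm\C$\Windows\System32\WindowsPowerShell\v1.0\Modules\PowerShellGet";
$dest   = "C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PowerShellGet";
Copy-Item $source $dest -Recurse;
Import-Module PowerShellGet;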


Preload a WCF svc before adding to a WebFarm

on Sunday, May 18, 2014

WCF services hosted in IIS have a first request processing overhead. This is usually just a couple hundred milliseconds, but that can be enough to cause dangerous lag spikes and failed requests when the service is being added to a web farm already under load.

It would seem like using WCF’s serviceAutoStartProvider functionality would prevent the ‘first request’ overhead, but it only lessens it. To truly prevent the overhead from occurring you need to have a request fully processed by the service. Application Initialization is an option, but I haven’t used it in conjunction with a serviceAutoStartProvider.

Instead, I used PowerShell to create a function which would send a request to the service and verify the response. If the response was incorrect or in error, then the function would throw an exception. This approach has some added benefits:

  • It can be reused by Operations to check the health and status of production systems
  • It can be used by Developers and Operations to inspect and test individual application servers when issues are noticed
  • It can be used to test the health of a system before being added back into a web farm, in order to ensure that a bad deployment isn’t put into production

To do this I used the built-in PowerShell cmdlet New-WebServiceProxy, in a script similar to this one:

Function Test-BrokerService {
    Param(
        [Parameter(Mandatory=$true)]
        [string]$Environment,
        [Parameter(Mandatory=$true)]
        [string]$BrokerName,
        [string]$DnsName = ""
    )
    Process {
        # Get environment info
        $envInfo = Get-BrokerEnvironmentVariables $Environment;

        # Setup dnsName
        if($DnsName -eq "") { $DnsName = $envInfo.dnsName; }

        $uri = "https://" + $DnsName + "/broker.svc";
        $proxy = New-WebServiceProxy -Uri $uri

        Write-Warning "Testing $BrokerName on $uri ..."
        # $BrokerName is lowercased, so the case labels must be lowercase to match
        switch($BrokerName.ToLower()) {
            "firstbroker" {
                # ...
            }
            default {
                throw ("Broker " + $BrokerName + " is unknown. No test could be performed on " + $uri + ".");
            }
        }

        Write-Host "Successfully tested $BrokerName on $uri"
        $proxy.Dispose();

        return $true;
    }
}

I then set up a test system which had:

  • A 2008 R2 proxy server using WFF/ARR/UrlRewrite (Proxy)
  • Two 2008 R2 application servers to host the WCF services (AppServer1 & 2)

The load test uses:

  • 20 concurrent users
  • 0 delay between requests
  • 2 minute run time
  • A deployment script which
    • Takes AppServer1 out of the farm
    • Does a code deployment
    • Places AppServer1 back in the farm, without running an initial request
    • Repeats the actions for AppServer2


The small spikes that occurred at 1:05 and 1:25 in the load test chart are when AppServer 1 & 2 were added back into the farm. In this particular test no timeouts occurred; but they have in previous tests.

After changing the deployment script to run an initial request before the server is added back into the web farm, the response timings smoothed out.
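In the deployment script, that warm-up boils down to calling the test function against the individual server and only re-adding it on success (the parameter values are illustrative):

# Warm up AppServer1 directly, bypassing the farm. Test-BrokerService throws on
# a bad response, which aborts the deployment before the server rejoins the farm.
Test-BrokerService -Environment "Test" -BrokerName "firstbroker" -DnsName "appserver1.local.frabikam.com";
# ... then re-enable the server in the ARR web farm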


Deploying PowerShell Modules

on Wednesday, May 14, 2014

On MSDN there is an article on how to install PowerShell modules onto systems. One of the subsections is about Installing Multiple Versions of a Module. The basic idea is to continue using PowerShell’s autoload feature, while using a version number within the PowerShell descriptor file (.psd1 or manifest file) as a selector.

This creates a repeated hierarchy of directories, each hierarchy starting with a versioned folder name. Their example diagram is:

C:\Program Files
  Fabrikam Manager
    Fabrikam8
      Fabrikam
        Fabrikam.psd1 (module manifest: ModuleVersion = "8.0")
        Fabrikam.dll (module assembly)
    Fabrikam9
      Fabrikam
        Fabrikam.psd1 (module manifest: ModuleVersion = "9.0")
        Fabrikam.dll (module assembly)

Each of the versioned directories is then added to the PSModulePath environment variable. Their example is:

$p = [Environment]::GetEnvironmentVariable("PSModulePath")
$p += ";C:\Program Files\Fabrikam\Fabrikam8;C:\Program Files\Fabrikam\Fabrikam9"
[Environment]::SetEnvironmentVariable("PSModulePath",$p)

Environment Variable Length Problems

Unfortunately, when extending the value of an environment variable, you run the risk of hitting the 8191 character limit. That can happen pretty quickly, especially with frequent deployments.

One solution is to tightly control the rules by which a module will be updated, versioned, and deployed. But this removes a lot of flexibility and usually isn’t ideal. A great thing about PowerShell is how quickly the language can be used to get tasks done. Limiting its ability to be fluid and updateable doesn’t seem to fit with that design.

Potentially, the environment variable character limit problem could be alleviated by using the nested environment variable trick. (Naming the nested variables so they sort alphabetically before PSModulePath seems to be the best approach, since they need to be defined before PSModulePath is expanded.) An example might be (written in shorthand):

%PS1_F8% = C:\Program Files\Fabrikam Manager\Fabrikam8
%PS1_F9% = C:\Program Files\Fabrikam Manager\Fabrikam9
%PS1_F% = %PS1_F8%;%PS1_F9%
%PSModulePath% = %PSModulePath%;%PS1_F%
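One wrinkle: nested variables only expand when the value is stored as REG_EXPAND_SZ, and as far as I can tell [Environment]::SetEnvironmentVariable writes plain REG_SZ. Writing the registry values directly gets around that; a sketch:

# Machine-level environment variables live under this registry key.
$envKey = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment";

# ExpandString = REG_EXPAND_SZ, so %PS1_F8% and %PS1_F9% get expanded on read.
New-ItemProperty -Path $envKey -Name "PS1_F8" -PropertyType ExpandString -Force `
    -Value "C:\Program Files\Fabrikam Manager\Fabrikam8" | Out-Null;
New-ItemProperty -Path $envKey -Name "PS1_F" -PropertyType ExpandString -Force `
    -Value "%PS1_F8%;%PS1_F9%" | Out-Null;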

Deploying Multiple Versions

All of these variables can be difficult to set up by hand. It would probably be best to have guidelines or standards at your company for module development. Some good ones up front would be:

  • Determine a source control system for module development
  • Agree to the usage of a .psd1 file for each module
    • And, agree that a version number must be included. If the module is updated then the version needs to be updated as well. (In C# development, a build number is often attached to a specific build of a dll; but with PowerShell you probably don’t want the version number to contain the build number. More below.)
  • Determine the default module installation location to deploy to on target machines.
  • Have the deployment system inspect the .psd1 file for version information and construct the deployment path on target machines (see the sketch after this list).
    • (Continuation on version number, from above:) If a machine already has a path with a matching version number, then that machine shouldn’t be updated. This is what requires development on the modules to always go hand-in-hand with updating the version number.
  • Have the deployment system update the machine’s environment variables with the deployments.
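The version-inspection and path-construction steps could be scripted along these lines (a sketch, reusing the Fabrikam example from above):

# Read the ModuleVersion out of the manifest and build a versioned deploy path.
$manifest = Test-ModuleManifest -Path ".\Fabrikam\Fabrikam.psd1";
$version = $manifest.Version.ToString();
$deployPath = "C:\Program Files\Fabrikam Manager\Fabrikam$version\Fabrikam";

if(Test-Path $deployPath) {
    # A matching version is already deployed; leave the machine alone.
    Write-Warning "Version $version already deployed; skipping.";
} else {
    Copy-Item ".\Fabrikam" $deployPath -Recurse;
    # ... then append the new root to PSModulePath, as shown earlier
}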

Pushing the Problem Down The Road

A solution like this doesn’t seem to fix the real problem: the need to add multiple versions of a module to machines without conflict while keeping them easily discoverable.

The recommendations from the PowerShell team are a good solution, but it looks like there is always room for improvement.

 

Refactoring PowerShell Modules into Scripts

on Friday, May 9, 2014

When writing a module, the code can pile up pretty quickly. Once you get to a certain point, it becomes unwieldy to find function definitions and to “Go to definition” of a function. Especially when you add in doc comments.

This can be helped slightly by the PowerShell Tools for Visual Studio. When developing in Visual Studio there is a dropdown of all function names within a file, and they are listed alphabetically. There is also an add-on for the PowerShell ISE, FunctionExplorer, but it is pretty unstable.

One thing that C# developers have done for a while is break out large files into multiple smaller files, grouping the contents of each file by a specific area. This can also ‘kinda’ be done in PowerShell, with the help of this trick to get the current script directory:

$root = Split-Path $MyInvocation.MyCommand.Path -Parent

With that line at the top of the module file (.psm1), you can then start adding normal script files (.ps1) to the module definition. This allows you to break apart a single module into multiple script files.

For example, you could have a Car module:


And it can be refactored by the different groupings of functions, like wheels, doors, or engine.


These files can all be loaded by the main Car.psm1 module, using the $root pathing. This will ensure that no matter where the module is imported from, the files that are next to it in the directory get loaded.
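A sketch of what Car.psm1 might contain (the .ps1 file names mirror the groupings above and are illustrative):

# Car.psm1 -- dot-source the sibling script files so their functions become
# part of the module, no matter where the module is imported from.
$root = Split-Path $MyInvocation.MyCommand.Path -Parent

. (Join-Path $root "Wheels.ps1")
. (Join-Path $root "Doors.ps1")
. (Join-Path $root "Engine.ps1")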


This also helps separate the unit tests into smaller test groups, making it easier to debug individual sections of a module. (You may notice that each test script has the $root variable defined at the top. This ensures that each script has the variable available if it’s run on its own.)


Or debug the entire module at once.


PowerShell Tools for Visual Studio

on Friday, May 2, 2014

Last week I tried out PowerShell Tools for Visual Studio by Adam Driscoll. I liked the features a lot, but had some difficulty getting used to not having the instant feedback from a command window.

Good news on the Command Window: it looks like Mr. Driscoll is implementing that feature right now. He added a REPL window to the GitHub repository just a few days ago. (I was also surprised to see that the code is copyrighted by Microsoft. I thought Mr. Driscoll worked for Dell/PowerGUI?)

This week I had the opportunity to try PowerShell with TFS 2013 source control. This brought me right back to Visual Studio to handle the check-ins and check-outs. So I spent a lot of this week with PowerShell in VS2013.

Since I normally work in VS2013, continuing to use it as my primary IDE felt very natural. Some features that struck me were:

  • PowerShell projects in the Solution Explorer
    • Having all your files in one easy-to-view place, where you can quickly pull them up to edit, is very useful.
  • Having keyboard shortcuts that I’m used to
    • The biggest one of these was multi-line comment and uncomment. I am very used to Ctrl+K,Ctrl+C to comment and its twin, Ctrl+K,Ctrl+U, to uncomment; when I was in the ISE I found myself constantly trying to use that shortcut.
  • Built-In Source Control Integration
  • IntelliSense can be Very Unresponsive
    • There can be a 2 to 5 second delay before the IntelliSense menu shows, and that delay is on the main UI thread. So, VS2013 can become unresponsive during that time period.
    • It’s actually gotten so bad that I fear typing $.
  • Debugging was missing in-depth variable inspection
    • The ability to expand a variable, especially an XmlDocument, wasn’t available and was very missed.
    • Since the Command Window/REPL isn’t implemented yet, there was no way to dig into a variable’s inner values with a command line.

Using PowerShell ISE like a Browser

I’ve done a fair amount of web development and have become comfortable with using two applications to code in.

  • Visual Studio: Used to write the code
  • Web Browser: Used to execute and somewhat debug the code

So, I found it really comfortable to use the PowerShell ISE along with VS2013:

  • Visual Studio: Used to write the Module code and handle TFS check-ins.
  • PowerShell ISE: Used to write Scripts, Unit Tests, and Debug.

This felt very familiar to me and helped differentiate what type of code I was writing. It also helped me figure out how much time I should spend on a particular type of code. If I was writing code for a module, then I could spend the extra effort to write unit tests and ensure its stability. If I was writing a script (which a unit test kinda is), then the goal was just to get the job done.


Potential Issues

Visual Studio 2012 & “Server Workspaces”

Some of our team members have set up their local workspaces using Visual Studio 2012. It looks like VS 2012 doesn’t have the ability to handle “Local workspaces” (at least I got some error pop-ups [no screenshots, sorry]). This could mean that all the files pulled down in a workspace created with VS 2012 will be in a “Server workspace”. And in server workspaces, all the files are read-only when checked in. I would imagine that creates a lot of consternation when trying to pull up a file in the PowerShell ISE to make a quick edit.

VS2013 has a default workspace type of “Local”, which doesn’t use the Read Only flag on files.

PowerShell Tools VSIX & Multiple Domain Accounts

Some of our team members have multiple domain accounts. The second account is for doing SysAdmin work, like updating Production servers.

It looks like the PowerShell Tools for Visual Studio VSIX installs into an individual user folder. So, if you have two domain accounts, you will need to install it under each account. Once installed for both accounts, it works just fine.

PowerShell ISE Add-Ons

I also tried a few add-ons for the PowerShell ISE, but every one that I tried just made the ISE unstable and prone to crashing. So, I removed them from my ISE. If you’re interested, here is a list of some interesting ones:

(For a lot of these, you have to hand-edit your PowerShell profile to get them to load in the ISE; see the sketch below.)
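The hand edit usually amounts to a couple of lines in the ISE profile (Microsoft.PowerShellISE_profile.ps1); the DLL path and control type here are illustrative:

# Load the add-on assembly and surface it as a vertical pane in the ISE.
Add-Type -Path "C:\ISEAddons\FunctionExplorer.dll"
$psISE.CurrentPowerShellTab.VerticalAddOnTools.Add(
    "Function Explorer",                        # display name in the Add-ons menu
    [FunctionExplorer.FunctionExplorerControl], # add-on control type (illustrative)
    $true)                                      # show the pane immediately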


  • VariableExplorer: Displays the full variable list currently available in the runtime.
  • FunctionExplorer: Displays a list of all function definitions in a file (very useful!)
  • CommentSelectedLines: Crazy unstable! But when it works, it’s fantastic. You can set up which keyboard shortcuts you want to bind to by editing the add-on’s .ps1 file. And, it adds the ability to save your ISE state when you close it down.
  • Script Browser: I actually didn’t find this useful, but it’s worth noting because it’s made by Microsoft and it was stable.

 

PSGet

I just wanted to remind myself that PSGet has a good-sized directory of useful and up-to-date modules.

