WebAdministration Not Loaded Correctly on Remote

on Friday, November 14, 2014

When making remote calls that use the WebAdministration module, you can sometimes (and inconsistently) get this error:

ERROR: Get-WebSite : Could not load file or assembly 'Microsoft.IIS.PowerShell.Framework' or one of its dependencies. The system cannot find the file specified.

It’s a really tricky error precisely because it’s inconsistent. But there is a workaround that will keep it from causing too much trouble. From the community troubleshooting that has been done, the problem seems to occur only on the first call that uses the WebAdministration module. If you wrap that call in a try/catch and retry it, subsequent calls will work correctly.

$scriptBlock = {
    Import-Module WebAdministration

    try {
        $sites = Get-WebSite
    } catch {
        # http://help.octopusdeploy.com/discussions/problems/5172-error-using-get-website-in-predeploy-because-of-filenotfoundexception
        $sites = Get-WebSite
    }
}

Invoke-Command -ScriptBlock $scriptBlock -ComputerName Remote01

PowerShell AppPool Assignment Problems

on Friday, November 7, 2014

The WebAdministration module provides a drive called IIS:. It essentially acts like a drive letter or a URI protocol. It’s really convenient and makes accessing app pool, site, and SSL binding information easy.

I recently noticed two problems with assigning values through the IIS: provider or the objects it works with:

StartMode Can’t Be Set Directly

For some reason, using Set-ItemProperty to set the startMode value directly throws an error. But, if you retrieve the appPool into a variable and set the value using an = operator, everything works fine.

# https://connect.microsoft.com/PowerShell/feedbackdetail/view/1023778/webadministration-apppool-startmode-cant-be-set-directly
ipmo webadministration

New-WebAppPool "delete.me"

Set-ItemProperty IIS:\AppPools\delete.me startMode "AlwaysRunning" # throws an error

$a = Get-Item IIS:\AppPools\delete.me
$a.startMode = "AlwaysRunning"
Set-Item IIS:\AppPools\delete.me $a # works

Here is the error that gets thrown:

Set-ItemProperty : AlwaysRunning is not a valid value for Int32.
At C:\Issue-PowershellThrowsErrorOnAppPoolStartMode.ps1:6 char:1
+ Set-ItemProperty IIS:\AppPools\delete.me startMode "AlwaysRunning" # throws an e ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-ItemProperty], Exception
    + FullyQualifiedErrorId : System.Exception,Microsoft.PowerShell.Commands.SetItemPropertyCommand

 

CPU’s resetInterval Can’t Directly Use New-TimeSpan’s Result

I think the example can show the problem better than I can describe it:

# https://connect.microsoft.com/PowerShell/feedbackdetail/view/1023785/webadministration-apppools-cpu-limit-interval-resetlimit-cant-be-set-directly
ipmo webadministration

New-WebAppPool "delete.me"

$a = Get-ItemProperty IIS:\AppPools\delete.me cpu
$a.resetInterval = New-TimeSpan -Minutes 4 # this will throw an error
Set-ItemProperty IIS:\AppPools\delete.me cpu $a

$a = Get-ItemProperty IIS:\AppPools\delete.me cpu
$k = New-TimeSpan -Minutes 4 # this will work
$a.resetInterval = $k
Set-ItemProperty IIS:\AppPools\delete.me cpu $a

Here is the error that gets thrown:

Set-ItemProperty : Specified cast is not valid.
At C:\Issue-PowershellThrowsErrorOnCpuLimitReset.ps1:8 char:1
+ Set-ItemProperty IIS:\AppPools\delete.me cpu $a
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-ItemProperty], InvalidCastException
    + FullyQualifiedErrorId : System.InvalidCastException,Microsoft.PowerShell.Commands.SetItemPropertyCommand

The links on each section correspond with bug reports for the issues, so hopefully they will get looked into.

PowerShell Wrapper for Http Namespaces

on Friday, October 31, 2014

When hosting HTTP WCF services as self-hosted Windows Services, the server needs to have the HTTP namespace reserved. The reservation allows the domain account that runs the service to set up a listener on a particular port, for a particular address.

There are some tools already available which can help in this process:

  • HTTP Namespace Manager – A nice GUI interface, which is easy to understand and set up. It also works on Server Core servers.
  • httpcfg – Windows Server 2003
  • netsh – Windows Server 2008+
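For reference, the two netsh invocations the wrapper below shells out to look like this (the URL and the account name are example values; the + sign binds the reservation to every address):

```powershell
netsh http show urlacl
netsh http add urlacl url=http://+:15110/EmployeeService user=CONTOSO\svc-employee listen=yes delegate=yes
```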

But, there are no PowerShell wrappers for these commands. So, here’s a wrapper that provides:

  • Add-HttpNamespace
  • Get-HttpNamespace
  • Get-HttpNamespaces
  • Test-HttpNamespaceExists

There’s no remove because I haven’t needed it yet. A namespace is usually associated with a particular port, and I haven’t been involved in a situation where a port needed to be reused.

<#
.SYNOPSIS
    Parses the output from netsh and turns it into PSObjects.
#>
Function Get-HttpNamespaces {
[CmdletBinding()]
[OutputType([PSObject[]])]
Param()

    # the parsing below causes a lot of errors to occur, but the results are accurate.
    # so this helps hide the errors
    $originalErrorAction = $ErrorActionPreference
    $ErrorActionPreference = 'SilentlyContinue'

    try {

        # pull the data from netsh
        $urlaclOutput = . netsh http show urlacl

        # parse the data into PSObjects
        $httpNamespaces = New-Object System.Collections.Generic.List[PSObject]
        $props = @{}
        $userProps = @{}
        $userRdy = $false
        for($i = 0; $i -lt $urlaclOutput.Count; $i++) {
            $line = $urlaclOutput[$i].Trim()

            $split = $line.Split(":", [StringSplitOptions]::RemoveEmptyEntries)

            $first = ""
            if($split.Count -gt 0) { $first = $split[0] }
        
            # line parsing
            switch($first.Trim()) {
                "Reserved URL" {
                    $props.ReservedUrl = $line.Substring(25).Trim()
                    $users = New-Object System.Collections.Generic.List[PSObject]
                }
                "User" {
                    if($userRdy) {
                        $user = New-Object PSObject -Property $userProps
                        $users.Add($user)

                        $userProps = @{}
                        $userRdy = $false
                    }

                    $userProps.User = $split[1].Trim()
                }
                "Listen" { $userProps.Listen = $split[1].Trim() }
                "Delegate" {
                    $userProps.Delegate = $split[1].Trim()
                    $userRdy = $true
                }
                "SDDL" {
                    $userProps.SDDL = $line.Substring(5).Trim()
                    $userRdy = $true
                }
                "" {
                    if($userRdy) {
                        # user
                        $user = New-Object PSObject -Property $userProps
                        $users.Add($user)

                        $userProps = @{}

                        # url
                        $props.Users = $users.ToArray()

                        $cnObj = New-Object PSObject -Property $props
                        $httpNamespaces.Add($cnObj)

                        $props = @{}

                        # reset flag
                        $userRdy = $false
                    }
                }
            }
        }
    } finally {
        $ErrorActionPreference = $originalErrorAction # revert the error action
    }

    return $httpNamespaces.ToArray()
}


<#
.SYNOPSIS
    Retrieves the namespace information for a given namespace. It will also match namespaces
    whose host name has been replaced with a + or * symbol.
#>
Function Get-HttpNamespace {
[CmdletBinding()]
[OutputType([PSObject])]
Param (
    [Parameter(Mandatory = $true)]
    [string] $HttpNamespace
)

    $httpNamespaces = Get-HttpNamespaces

    # get * and + versions of the url ready
    $starNamespace = $HttpNamespace
    $plusNamespace = $HttpNamespace
    $namespaceRegex = [regex] "http.*://(.*):.*/.*"
    if($HttpNamespace -match $namespaceRegex) {
        $hostname = $Matches[1]
        $starNamespace = $HttpNamespace.Replace($hostname, "*")
        $plusNamespace = $HttpNamespace.Replace($hostname, "+")
    }

    # sometimes the http namespaces get /'s added to the end
    $namespace = $httpNamespaces |? {
                            $_.ReservedUrl -eq $HttpNamespace `
                    -or     $_.ReservedUrl -eq ($HttpNamespace + '/') `
                    -or     $_.ReservedUrl -eq $starNamespace `
                    -or     $_.ReservedUrl -eq ($starNamespace + '/') `
                    -or     $_.ReservedUrl -eq $plusNamespace `
                    -or     $_.ReservedUrl -eq ($plusNamespace + '/')
                }

    return $namespace
}



<#
.SYNOPSIS
    Checks if a namespace already exists. It will also match if the namespace's host name has
    been replaced with a + or * symbol.
#>
Function Test-HttpNamespaceExists {
[CmdletBinding()]
[OutputType([bool])]
Param (
    [Parameter(Mandatory = $true)]
    [string] $HttpNamespace
)

    $namespace = Get-HttpNamespace $HttpNamespace

    return $namespace -ne $null
}



<#
.SYNOPSIS
    Adds a new Http Namespace. This will automatically swap out the host name for a + symbol. The
    + symbol allows the Http Namespace to bind on all NIC addresses.
#>
Function Add-HttpNamespace {
[CmdletBinding()]
[OutputType([PSObject])]
Param (
    [Parameter(Mandatory = $true)]
    [string] $HttpNamespace,
    [Parameter(Mandatory = $true)]
    [string] $DomainAccount
)

    $create = $true
    if(Test-HttpNamespaceExists $HttpNamespace) {
        # it already exists, so we may not need to create it
        $create = $false

        $namespace = Get-HttpNamespace $HttpNamespace
        # but, if the given DomainAccount doesn't exist then create it
        $user = $namespace.users |? { $_.user -eq $DomainAccount }
        if($user) {
            Write-Warning "NET $env:COMPUTERNAME - Http Namespace '$HttpNamespace' already contains a rule for '$DomainAccount'. Skipping creation."
            return
        } else {
            $create = $true
        }
    }

    if($create) {
        # the standard pattern to use is http://+:port/servicename.
        #   eg. http://contoso01:15110/EmployeeService would become http://+:15110/EmployeeService
        $plusNamespace = $HttpNamespace
        $namespaceRegex = [regex] "http.*://(.*):.*/.*"
        if($HttpNamespace -match $namespaceRegex) {
            $hostname = $Matches[1]
            $plusNamespace = $HttpNamespace.Replace($hostname, "+")
        } else {
            throw "NET $env:COMPUTERNAME - Http Namespace '$HttpNamespace' could not be parsed into plus format before being added. Plus format " + `
                "looks like http://+:port/servicename. For example, http://contoso01:15110/EmployeeService would be formatted into " + `
                "http://+:15110/EmployeeService."
        }

        # ensure the full domain account name is used
        $fullDomainAccount = Get-FullDomainAccount $DomainAccount

        # create the permission
        Write-Warning "NET $env:COMPUTERNAME - Adding Http Namespace '$Httpnamespace' for account '$fullDomainAccount'"
        $results = . netsh http add urlacl url=$plusNamespace user=$fullDomainAccount listen=yes delegate=yes
        Write-Host "NET $env:COMPUTERNAME - Added Http Namespace '$Httpnamespace' for account '$fullDomainAccount'"
    }

    $namespace = Get-HttpNamespace $HttpNamespace
    return $namespace
}

Quick Redis with PowerShell

on Friday, October 24, 2014

ASP.NET has some great documentation on How To Setup a SignalR Backplane using Redis, but it uses a Linux-based server as the host. The open source port of Redis maintained by MSOpenTech makes for an incredibly easy-to-install Redis server in a Windows Server environment (using Chocolatey ... KickStarter). This is a quick PowerShell script to install Redis as a Windows service, add the firewall rule, and start the service.

# PreRequisites
#    The assumption is that the server can use chocolatey to install

# use chocolatey to install precompiled redis application
#    this will install under the binaries under $env:ChocolateyInstall\lib\redis-X.X.X\
cinst redis-64

# install redis as a service
#    redis will make the service run under Network Service credentials and setup all appropriate permissions on disk
redis-server --service-install

# open firewall ports
. netsh advfirewall firewall add rule name=SignalR-Redis dir=in protocol=tcp action=allow localport=6379 profile=DOMAIN

# start the service
redis-server --service-start

Note: When installing the service there is an error message, "# SetNamedSecurityInfo Error 5". But it doesn't seem to affect anything; everything runs without a problem.

PowerShell VirtualDirectory Wrappers

on Friday, October 17, 2014

The WebAdministration Module ships with a couple functions for working with virtual directories: New-WebVirtualDirectory, Get-WebVirtualDirectory, and Remove-WebVirtualDirectory.

But, there are some glitches with them.

A way to get around these problems is to write wrapper functions around them. Fixing up the PhysicalPath handling on New-WebVirtualDirectory is pretty easy to do, but the other one …

To prevent the confirmation prompt from popping up, the wrapper function can create an empty temporary directory, repoint the virtual directory to it, remove the virtual directory, and then remove the temporary directory.

The code below also wraps Get-WebVirtualDirectory, because I wanted an API that takes a Url as input and figures out how to use it.

Here’s a full list of the wrappers and helper functions:

<#
.SYNOPSIS
 Takes a url and breaks it into these parts: Ssl, SiteName, AppName, AppNames.
 
.DESCRIPTION 
 Takes a url and breaks it into these parts for a [PSObject]:

 Ssl   - true/false - does the url request ssl
 SiteName - string - the dns host name
 AppName  - string - the AppNames as a single string. It starts with '/'.
 AppNames - Array<string> - each folder name within the local path

.PARAMETER Url
 The url to convert into its UriPaths

.EXAMPLE
 $uriPaths = ConvertTo-WebUriPaths "https://www.contoso.com/services"
#>
Function ConvertTo-WebUriPaths {
Param (
 [Parameter(Mandatory = $true)]
 [string] $Url
)

 $uri = New-Object System.Uri $Url;

 $paths = New-PsType "UriPaths"

 Add-PsTypeField $paths "Ssl" ($uri.Scheme -eq "https")
 Add-PsTypeField $paths "SiteName" $uri.Host
 Add-PsTypeField $paths "AppNames" $uri.LocalPath.Split("/", [StringSplitOptions]::RemoveEmptyEntries)
 Add-PsTypeField $paths "AppName" $uri.LocalPath

 # remove trailing slash, if exists
 if($paths.AppName) {
  $appNameLen = $paths.AppName.Length;
  if($paths.AppName[$appNameLen - 1] -eq "/") {
   $paths.AppName = $paths.AppName.Substring(0, $appNameLen - 1);
  }
 }

 return $paths;
}
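The helper leans on New-PsType and Add-PsTypeField, which are defined elsewhere; the core of the parsing can be sketched with [System.Uri] directly. A minimal, self-contained sketch (the sample URL is made up):

```powershell
# Parse a url into the same parts ConvertTo-WebUriPaths produces,
# using [System.Uri] directly instead of the New-PsType helpers.
$uri = [System.Uri] "https://www.contoso.com/services/"

$ssl      = $uri.Scheme -eq "https"                                             # $true
$siteName = $uri.Host                                                           # www.contoso.com
$appNames = $uri.LocalPath.Split("/", [StringSplitOptions]::RemoveEmptyEntries) # one entry: services
$appName  = $uri.LocalPath.TrimEnd("/")                                         # /services
```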



<#
.SYNOPSIS
 Takes a UriPaths object (from ConvertTo-WebUriPaths) and turns it into
 an IIS:\Sites\XXXX string value.

.PARAMETER UriPaths
 The UriPaths object from ConvertTo-WebUriPaths

.EXAMPLE
    $uriPaths = ConvertTo-WebUriPaths "https://www.contoso.com/services"
 $iisPath = ConvertTo-WebIISPath $uriPaths

.LINK
 ConvertTo-WebUriPaths
#>
Function ConvertTo-WebIISPath {
Param (
 [Parameter(Mandatory = $true)]
 [PSObject] $UriPaths
)

 $iisPath = "IIS:\Sites\" + $UriPaths.SiteName;
 $UriPaths.AppNames |% { $iisPath += "\" + $_ }; # alternatively, AppName could also be used

 return $iisPath;
}



<#
.SYNOPSIS
    Uses the given url to search the current server for the longest parent website/webapp path that matches the url.
    It will only return the parent website/app information. If the root website is given, then $null will be returned.
    The webapp's information from the IIS:\Sites provider is returned.

.PARAMETER Url
    The url to search on
    
.EXAMPLE
    $webApp = Get-WebParentAppByUrl "http://www.contoso.com/services" 
#>
Function Get-WebParentAppByUrl {
Param (
    [Parameter(Mandatory = $true)]
    [string] $Url
)

    $uriPaths = ConvertTo-WebUriPaths -Url $Url

    $currentPath = "IIS:\Sites\{0}" -f $uriPaths.SiteName
    if((Test-Path $currentPath) -eq $false) { return $null }
    if($uriPaths.AppName -eq "" -or $uriPaths.AppName -eq "/") { return $null}

    $webApp = Get-Item $currentPath
    if($uriPaths.AppNames -is [Array]) {
        for($i = 0; $i -lt $uriPaths.AppNames.Count - 1; $i++) {
            $currentPath += "\{0}" -f $uriPaths.AppNames[$i]
            if(Test-Path $currentPath) { $webApp = Get-Item $currentPath }
        }
    }

    return $webApp
}



<#
.SYNOPSIS
 Gets virtual directory information from a site/app. If the given Url is a site/app, this will
    search for all virtual directories at the same level as the given path. If it is not a site/app,
    this will search for a virtual directory under the parent site/app.

.PARAMETER Url
 The url to search at.

.EXAMPLE
 $vdirs = Get-WebVirtualDirectoryWrapper -Url "http://www.contoso.com"
#>
Function Get-WebVirtualDirectoryWrapper {
[OutputType([Microsoft.IIs.PowerShell.Framework.ConfigurationElement])]
Param (
    [Parameter(Mandatory = $true)]
    [string] $Url
)

    $uriPaths = ConvertTo-WebUriPaths -Url $Url

    # check if the url is a site/app
    $iisPath = ConvertTo-WebIISPath -UriPaths $uriPaths

    if(-not (Test-Path $iisPath)) {
        Write-Warning "IIS $env:COMPUTERNAME - No path could be found for '$Url'. No virtual directories could be looked up."
        return $null
    }

    $node = Get-Item $iisPath
    if(@("Application", "Site") -contains $node.ElementTagName) {
        # search for virtual directories below this level

        $vdirs = Get-WebVirtualDirectory -Site $uriPaths.SiteName -Application $uriPaths.AppName

    } else {
        # search the parent app for the given virtual directory name

        $parentApp = Get-WebParentAppByUrl $Url
        $vdir = $uriPaths.AppName.Substring($parentApp.path.Length)
        $appPath = $parentApp.path
        if(-not $appPath) { $appPath = "/" }

        $vdirs = Get-WebVirtualDirectory -Site $uripaths.SiteName -Application $appPath -Name $vdir
    }
        
    return $vdirs
}


<#
.SYNOPSIS
 Set a virtual directory for a site/app. This will set the physical path for the given Url.

.PARAMETER Url
 The url to turn into a virtual directory.

.PARAMETER PhysicalPath
    The physical path on the server to attach to the virtual path.

.PARAMETER Force
    Overwrites the current physical path if already set.

.EXAMPLE
    # Create a new virtual directory

 $vdir = New-WebVirtualDirectoryWrapper `
                    -Url "http://admissions.{env}.sa.ucsb.edu" `
                    -PhysicalPath "D:\AllContent\Data\admissions.{env}.sa.ucsb.edu"
                    -ServerName "SA89"  
#>
Function New-WebVirtualDirectoryWrapper {
[OutputType([Microsoft.IIs.PowerShell.Framework.ConfigurationElement])]
Param (
    [Parameter(Mandatory = $true)]
    [string] $Url,
    [Parameter(Mandatory = $true)]
    [string] $PhysicalPath,
    [switch] $Force
)

    $uriPaths = ConvertTo-WebUriPaths -Url $Url

    # parse the name of the virtual directory from the given url
    if($uriPaths.AppName -eq "" -or $uriPaths.AppName -eq "/") {
        throw "IIS $env:COMPUTERNAME - No virtual path could be found in url '$Url'. A subpath needs to be defined within the url."
    }

    $parentApp = Get-WebParentAppByUrl -Url $Url

    if($parentApp -eq $null) {
        throw "IIS $env:COMPUTERNAME - No parent application could be found for url '$Url'. No virtual directory could be added."
    }

    $appPath = $parentApp.path
    if(-not $appPath) { $appPath = "/" }

    $vdirPath = $uriPaths.AppName.Substring($appPath.Length)

    # if the vdirPath is multiple levels deep, check that the root path exists
    if($vdirPath.Split("/", [StringSplitOptions]::RemoveEmptyEntries).Count -gt 1) {
        $i = $vdirPath.LastIndexOf("/")
        $rootSubLevel = $vdirPath.Substring(0,$i).Replace("/","\")
        $iisPath = "IIS:\Sites\{0}\{1}" -f $uriPaths.SiteName, $rootSubLevel
        if((Test-Path $iisPath) -eq $false) {
            throw "IIS $env:COMPUTERNAME - Part of the sub path for '$Url' could not be found. Please ensure the full base path exists in IIS."
        }
    }

    Write-Warning "IIS $env:COMPUTERNAME - Creating a virtual directory for $Url to $PhysicalPath."
    if($Force) {
        if($appPath -eq "/") { # it adds an extra / if you set the applicationName to '/'
            $vdir = New-WebVirtualDirectory -Site $uriPaths.SiteName -Name $vdirPath -PhysicalPath $PhysicalPath -Force
        } else {
            $vdir = New-WebVirtualDirectory -Site $uriPaths.SiteName -Application $appPath -Name $vdirPath -PhysicalPath $PhysicalPath -Force
        }
    } else {
        if($appPath -eq "/") { # it adds an extra / if you set the applicationName to '/'
            $vdir = New-WebVirtualDirectory -Site $uriPaths.SiteName -Name $vdirPath -PhysicalPath $PhysicalPath
        } else {
            $vdir = New-WebVirtualDirectory -Site $uriPaths.SiteName -Application $appPath -Name $vdirPath -PhysicalPath $PhysicalPath
        }
    }
    Write-Host "IIS $env:COMPUTERNAME - Created a virtual directory for $Url to $PhysicalPath."

    return $vdir
}


<#
.SYNOPSIS
 Removes a virtual directory from a site/app. It will only remove the virtual directory if the Url
    given matches up with a virtual directory.

.PARAMETER Url
 The url to search at.

.EXAMPLE
    Remove-WebVirtualDirectoryWrapper -Url "http://www.contoso.com/services"
#>
Function Remove-WebVirtualDirectoryWrapper {
Param (
    [Parameter(Mandatory = $true)]
    [string] $Url
)

    $uriPaths = ConvertTo-WebUriPaths -Url $Url

    # parse the name of the virtual directory from the given url
    if($uriPaths.AppName -eq "" -or $uriPaths.AppName -eq "/") {
        throw "IIS $env:COMPUTERNAME - No virtual path could be found in url '$Url'. A subpath needs to be defined within the url."
    }

    $parentApp = Get-WebParentAppByUrl -Url $Url

    if($parentApp -eq $null) {
        throw "IIS $env:COMPUTERNAME - No parent application could be found for url '$Url'. No virtual directory could be removed."
    }

    # ensure the path is a virtual directory
    $iisPath = ConvertTo-WebIISPath -UriPaths $uriPaths
    if(-not (Test-Path $iisPath)) {
        throw "IIS $env:COMPUTERNAME - No path for $Url could be found in IIS."
    }

    $node = Get-Item $iisPath
    if($node.ElementTagName -ne "VirtualDirectory") {
        switch($node.GetType().FullName) {
            "System.IO.FileInfo" { $type = "File" }
            "System.IO.DirectoryInfo" { $type = "Directory" }
            "Microsoft.IIs.PowerShell.Framework.ConfigurationElement" { $type = $node.ElementTagName }
        }
        throw "IIS $env:COMPUTERNAME - The url '$Url' doesn't match with a Virtual Directory. It is a $type."
    }

    $vdirPath = $uriPaths.AppName.Substring($parentApp.path.Length)

    # check if the virtual path has files or folders beneath it. An error will occur if there are.
    $iisVPath = ConvertTo-WebIISPath -UriPaths $uriPaths

    $childItems = Get-ChildItem $iisVPath
    if($childItems) {
        Write-Warning ("IIS $env:COMPUTERNAME - The virtual path at '$Url' has items beneath it. Due to a bug in" + `
            " WebAdministration\Remove-WebVirtualDirectory this would force a windows pop-up dialog to get approval." + `
            " To get around this, a temporary folder will be created and the current virtual directory will" + `
            " be repointed to the new (empty) location before removal. After removal of the virtual directory" + `
            " the temporary folder will also be removed. The domain account this process runs under will need" + `
            " permissions to the temporary folder location to create and remove it.")


        $guid = [Guid]::NewGuid()
        $PhysicalPath = (Get-WebVirtualDirectoryWrapper -Url $Url).PhysicalPath
        $tempPath = Join-Path $PhysicalPath $guid

        Write-Warning "IIS $env:COMPUTERNAME - Creating temp directory '$tempPath' in order to remove a virtual directory."
        $tempDir = New-Item $tempPath -ItemType Directory
        Write-Host "IIS $env:COMPUTERNAME - Created temp directory '$tempDir' in order to remove a virtual directory."
        $void = New-WebVirtualDirectoryWrapper -Url $Url -PhysicalPath $tempPath -Force
    }

    $appPath = $parentApp.path
    if(-not $appPath) { $appPath = "/" }

    Write-Warning "IIS $env:COMPUTERNAME - Removing a virtual directory '$vdirPath' for '$Url'."
    Remove-WebVirtualDirectory -Site $uriPaths.SiteName -Application $appPath -Name $vdirPath
    Write-Host "IIS $env:COMPUTERNAME - Removed a virtual directory '$vdirPath' for '$Url'."

    if($tempDir) {
        Write-Warning "IIS $env:COMPUTERNAME - Removing temp directory '$tempDir' in order to remove a virtual directory."
        $void = Remove-Item $tempDir
        Write-Host "IIS $env:COMPUTERNAME - Removed temp directory '$tempDir' in order to remove a virtual directory."
    }
}

Customized Internal NuGet Gallery

on Friday, August 15, 2014

NuGet’s great and there are plenty of resources to help get your team setup with private feeds (MyGet, Inedo's ProGet, JFrog's Artifactory, Sonatype's Nexus), but sometimes there are needs to host your own feed internally.

It’s not too hard to do, but there are a few hoops that you need to jump through in order to get it all setup:

  1. NuGet already provides a great guide for downloading the Gallery code and getting it running on your local machine.
  2. They also have a guide for altering the Gallery code (LocalGuide) to prepare it to run on a local IIS instance.
  3. But, there are a few details that you might want to change to customize the Gallery for your organization/needs:
    1. At the end of the LocalGuide it mentions “you can register a user and make it an Admin by adding a record to the UserRoles table”. Here’s the script:
      select * from [dbo].[Users] -- find your id
      insert into [dbo].[Roles] (name) values ('Admins')
      insert into [dbo].[UserRoles] (UserKey, RoleKey) values (<your id>, 1)
    2. Remove Alert.md – This feeds the yellow bar that appears at the top of the screen and states “This is a development environment. No data will be preserved.”
      1. It’s under FrontEnd/NuGetGallery/App_Data/Files/Content/Alert.md
      2. I think it’s a good idea to remember that file. It’s a really nice implementation to be able to set an alert without disrupting the service.
    3. Update Web.config – These will kinda be obvious
      1. Gallery.Environment should be empty
      2. Gallery.SiteRoot
      3. Gallery.SmtpUri
      4. Gallery.Brand
      5. Gallery.GalleryOwner
      6. Remove <rewrite> rules (from LocalGuide)
    4. Update the Title Icon/Name – This is defined by CSS
      1. FrontEnd/NuGetGallery/Content – Both Layout.css and Site.css (it’s just a good idea to keep them in sync)
      2. If you have the time to make a new image, that would be best.
      3. If you don’t have time, then
        1. comment out
          1. background
          2. text-indent
        2. add
          1. font-weight: bold
          2. color: white
          3. font-size: 1.2 em
          4. text-decoration: none
        3. The Web.Config setting of Gallery.Brand text will be displayed
    5. Add Gallery URL
      1. FrontEnd/NuGetGallery/Views/Pages/Home.cshtml
      2. Add some text before @ViewBag.Content like: Visual Studio URL: http://nuget.xyz.com/api/v2/
    6. Have Lucene Search Index update on each package upload
      1. FrontEnd/NuGetGallery/Controllers/ApiController – PublishPackage function – By default the line IndexingService.UpdatePackage(package) is supposed to update the search index. But sometimes it doesn’t.
      2. Replace that line with: IndexingService.UpdateIndex(forceRefresh: true)
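The Gallery.* settings in step 3 live in the Web.config appSettings block; a hedged sketch of what the customized values might look like (every value here is a placeholder for your organization's own):

```xml
<appSettings>
  <add key="Gallery.Environment" value="" />
  <add key="Gallery.SiteRoot" value="http://nuget.xyz.com/" />
  <add key="Gallery.SmtpUri" value="smtp://mail.xyz.com" />
  <add key="Gallery.Brand" value="XYZ NuGet Gallery" />
  <add key="Gallery.GalleryOwner" value="XYZ Gallery &lt;nuget@xyz.com&gt;" />
</appSettings>
```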

I’m sure the first thing you’ll want to do once you have the website up and running is play around with some test packages. Here is a script to help cleanup the database once you’re done testing. (Also, delete any .nupkg files under <website>/App_Data/Files/packages/)

declare @trunc bit = 0
if(@trunc = 1) begin
 truncate table [dbo].[GallerySettings]
 truncate table [dbo].[PackageAuthors]
 truncate table [dbo].[PackageDependencies]
 truncate table [dbo].[PackageRegistrationOwners]
 delete from [dbo].[PackageStatistics] where [key] = 1
 delete from dbo.Packages where [key] = 1
 delete from [dbo].[PackageRegistrations] where [key] = 2
 /*delete from [dbo].[UserRoles] where [Userkey] = 1
 delete from [dbo].[Users] where [key] = 1*/
end

/****** Script for SelectTopNRows command from SSMS  ******/
select * from [dbo].[GallerySettings]
select * from [dbo].[PackageAuthors]
select * from [dbo].[PackageDependencies]
select * from [dbo].[PackageRegistrationOwners]
select * from [dbo].[PackageRegistrations]
select * from dbo.Packages
select * from [dbo].[PackageStatistics]
/*select * from [dbo].[Roles]
select * from [dbo].[UserRoles]
select * from [dbo].[Users]*/

Remote profile.ps1

on Friday, August 8, 2014

There have been a lot of articles on how profile.ps1 is used.

They seem to be incorrect; or at least the system has changed under their feet. You can check how closely your system conforms to the documented standard by creating 6 (six!) different profile.ps1 files, each with the statement “Write-Host ‘ran xyz profile.ps1’”.
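The four paths for the current host hang off the $PROFILE automatic variable (the console and the ISE share the two AllHosts files, which is how you get to six distinct files):

```powershell
# $PROFILE is a string (the CurrentUserCurrentHost path), but it also
# carries the other three profile paths as note properties
$PROFILE | Format-List -Force

# or grab them individually
$PROFILE.AllUsersAllHosts
$PROFILE.AllUsersCurrentHost
$PROFILE.CurrentUserAllHosts
$PROFILE.CurrentUserCurrentHost
```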

The fun part is that none of them will run when connecting from a remote session. To do that you Have To Use a Session Profile. Which is kind of weird. But, it kinda fits in with the whole DSC approach: you configure a server once, when it’s created, and you never touch it again.

I’m not sure I agree with that approach.
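For reference, a session profile is a session configuration registered on the remote machine with a startup script. A minimal sketch (the configuration name and script path are made-up examples; Register-PSSessionConfiguration requires elevation):

```powershell
# On the remote server: every connection to the 'WithProfile'
# configuration runs the startup script before your commands.
Register-PSSessionConfiguration -Name WithProfile `
    -StartupScript "C:\Scripts\remote-profile.ps1"

# Clients then opt in to that configuration explicitly:
Invoke-Command -ComputerName Remote01 -ConfigurationName WithProfile `
    -ScriptBlock { "the startup script already ran" }
```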

Running Local On Remote

on Friday, August 1, 2014

A lot of PowerShell functions/Cmdlets are written in a way that they can only be run on localhost. But, sometimes you need to run them remotely.

PSSession will let you run a command on a remote host (One Hop). If you need to connect to more hosts than that, you’ll need to set up CredSSP in your environment.
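Setting up CredSSP itself is a pair of one-liners run once per machine (the delegation pattern below is a made-up example; CredSSP has real security implications, so read up before enabling it):

```powershell
# On the machine you launch from: allow delegating fresh credentials
Enable-WSManCredSSP -Role Client -DelegateComputer "*.contoso.com" -Force

# On each server that needs to make the second hop
Enable-WSManCredSSP -Role Server -Force

# Then connect with explicit credentials
$cred = Get-Credential
Invoke-Command -ComputerName Remote01 -Authentication CredSSP -Credential $cred -ScriptBlock { hostname }
```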

One Hop Scripts

This function is a template for running a local command on a remote host:

Function Verb-Noun {
[CmdletBinding()]
[OutputType(If you can set this, that's awesome)]
Param (
    [Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true, ParameterSetName = "A")]
    [PSObject] $A,
    [Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true, ParameterSetName = "A")]
    [string] $AZ,
    [Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true, ParameterSetName = "B")]
    [string] $B,
    [Parameter(Mandatory = $false, ValueFromPipelineByPropertyName = $true)]
    [string] $ServerName = $env:COMPUTERNAME,
    [Parameter(Mandatory = $false, ValueFromPipelineByPropertyName = $true)]
    [System.Management.Automation.Runspaces.PSSession] $Session = $null
)

    $scriptBlock = {
        Import-Module WebAdministration
        Import-Module ABC

        if($args) {
            Merge-AllParams -Arguments $args[0];
        }

        ... code goes here ...

        return $XYZ
    }

    # handle calling with sessions
    $sessInfo = Test-CreateNewSession -Session $Session -ServerName $ServerName
    $Session = $sessInfo.Session

    try {
        if($session -and -not (Test-IsLocalSession $session)) {
            # copy all variables to pass across with Invoke-Command
            $allParams = Get-AllParams -Command $MyInvocation.MyCommand -Locals (Get-Variable -Scope Local)

            # if a session is available, run it in the session; unless it's the local system
            $XYZ = Invoke-Command -Session $Session -ArgumentList $allParams -ScriptBlock $scriptblock;
        } else {
            # if it's a local session or if no session is available, then run the script block inline
            $XYZ = (. $scriptblock);
        }
    } finally {
        if($sessInfo.CreatedSession) { Remove-PSSession $Session }
    }

    return $XYZ
}

The function relies on the helpers Get-AllParams, Merge-AllParams, and Test-CreateNewSession (plus Test-IsLocalSession and Test-IsLocalComputerName, which aren't shown here).

<#
.SYNOPSIS
 Will retrieve all arguments passed into a function. This can help ease passing those values
 to an Invoke-Command cmdlet.

.EXAMPLE
 Get-AllParams -Command $MyInvocation.MyCommand -Locals (Get-Variable -Scope Local);
#>
Function Get-AllParams {
[CmdletBinding()]
Param(
 [Parameter(Mandatory = $true)]
 [System.Management.Automation.FunctionInfo]$Command,
 [Parameter(Mandatory = $true)]
 [Array]$Locals
)

 $allParams = @{};
 $Command.Parameters.Keys | foreach {
   $i = $_;
   $allParams[$i] = ($Locals |? { $_.Name -eq $i; }).Value;
  }
 return $allParams;
}

<#
.SYNOPSIS
 Will load all parameters passed in into the global scope. This can be used in conjunction with
 Get-AllParams to pass variables into an Invoke-Command block.

.EXAMPLE
 Merge-AllParams -Arguments $args[0];
#>
Function Merge-AllParams {
[CmdletBinding()]
Param (
 [Hashtable]$Arguments
)

 $Arguments.GetEnumerator() |% { Set-Variable -Name $_.key -Value $_.value -Scope Global; }
}
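To see the mechanics of Merge-AllParams in isolation, a small standalone sketch (the variable names are made up):

```powershell
# A hashtable of arguments, like the one Get-AllParams builds
$arguments = @{ SiteName = "unittest"; Port = 8080 }

# The core of Merge-AllParams: each key becomes a variable in the global scope
$arguments.GetEnumerator() | ForEach-Object {
    Set-Variable -Name $_.Key -Value $_.Value -Scope Global
}

$SiteName   # "unittest"
$Port       # 8080
```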

<#
.SYNOPSIS
    Sets up a Session object if needed. It also returns a flag if a session object was created.

.DESCRIPTION
    Sets up a Session object if needed. It also returns a flag if a session object was created.

    When functions sometimes need to run remotely (through a Session) and sometimes locally, the
    code can be written to use a script block, with logic added to call the code with a Session.
    That logic can become redundant when determining if and how to call the Session. This helper
    function helps with the process.

.PARAMETER Session
    The current Session variable passed into the calling function

.PARAMETER ServerName
    The current ServerName variable available in the calling function

.EXAMPLE
    $sessInfo = Test-CreateNewSession -Session $Session -ServerName $ServerName
    $Session = $sessInfo.Session

    try {
        ... determine if the session needs to be called or a local execution should be used ...
    } finally {
        if($sessInfo.CreatedSession) { Remove-PSSession $sessInfo.Session }
    }
#>
Function Test-CreateNewSession {
[CmdletBinding()]
Param (
    [System.Management.Automation.Runspaces.PSSession] $Session = $null,   
    [string] $ServerName = ""
)

    $createdSession = $false
    if($Session -eq $null -and $ServerName -ne "") {
        if(-not (Test-IsLocalComputerName $ServerName)) {
            $Session = New-PSSession $ServerName
            $createdSession = $true
        }
    }

    $sessInfo = New-PsType "CoreUcsb.PSSessionCreate" @{
                    Session = $Session
                    CreatedSession = $createdSession
                }

    return $sessInfo
}
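Test-IsLocalComputerName (and its session counterpart) aren't shown in the post; a plausible sketch of the name check might look like this (the exact logic is an assumption):

```powershell
Function Test-IsLocalComputerName {
    Param([string]$ComputerName)
    # treat localhost aliases and this machine's own name (with or without
    # a domain suffix) as local
    $localNames = @("localhost", ".", $env:COMPUTERNAME)
    return $localNames -contains $ComputerName.Split('.')[0]
}

Test-IsLocalComputerName "localhost"    # $true
Test-IsLocalComputerName "App1.your.domain.here"
```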

PowerShellGet Install and Import Module

on Friday, July 25, 2014

WMF v5.0 Preview’s PowerShellGet module is pretty nice, but it is lacking some functionality. Today I went through and added two new features: Install-Module now also imports the module, and NuGet.exe is now located with the module.

Install-Module Also Imports the Module

… even when it doesn’t install the module, it still imports the module. This allows for all Import-Module statements to be replaced with Install-Module statements.

PSGet had this feature, and that made it just a little more user friendly. But, I also think I understand why Microsoft didn’t implement this feature. It seems to complicate the lifecycle of when to Update the modules. How do you answer these questions:

  • If the current version on the gallery is newer than the installed version, should the installed version be updated?
  • If Install-Module replaces the usage of Import-Module how does the management of which versions are on which servers play out? Does it break Server Management consistency?

I chose to ignore those concerns, and I’ll circle back to them at a later date. For now, it can replace the Import-Module statement.
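The behavior can be approximated with a thin wrapper (a sketch, not the actual PowerShellGet change; the function name is made up):

```powershell
Function Install-AndImportModule {
    Param([Parameter(Mandatory = $true)][string]$Name)

    # only hit the gallery if the module isn't already on disk
    if (-not (Get-Module -ListAvailable -Name $Name)) {
        Install-Module -Name $Name
    }

    # always import, so this can replace Import-Module statements
    Import-Module -Name $Name
}
```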

NuGet.exe is now located with the Module

There were actually 3 updates with this one:

  • NuGet.exe is no longer downloaded to %UserAppData%\Local\Microsoft\Windows\PowerShell\PowerShellGet\NuGet.exe. It’s now downloaded to %ProgramFiles%\WindowsPowerShell\Modules\PowerShellGet. The same location as the rest of the module.
  • A default NuGet.config is installed to the same location if it doesn’t exist.
  • The prompt which asks if you want to download NuGet.exe has been removed.

If NuGet.exe is downloaded into a user specific folder, then it has to download it for every user which runs Install-Module. Since PowerShell scripts can be run by both privileged users and by service accounts on a server, this made for multiple copies.

And, a default NuGet.config file lowers the cost on new users to find the config file and update it.

 

Since I’m starting to work on these enhancements, I’ll try to keep this community feed updated with stable builds on a weekly basis. (It doesn’t have the latest at the time of this posting; my apologies.)

Custom IIS log files with PowerShell

on Friday, June 27, 2014

Depending on your infrastructure you may have a need to place IIS logs onto a separate disk. A disk which can fill up without taking down the server. The easiest solution to this is to set the Default Site Settings with log file locations other than the C: drive. But, then you still run into the problem of each log file being written under a folder with a name like W3SVC12.

The name W3SVC12 corresponds with the website which has SiteID 12. Unfortunately, you can only find out that information if you have access to IIS manager. And, most developers don’t have access to IIS manager on the production servers. So, it would be nice to give the log files a location with a more friendly name.
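If you do have IIS manager access (or remote PowerShell), mapping the W3SVC folders back to site names is a one-liner; a sketch:

```powershell
# List each site's log folder name next to its friendly name
Import-Module WebAdministration
Get-Website | ForEach-Object { "W3SVC$($_.Id) -> $($_.Name)" }
```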

I’m sure there’s an appcmd command which can set up both IIS log files and Failed Request Tracing log files for an individual website. But, in this post, I’ll show the few commands needed to set up those locations by directly editing the applicationHost.config file.

When an individual website is setup with custom IIS log file and Failed Request Tracing log file locations, the applicationHost.config file will look like this:

<site name="unittest.dev.yoursite.com" id="15" serverAutoStart="true">
 <application path="/">
  <virtualDirectory path="/" physicalPath="D:\AllContent\Websites\unittest.dev.yoursite.com\" />
 </application>
 <application path="/normal/childapp">
  <virtualDirectory path="/" />
 </application>
 <bindings>
  <binding protocol="http" bindingInformation="*:80:" />
 </bindings>
 <traceFailedRequestsLogging enabled="false" directory="D:\AllContent\logs\unittest.dev.yoursite.com\FailedReqLogFiles" />
 <logFile directory="D:\AllContent\logs\unittest.dev.yoursite.com\LogFiles" />
</site>

The two commands below create and remove those xml elements. The script will also use the name of the website when creating the log file location path.

<#
.SYNOPSIS
 Adds a specialized log folder and FRT folder. See ConvertTo-WebUriPaths to create a $UriPaths Hashtable.

.EXAMPLE
 New-WebAppLogFile -UriPaths $paths

#>
Function New-WebAppLogFile {
Param (
 [Parameter(Mandatory = $true)]
 [Hashtable]$UriPaths,
 [string]$PhysicalPath = "",
 [string]$ServerName = $env:COMPUTERNAME
)
Process {
 # if the web application can't be found, then skip
 $configPath = Get-WebConfigPath $ServerName;
 $appHost = [System.Xml.XmlDocument](Read-WebConfig -ConfigPath $configPath);
 $sites = $appHost.configuration.'system.applicationHost'.sites;
 $site = [System.Xml.XmlElement]($sites.site |? { $_.name -eq $UriPaths.SiteName });
 if($site -eq $null) {
  Write-Warning "IIS $ServerName - Web site $($UriPaths.SiteName) couldn't be found. The log and FRT paths will be skipped.";
  return;
 }

 # get the physical path
 $rootLogsPath = $PhysicalPath;
 if($rootLogsPath -eq "") {
  $rootLogsPath = Join-Path $global:WebAdministrationUcsb.DefaultLogPath $UriPaths.SiteName;
 }
 $frtPath = Join-Path $rootLogsPath "FailedReqLogFiles";
 $logPath = Join-Path $rootLogsPath "LogFiles";

 # add the FRT location
 $frt = [System.Xml.XmlElement]($appHost.CreateElement("traceFailedRequestsLogging"));
 $frt.SetAttribute("enabled", "false");
 $frt.SetAttribute("directory", $frtPath);
 $frt = $site.AppendChild($frt);
 
 Write-Warning "IIS $ServerName - Adding custom FRT path for $($UriPaths.SiteName) to $frtPath.";
 Save-WebConfig -WebConfig $appHost -ConfigPath $configPath
 Write-Host "IIS $ServerName - Added custom FRT path for $($UriPaths.SiteName) to $frtPath.";

 # add the log location
 $log = [System.Xml.XmlElement]($appHost.CreateElement("logFile"));
 $log.SetAttribute("directory", $logPath);
 $log = $site.AppendChild($log);
 
 Write-Warning "IIS $ServerName - Adding custom log file path for $($UriPaths.SiteName) to $logPath.";
 Save-WebConfig -WebConfig $appHost -ConfigPath $configPath
 Write-Host "IIS $ServerName - Added custom log file path for $($UriPaths.SiteName) to $logPath.";
}
}


<#
.SYNOPSIS
 Remove a specialized log folder and FRT folder. See ConvertTo-WebUriPaths to create a $UriPaths Hashtable.

.EXAMPLE
 Remove-WebAppLogFile -UriPaths $paths
#>
Function Remove-WebAppLogFile {
Param (
 [Parameter(Mandatory = $true)]
 [Hashtable]$UriPaths,
 [string]$ServerName = $env:COMPUTERNAME
)
Process {
 # if the web application can't be found, then skip
 $configPath = Get-WebConfigPath $ServerName;
 $appHost = Read-WebConfig -ConfigPath $configPath;
 $sites = $appHost.configuration.'system.applicationHost'.sites;
 $site = $sites.site |? { $_.name -eq $UriPaths.SiteName };
 if($site -eq $null) {
  Write-Warning "IIS $ServerName - Web site $($UriPaths.SiteName) couldn't be found. The log and FRT path removal will be skipped.";
  return;
 }

 # remove the FRT location
 $frt = $site.traceFailedRequestsLogging
 if($frt -eq $null) {
  Write-Warning "IIS $ServerName - Web site $($UriPaths.SiteName) doesn't have a custom FRT path. Skipping its removal.";
 } else {
  $frt = $site.RemoveChild($frt)

  Write-Warning "IIS $ServerName - Removing custom FRT path from $($UriPaths.SiteName).";
  Save-WebConfig -WebConfig $appHost -ConfigPath $configPath
  Write-Host "IIS $ServerName - Removed custom FRT path from $($UriPaths.SiteName).";
 }

 # remove the log location
 $log = $site.logFile
 if($log -eq $null) {
  Write-Warning "IIS $ServerName - Web site $($UriPaths.SiteName) doesn't have a custom log file path. Skipping its removal.";
 } else {
  $log = $site.RemoveChild($log)

  Write-Warning "IIS $ServerName - Removing custom log file path from $($UriPaths.SiteName).";
  Save-WebConfig -WebConfig $appHost -ConfigPath $configPath
  Write-Host "IIS $ServerName - Removed custom log file path from $($UriPaths.SiteName).";
 }
}
}

These commands rely on the Read-WebConfig and Save-WebConfig from an earlier post.

Enable an Application Server on all Web Farms

on Friday, June 20, 2014

Last time, I looked at Enabling/Disabling an Application Server within a single Web Farm. I’ll try to continue on that same thread and update the script to Enable or Disable an Application Server on all Web Farms on a Proxy Server.

The core of the work is done by searching for all web farms which use the server, Get-WebFarmsByAppServer. After that, it’s just a matter of calling a mass update function, Set-WebFarmsAppServerEnabled.

<#
.SYNOPSIS
 Retrieves a list of all Web Farms which contain the given list of App Servers on the given Proxy Server.

 This is used to retrieve the list of Web Farms which will need to be updated in order to remove a single
 server from all Web Farms at once.

 The resulting list will be of type [System.Collections.Generic.List[PSObject]]. The 
 inner PSObject's will have these properties:

 WebFarmName The name of the web farm which has the given App Server in its list
 AppServerName The search will look for both shorthand names (App1) and FQDN's (App1.your.domain.here), this
   will have the value which was matched
 Enabled  Is the server currently enabled.

.PARAMETER ServerName
 The name of the proxy server to update. If this parameter is not supplied, the local computer's config
 file will be updated.

.PARAMETER AppServerNames
 The name of the App Servers to search for.

.EXAMPLE
 Get-WebFarmsByAppServer -ServerName "Proxy1" -AppServerNames "App1"
#>
Function Get-WebFarmsByAppServer {
Param (
 [string] $ServerName = $env:COMPUTERNAME,
 [Parameter(Mandatory = $true)]
 [System.Array] $AppServerNames
)
 $configPath = Get-WebConfigPath $ServerName
 $appHost = [System.Xml.XmlDocument](Read-WebConfig -ConfigPath $configPath)

 $farms = $appHost.configuration.webFarms.webfarm;

 # if there are no web farms defined, write a warning and return an empty array
 if($farms -eq $null) {
  Write-Warning "IIS Proxy $ServerName - No web farms are currently defined."
  return @();
 }

 # determine search values, check if an fqdn might also be possible value to search on
 $searchValues = New-Object System.Collections.Generic.List[string]
 $AppServerNames |% { $searchValues.Add($_); }

 <# You could add a check for Fully Qualified Domain Names along with the supplied values
 $AppServerNames |% {
  $isFqdn = $_ -match "\.your\.domain\.here"
  if($isFqdn -eq $false) {
   $fqdn = $_ + ".your.domain.here"
   try {
    $result = [System.Net.Dns]::GetHostAddresses($fqdn);

    $searchValues.Add($fqdn);
   } catch {}
  }
 }
 #>

 # search for all occurrences in the web farm list
 $found = New-Object System.Collections.Generic.List[PSObject]
 for($i = 0; $i -lt $farms.Count; $i++) {
  $farm = $farms[$i];

  $servers = New-Object System.Collections.Generic.List[System.Xml.XmlElement]
  $serverlist = $farm.server;
  $serverlist |% { $servers.Add($_); }

  for($j = 0; $j -lt $servers.Count; $j++) {
   $server = $servers[$j];

   $searchValues |% {
    if($server.address.ToLower() -eq $_.ToLower()) {
     # http://stackoverflow.com/questions/59819/how-do-i-create-a-custom-type-in-powershell-for-my-scripts-to-use
     $m = new-object PSObject
     $m.PSObject.TypeNames.Insert(0,'WebAdministrationExt.WebFarmAppServerMatch')

     $m | add-member -type NoteProperty -Name WebFarmName -Value $farm.Name
     $m | add-member -type NoteProperty -Name AppServerName -Value $server.Address
     $m | add-member -type NoteProperty -Name Enabled -Value $server.Enabled
     
     $found.Add($m);
    }
   }
  }
 }

 # return the list
 return $found;
}


<#
.SYNOPSIS
 Set the given list of AppServers to be enabled/disabled in all Web Farms on the Proxy Server.

 TODO: This could probably be updated to handle pipeline input

.PARAMETER ServerName
 The name of the proxy server to update. If this parameter is not supplied, the local computer's config
 file will be updated.

.PARAMETER AppServerNames
 The name of the App Servers to set to enabled/disabled.

.PARAMETER Enabled
 Set the server to enabled or disabled.

.EXAMPLE
 $updatedFarms = Set-WebFarmsAppServerEnabled -ServerName "Proxy1" -AppServerNames "App1" -Enabled $false
#>
Function Set-WebFarmsAppServerEnabled {
[CmdletBinding()]
Param (
 [string] $ServerName = $env:COMPUTERNAME,
 [Parameter(Mandatory = $true)]
 [System.Array] $AppServerNames,
 [Parameter(Mandatory = $true)]
 [bool] $Enabled
)

 $farms = Get-WebFarmsByAppServer -ServerName $ServerName -AppServerNames $AppServerNames

 # if no farms were found, then skip this
 if($farms -eq $null) {
  Write-Warning "IIS Proxy $ServerName - No web farms were found which use $AppServerNames. Skipping setting the App Servers to $Enabled."
  return;
 }

 # set the servers to the desired values
 for($i = 0; $i -lt $farms.Count; $i++) {
  $farm = $farms[$i];

  # NOTE: SkipLoadBalancingDelay is set to true because it incurs a 10 second delay for each update. That could
  # be a long time for large updates. The LoadBalancingDelay was introduced to handle web deployments, this
  # function is expected to be used with Windows Server updates (Windows Servers updates will have a delay built into them).
  Set-WebFarmServerEnabled -ServerName $ServerName `
   -WebFarmName $farm.WebFarmName -AppServerName $farm.AppServerName -Enabled $Enabled `
   -SkipLoadBalancingDelay $true
 }

 return $farms;
}

Add / Remove Web Farm Server

on Friday, June 6, 2014

Here’s a couple of PowerShell functions to enable and disable a web farm server on IIS 7+. They work directly with the applicationHost.config file so they aren’t dependent on any particular version of IIS. This also means that they can be used to update remote IIS installations.

Sorry for the misleading title, but it’s better to enable/disable a web farm server rather than add/remove it. When you remove a server from a web farm any requests currently being processed by it will be lost. When disabling the server, the hanging requests will finish processing.

<#
.SYNOPSIS
Checks if an application server is listed in a web farm and, if it is, whether it is enabled.

Used to check if a server is 'enabled' on a web farm.
#>
Function Test-WebFarmServerEnabled {
[CmdletBinding()]
Param(
    [string]$ProxyServerName = $env:COMPUTERNAME,
    [Parameter(Mandatory = $true)]
    [string]$WebFarmName,
    [Parameter(Mandatory = $true)]
    [string]$AppServerName
)
    $configPath = Get-WebConfigPath $ProxyServerName
    $appHost = Read-WebConfig -ConfigPath $configPath

    $webFarm = $appHost.configuration.webFarms.webFarm |? {$_.name -eq $WebFarmName}
    $webFarmServer = $webFarm.server |? {$_.address -eq $AppServerName}

    $enabled = [System.Convert]::ToBoolean($webFarmServer.enabled);
    return $enabled;
}

<#
.SYNOPSIS
Sets an application server in a web farm to either enabled or disabled. Pass -Enabled $true
to enable the server, or -Enabled $false to disable it.

Use when enabling or disabling servers in a web farm. This doesn't actually add or
remove a server to a web farm.

.EXAMPLE
To enable, set the -Enabled parameter to $true

Set-WebFarmServerEnabled -ProxyServerName "WebProxy1" -WebFarmName "WebFarm-Dev" `
    -AppServerName "WebApp1" -Enabled $true

.EXAMPLE
To disable, set the -Enabled parameter to $false

Set-WebFarmServerEnabled -ProxyServerName "WebProxy1" -WebFarmName "WebFarm-Dev" `
    -AppServerName "WebApp1" -Enabled $false
#>
Function Set-WebFarmServerEnabled {
[CmdletBinding()]
Param(
    [string] $ProxyServerName = $env:COMPUTERNAME,
    [Parameter(Mandatory = $true)]
    [string] $WebFarmName,
    [Parameter(Mandatory = $true)]
    [string] $AppServerName,
    [Parameter(Mandatory = $true)]
    [bool] $Enabled,
    [bool] $SkipLoadBalancingDelay = $false
)
    $configPath = Get-WebConfigPath $ProxyServerName
    $appHost = Read-WebConfig -ConfigPath $configPath

    $webFarm = $appHost.configuration.webFarms.webFarm |? {$_.name -eq $WebFarmName}
    $webFarmServer = $webFarm.server |? {$_.address -eq $AppServerName}

    if($Enabled) {
        $value = "true";
    } else {
        $value = "false";
    }
    $webFarmServer.enabled = $value;

    Write-Warning "Updating Proxy $ProxyServerName - Setting webfarm $WebFarmName's server $AppServerName to enabled='$value'"
    Save-WebConfig -WebConfig $appHost -ConfigPath $configPath
    Write-Host "Updated Proxy $ProxyServerName - Setting webfarm $WebFarmName's server $AppServerName to enabled='$value'"

    if($SkipLoadBalancingDelay -eq $false) {
        Write-Warning "Waiting 10 seconds to let the proxy server handle any hanging requests or start load balancing"
        Start-Sleep 10;
    }
}

Get-WebConfigPath, Read-WebConfig, and Save-WebConfig are described in this previous post.

Background

In IIS 7 there were Web Farm PowerShell Cmdlets. There was a separate package for them because in IIS 7 the Web Farm Framework (WFF) was a separate package.

In IIS 8, WFF became integrated into Application Request Routing (ARR), but the PowerShell Cmdlets didn’t get integrated into the WebAdministration module. Nor was there a separate PowerShell module made for ARR.

It also wasn’t possible to use the IIS 7 PowerShell Cmdlets on IIS 8 because they called a particular set of dlls that were installed by IIS 7’s Web Farm Framework. Those dlls were integrated into other packages and aren’t available on IIS 8.

applicationHost.Config Backups and Updates

on Friday, May 30, 2014

IIS’s applicationHost.config is almost never stored in version control. Yet, it’s often updated to add new sites, ARR rules, and special configurations. There are a variety of ways to update the config: IIS Manager, appcmd.exe, PowerShell/WebAdministration, and editing the file by hand.

In general editing the .config file is pretty safe, with a low risk of affecting functionality on one website when updating a different website. But, it’s always nice to

  • have a backup
  • have an audit trail of updates

This PowerShell function can make a quick backup of the applicationHost.config file, with information on who was running the backup, and when it was run.

$global:WebAdministrationExt = @{}

# Used by unit tests. We run the unit tests so often, the backups can really grow in size. Most
# unit tests should turn off the backups by default; but the unit tests which actually test the
# backup functions turn it back on.
$global:WebAdministrationExt.AlwaysBackupHostConfig = $true;

<#
.SYNOPSIS
Saves a backup of an xml config file. The purpose is to ensure a backup gets
made before each update to an applicationHost.config or web.config.

Used by the Save-WebConfig function to ensure a backup gets made.
#>
Function Backup-WebConfig {
[CmdletBinding()]
Param (
    [Parameter(Mandatory = $true)]
    [string]$ConfigPath
)
Process {
    if($global:WebAdministrationExt.AlwaysBackupHostConfig -eq $false) {
        Write-Warning ("Global:WebAdministrationExt.AlwaysBackupHostConfig has been set to false. " +
            "Skipping backup of $ConfigPath");
        return $null;
    }

    if([System.IO.File]::Exists($ConfigPath) -eq $false) {
        throw "Backup-WebConfig: No file to read from, at path $ConfigPath, could be found."
    }

    $fileInfo = (Get-ChildItem $ConfigPath)[0];
    $basePath = Split-Path $fileInfo.FullName;
    $filename = $fileInfo.Name;
    $timestamp = Get-TimeStamp;
    $appendString = "." + $env:UserName + "." + $timestamp;

    $i = 0;
    $backupPath = Join-Path $basePath ($filename + $appendString + ".bak");
    while(Test-Path $backupPath) {
        $i++;
        $backupPath = Join-Path $basePath ($filename + $appendString + "-" + $i + ".bak");
    }

    Write-Warning "Backing up $ConfigPath to $backupPath"
    Copy-Item $ConfigPath $backupPath
    Write-Host "Backed up $ConfigPath to $backupPath"

    return $backupPath
}
}

Now that there is a backup function in PowerShell, we might as well round out the suite with Read and Save functions for applicationHost.config.

In the Read function I’ve chosen to not preserve whitespace. Initially I was preserving whitespace to ensure the file stayed 'readable'. But, once I started adding new xml elements using PowerShell, those new elements were very unreadable and caused weird new-line issues wherever they were added. PowerShell’s xml functionality will preserve comments, and if you set the XmlWriter’s IndentChars property to something better than 2 spaces (like a tab), then the applicationHost.config file will stay very readable while avoiding the new xml element problem.
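The effect of IndentChars can be seen with an in-memory document, no IIS involved:

```powershell
# Build a small XmlDocument and write it out with tab indentation
$xml = [xml]"<configuration><sites><site name='a' /></sites></configuration>"

$settings = New-Object System.Xml.XmlWriterSettings
$settings.Indent = $true
$settings.IndentChars = "`t"

$stringWriter = New-Object System.IO.StringWriter
$writer = [System.Xml.XmlWriter]::Create($stringWriter, $settings)
$xml.Save($writer)
$writer.Close()

$out = $stringWriter.ToString()
$out   # each nested element is indented with a tab
```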


<#
.SYNOPSIS
Saves an xml config file. The purpose is to keep the logic for setting
up formatting to be the same way all the time.

Used to store applicationHost.config file updates.
#>
Function Save-WebConfig {
[CmdletBinding()]
Param (
    [Parameter(Mandatory = $true)]
    [xml]$WebConfig,
    [Parameter(Mandatory = $true)]
    [string]$ConfigPath
)
Process {
    # First backup the current config
    Backup-WebConfig $ConfigPath

    # Set up formatting
    $xwSettings = New-Object System.Xml.XmlWriterSettings;
    $xwSettings.Indent = $true;
    $xwSettings.IndentChars = " "; # could use `t
    $xwSettings.NewLineOnAttributes = $false;

    # Create an XmlWriter and save the modified XML document
    $xmlWriter = [Xml.XmlWriter]::Create($ConfigPath, $xwSettings);
    $WebConfig.Save($xmlWriter);
    $xmlWriter.Close();
}
}

<#
.SYNOPSIS
Load an xml config file. The purpose is to keep the logic for setting
up formatting to be the same way all the time.

Used to load applicationHost.config files.
#>
Function Read-WebConfig {
[CmdletBinding()]
Param (
    [Parameter(Mandatory = $true)]
    [string]$ConfigPath
)
Process {
    [xml]$appHost = New-Object xml
    #$appHost.psbase.PreserveWhitespace = $true
    $appHost.Load($ConfigPath)
    return $appHost;
}
}

There is almost always a reason to have PowerShell enter new xml elements into the applicationHost.config. Both IIS Manager and appcmd.exe have limitations that can only be overcome by hand editing the file or scripting the update in PowerShell (or .NET).

But when creating PowerShell functions to add new elements, it’s always nice to be able to unit test the code before using it on your servers. So, you can make a function which will get the location of an applicationHost.config file. That function can be overridden during a unit test to use a dummy file. This code snippet uses an example of how the unit tests could be used with the serviceAutoStartProviders.


# Used by unit tests. If a value is supplied here, then it will always be returned by Get-WebConfigPath
$global:WebAdministrationExt.ApplicationHostConfigPath = "";

<#
.SYNOPSIS
Contains the logic to get the path to an applicationHost.config file.

Used to allow for a config file path to be overloaded during unit tests.
#>
Function Get-WebConfigPath {
[CmdletBinding()]
Param (
    [Parameter(Mandatory = $true)]
    [string]$ServerName
)
Process {
    if($global:WebAdministrationExt.ApplicationHostConfigPath -ne "") {
        return $global:WebAdministrationExt.ApplicationHostConfigPath;
    }

    $configDir = "\\$ServerName\C$\Windows\System32\inetsrv\config"
    $configPath = "$configDir\applicationHost.config"
    return $configPath;
}
}

# Example usage
$global:WebAdministrationExt.ApplicationHostConfigPath = "C:\Modules\WebAdministrationExt\applicationHost.UnitTest.config";
$global:WebAdministrationExt.AlwaysBackupHostConfig = $false;

$providerName = Get-Random
Set-AutoStartProvider -SiteName "unittest.local.frabikam.com" -AutoStartProvider $providerName;
Get-AutoStartProvider -SiteName "unittest.local.frabikam.com" | Should Be $providerName;

$global:WebAdministrationExt.ApplicationHostConfigPath = "";
$global:WebAdministrationExt.AlwaysBackupHostConfig = $true;

PowerShellGet with Multiple Source Repositories

on Monday, May 26, 2014

The PowerShell Team added the PowerShellGet module in the May 2014 update to the v5.0 Preview. This created an official Microsoft repository for pulling in the latest modules. But, it also allowed for companies and teams to setup their own internal repositories. The PowerShell Team put together a post on how to setup a MyGet repository with just a few keystrokes.

The ability to have a private repository for your team is great. What would make it even better is if you could have multiple repositories that are all searched and used when Finding and Installing modules.

Internally, PowerShellGet uses NuGet to handle package management. Which is a wonderful thing. It’s a great product and has been used by other projects like MyGet and Chocolatey without any problems.

However, there was one little problem. The PowerShell Team didn’t want to have any conflicts with updates or end user configurations with their normal NuGet installations. Because of this, PowerShellGet downloads a separate installation of NuGet.exe and only allows for one repository to be used at a time. That repository is defined by the variable $PSGallerySourceUri. How could it be updated to work more like ‘normal’ NuGet and handle multiple repositories?

With a little bit of updating to the internal PSGallery.psm1 file, you can now get an updated version of PowerShellGet which can handle both a NuGet.Config file and multiple repositories defined within the $PSGallerySourceUri variable.

The module can be found with:

$PSGallerySourceUri = "https://www.myget.org/F/smaglio81-psmodule/api/v2"

I think you should be cautious about using it as I set it up only to ask for the functionality to be added by the PowerShell Team. I don’t really have long term plans of maintaining it.

Source Code: https://github.com/smaglio81/powershellget

Two other things to look at with private repositories:

Using PowerShellGet on Win7

on Friday, May 23, 2014

The Windows Management Framework 5.0 Preview May 2014 contains a new module, PowerShellGet. The management framework has a requirement of at least Windows 8.1. But, the module itself only has a PowerShell requirement of 3.0. So, it can run successfully on Windows 7.

If you have access to Windows Azure, then you have a quick and easy way to get the module without the problem of upgrading to Windows 8.1. Using Windows Azure, you can create a quick Windows 8.1 machine using the Visual Studio 2013 Update 2 image.


Once the machine is up and running, you can connect with Remote Desktop. After installing WMF 5.0, the virtual machine will have the module under C:\Windows\system32\WindowsPowerShell\v1.0\Modules\PowerShellGet. Copy that folder down to your local Windows 7 installation and you should be ready to use it.
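Copying the folder down can be done over an admin share; a sketch (the server name and destination path are illustrative):

```powershell
# Pull the module folder from the Windows 8.1 VM to the local machine
$source = "\\YourAzureVM\C$\Windows\System32\WindowsPowerShell\v1.0\Modules\PowerShellGet"
$dest = "$env:ProgramFiles\WindowsPowerShell\Modules\PowerShellGet"
Copy-Item $source $dest -Recurse
```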


Preload a WCF svc before adding to a WebFarm

on Sunday, May 18, 2014

WCF services hosted in IIS have a first request processing overhead. This is usually just a couple hundred milliseconds, but that can be enough to cause dangerous lag spikes and failed requests when the service is being added to a web farm already under load.

It would seem like using WCF’s serviceAutoStartProvider functionality would prevent the ‘first request’ overhead, but it only lessens it. To truly prevent the overhead from occurring you need to have a request fully processed by the service. Application Initialization is an option, but I haven’t used it in conjunction with a serviceAutoStartProvider.

Instead, I used PowerShell to create a function which would send a request to the service and verify the response. If the response was incorrect or in error, then an exception would be thrown by the function. This has the added benefits of:

  • It can be reused by Operations to check the health and status of production systems
  • It can be used by Developers and Operations to inspect and test individual application servers when issues are noticed
  • It can be used to test the health of a system before being added back into a web farm, in order to ensure that a bad deployment isn’t put into production

To do this I used the built in PowerShell function New-WebServiceProxy. In a script similar to this one:

Function Test-BrokerService {
Param(
    [Parameter(Mandatory=$true)]
    [string]$Environment,
    [Parameter(Mandatory=$true)]
    [string]$BrokerName,
    [string]$DnsName = ""
)
Process {
    # Get environment info
    $envInfo = Get-BrokerEnvironmentVariables $Environment;

    # Setup dnsName
    if($DnsName -eq "") { $DnsName = $envInfo.dnsName; }

    $uri = "https://" + $DnsName + "/broker.svc";
    $proxy = New-WebServiceProxy -Uri $uri

    Write-Warning "Testing $BrokerName on $uri ..."
    # note: the switch is on the lowercased name, so the labels must be lowercase
    switch($BrokerName.ToLower()) {
        "firstbroker" {
            # ...
        }
        default {
            throw ("Broker " + $BrokerName + " is unknown. No test could be performed on " + $uri + ".");
        }
    }

    Write-Host "Successfully tested $BrokerName on $uri"
    $proxy.Dispose();

    return $true;
}
}

I then setup a test system which had:

  • A 2008 R2 proxy server using WFF/ARR/UrlRewrite (Proxy)
  • Two 2008 R2 application servers to host the WCF services (AppServer1 & 2)

The load test uses:

  • 20 concurrent users
  • 0 delay between requests
  • 2 minute run time
  • A deployment script which
    • Takes AppServer1 out of the farm
    • Does a code deployment
    • Places AppServer1 back in the farm, without running an initial request
    • Repeats the actions for AppServer2


The small spikes that occur at 1:05 and 1:25 are when AppServer 1 & 2 are added back into the farm. In this particular test no timeouts occurred from it; but it has happened in previous tests.

After changing the deployment script to run an initial request before the server is added back into the web farm, the response timings smoothed out.
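The adjusted deployment steps look roughly like this (function names come from this post and the earlier web farm posts; the warm-up call is the key addition):

```powershell
# Take the app server out of the farm before deploying
Set-WebFarmServerEnabled -ProxyServerName "Proxy" -WebFarmName "WebFarm-Dev" `
    -AppServerName "AppServer1" -Enabled $false

# ... deploy code to AppServer1 ...

# Warm up the service with a real request before re-enabling the server
Test-BrokerService -Environment "Dev" -BrokerName "firstbroker" `
    -DnsName "appserver1.your.domain.here"

Set-WebFarmServerEnabled -ProxyServerName "Proxy" -WebFarmName "WebFarm-Dev" `
    -AppServerName "AppServer1" -Enabled $true
```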

image
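The initial request itself can be very simple: hit the service endpoint once before re-adding the server. A minimal sketch, assuming the application server is reachable directly by name (the URI is illustrative):

```powershell
# Warm up AppServer1 before placing it back in the farm; the URI is a placeholder.
$warmupUri = "https://AppServer1/broker.svc"
try {
    # Any successful request forces IIS to spin up the application.
    Invoke-WebRequest -Uri $warmupUri -UseBasicParsing | Out-Null
    Write-Host "Warm-up request to $warmupUri succeeded"
} catch {
    Write-Warning "Warm-up request to $warmupUri failed: $_"
}
```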

Deploying PowerShell Modules

on Wednesday, May 14, 2014

On MSDN there is an article on how to install PowerShell modules onto systems. One of the subsections is about Installing Multiple Versions of a Module. The basic idea is to continue using PowerShell's autoload feature, while using a version number within the PowerShell module manifest (.psd1) file as a selector.

This creates a repeated hierarchy of directories, each hierarchy starting with a versioned folder name. Their example diagram is:

C:\Program Files
  Fabrikam Manager
    Fabrikam8
      Fabrikam
        Fabrikam.psd1 (module manifest: ModuleVersion = "8.0")
        Fabrikam.dll (module assembly)
    Fabrikam9
      Fabrikam
        Fabrikam.psd1 (module manifest: ModuleVersion = "9.0")
        Fabrikam.dll (module assembly)

Followed by adding each of the versioned directories to the environment variable PSModulePath. Their example is:

$p = [Environment]::GetEnvironmentVariable("PSModulePath")
$p += ";C:\Program Files\Fabrikam\Fabrikam8;C:\Program Files\Fabrikam\Fabrikam9"
[Environment]::SetEnvironmentVariable("PSModulePath",$p)
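One caveat worth noting: without a scope argument, SetEnvironmentVariable only changes the current process. A sketch that persists the change machine-wide instead (requires an elevated prompt; the paths follow the MSDN example):

```powershell
# Read and update PSModulePath at machine scope so the change survives reboots.
$p = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
$p += ";C:\Program Files\Fabrikam\Fabrikam8;C:\Program Files\Fabrikam\Fabrikam9"
[Environment]::SetEnvironmentVariable("PSModulePath", $p, "Machine")
```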

Environment Variable Length Problems

Unfortunately, when extending the value of an environment variable you also run the risk of hitting the 8191 character limit, which can happen pretty quickly, especially with frequent deployments.

One solution is to tightly control the rules by which a Module will be updated, versioned, and deployed. But this removes a lot of flexibility and usually isn't ideal. A great thing about PowerShell is how quickly the language can be used to get tasks done; limiting its ability to be fluid and updateable doesn't seem to fit with that design.

Potentially, the environment variable character limit could be alleviated by using the nested environment variable trick. (Defining the PS environment variables so they sort alphabetically before the PSModulePath variable seems to be the best approach.) An example might be (written in shorthand):

%PS1_F8% = C:\Program Files\Fabrikam Manager\Fabrikam8
%PS1_F9% = C:\Program Files\Fabrikam Manager\Fabrikam9
%PS1_F% = %PS1_F8%;%PS1_F9%
%PSModulePath% = %PSModulePath%;%PS1_F%
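One wrinkle with the nested trick: [Environment]::SetEnvironmentVariable stores plain REG_SZ values, which don't expand %...% references. A sketch that writes the nested variables through the registry as REG_EXPAND_SZ instead (names and paths are illustrative; requires elevation):

```powershell
# Machine-level environment variables live under this registry key.
$envKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment'

Set-ItemProperty $envKey -Name 'PS1_F8' -Value 'C:\Program Files\Fabrikam Manager\Fabrikam8'
Set-ItemProperty $envKey -Name 'PS1_F9' -Value 'C:\Program Files\Fabrikam Manager\Fabrikam9'

# ExpandString (REG_EXPAND_SZ) is what makes the %PS1_F8% references resolve.
New-ItemProperty $envKey -Name 'PS1_F' -Value '%PS1_F8%;%PS1_F9%' `
    -PropertyType ExpandString -Force | Out-Null
```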

Deploying Multiple Versions

All of these variables can be difficult to set up by hand. It would probably be best to have guidelines or standards at your company for Module development. Some good ones up front would be:

  • Determine a source control system for module development
  • Agree to the usage of a .psd1 file for each module
    • And, agree that a version number must be included; if the module is updated, the version needs to be updated as well. (In C# development a build number is often attached to a specific build of a dll, but with PowerShell you probably don't want the version number to contain the build number. More below.)
  • Determine the default module installation location to deploy to on target machines.
  • Have the deployment system inspect the .psd1 file for version information and construct the deployment path on target machines.
    • (Continuation on version number, from above:) If a machine already has a path with a matching version number, then that machine shouldn’t be updated. This is what requires development on the modules to always go hand-in-hand with updating the version number.
  • Have the deployment system update the machine’s environment variables with the deployments.
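The version-inspection step might look something like this sketch. Import-PowerShellDataFile requires a newer PowerShell version (Test-ModuleManifest is an older alternative), and all paths are illustrative:

```powershell
# Read the version from the manifest and build the versioned deploy path.
$manifest = Import-PowerShellDataFile 'C:\build\Fabrikam\Fabrikam.psd1'
$version  = $manifest.ModuleVersion

$target = "C:\Program Files\Fabrikam Manager\Fabrikam$version\Fabrikam"

# Skip machines that already have this version; module development must
# always go hand-in-hand with bumping the version number.
if (-not (Test-Path $target)) {
    Copy-Item 'C:\build\Fabrikam' $target -Recurse
}
```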

Pushing the Problem Down The Road

A solution like this doesn’t seem to fix the real problem: The need to add multiple versions of a module to machines without conflict and making them easily discoverable.

The recommendations from the PowerShell team are a good solution, but it looks like there is always room for improvement.

 

Refactoring PowerShell Modules into Scripts

on Friday, May 9, 2014

When writing a module, the code can pile up pretty quickly. And once you get to a certain point it becomes unwieldy to find function definitions and to “Go to definition” of a method, especially once you add in Doc Comments.

This can be helped slightly by the PowerShell Tools for Visual Studio. When developing in Visual Studio there is a dropdown of all function names within a file, and they are listed alphabetically. There is also an add-on for the PowerShell ISE, FunctionExplorer, but it is pretty unstable.

One thing that C# developers have done for a while is break large files out into multiple smaller files, grouping the contents of each file by a specific area. This can also ‘kinda’ be done in PowerShell, with the help of this trick to get the current script directory.

$root = Split-Path $MyInvocation.MyCommand.Path -Parent

With that line at the top of the module file (.psm1), you can then start adding normal script files (.ps1) to the module definition. This allows you to break a single module apart into multiple script files.

For example, you could have a Car module:

image

image

And, it can be refactored by the different groupings of functions. Like wheels, doors, or engine.

image

These files can all be loaded by the main Car.psm1 module, using the $root pathing. This will ensure that no matter where the module is imported from, the files that are next to it in the directory get loaded.
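Put together, the Car.psm1 file becomes little more than a loader. A sketch (the script file names are illustrative):

```powershell
# Car.psm1 -- dot-source each grouping of functions from the module's own folder,
# so the files load correctly no matter where the module is imported from.
$root = Split-Path $MyInvocation.MyCommand.Path -Parent

. (Join-Path $root 'Wheels.ps1')
. (Join-Path $root 'Doors.ps1')
. (Join-Path $root 'Engine.ps1')

Export-ModuleMember -Function *
```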

image

This also helps separate out unit tests into smaller test groups, making it easier to debug certain sections of a module. (You may notice that each Test script has the $root variable defined at the top. This ensures the variable is available when the script is run on its own.)

image

image

Or debug the entire module at once.

image

PowerShell Tools for Visual Studio

on Friday, May 2, 2014

Last week I tried out PowerShell Tools for Visual Studio by Adam Driscoll. I liked the features a lot, but had some difficulty getting used to not having the instant feedback from a command window.

Good news on the Command Window: it looks like Mr. Driscoll is implementing that feature right now. He added a REPL window to the GitHub repository just a few days ago. (I was also surprised to see that the code is Copyright by Microsoft. I thought Mr. Driscoll worked for DELL/PowerGUI?)

This week I had the opportunity to try PowerShell with TFS 2013 source control. This brought me right back to Visual Studio to handle the check-ins and check-outs. So I spent a lot of this week with PowerShell in VS2013.

Since I normally work in VS2013, continuing to use it as my primary IDE felt very natural. Some features that struck me were:

  • PowerShell projects in the Solution Explorer
    • Having all your files under one easy to view place, where you can quickly pull them up to edit is very useful.
  • Having keyboard shortcuts that I’m used to
    • The biggest one of these was multi-line comment and uncomment. I am very used to Ctrl+K,Ctrl+C to comment and its twin to uncomment, and when I was in the ISE I found myself constantly trying to use that command.
  • Built In Source Control Integration
  • IntelliSense can be Very Unresponsive
    • It can be a 2~5 second delay to show the IntelliSense menu, and that delay is on the main UI thread. So, VS2013 can become unresponsive during that time period.
    • It’s actually gotten so bad, that I fear typing $.
  • Debugging was missing in-depth variable inspection
    • The ability to expand a variable, especially XmlDocument, wasn’t available and very missed.
    • Since the Command Window/REPL isn’t implemented yet, there was no way to dig into a variable’s inner values with a command line.

Using PowerShell ISE like a Browser

I’ve done a fair amount of web development and have become comfortable with using 2 applications to code in.

  • Visual Studio: Used to write the code
  • Web Browser: Used to execute and somewhat debug the code

So, I found it really comfortable to use the PowerShell ISE along with VS2013:

  • Visual Studio: Used to write the Module code and handle TFS check-ins.
  • PowerShell ISE: Used to write Scripts, Unit Tests, and Debug.

This felt very familiar to me and helped differentiate what type of code I was writing. It also helped me figure out how much time I should be spending on a particular type of code. If I was writing code for a Module, then I could spend extra effort to write unit tests and ensure its stability. If I was writing a Script (which a unit test kinda is), then the goal was to get the job done.

image

Potential Issues

Visual Studio 2012 & “Server Workspaces”

Some of our team members have set up their local workspaces using Visual Studio 2012. It looks like VS2012 doesn’t have the ability to handle “Local workspaces” (at least I got some error pop-ups [no screenshots, sorry]). This could mean that all the files pulled down in a workspace created with VS2012 will be in a “Server workspace”, and in Server workspaces all the files are Read Only when checked in. I would imagine that would create a lot of consternation when trying to pull up a file in PowerShell ISE to make a quick edit.

VS2013 has a default workspace type of “Local”, which doesn’t use the Read Only flag on files.

PowerShell Tools VSIX & Multiple Domain Accounts

Some of our team members have multiple domain accounts. The second account is for doing SysAdmin work, like updating Production servers.

It looks like the PowerShell Tools for Visual Studio VSIX installs into an individual user folder. So, if you have two domain accounts you will need to install it under both accounts; it installs easily and works just fine for each.

PowerShell ISE Add-Ons

I also tried a few Add-Ons for PowerShell ISE, but every one that I tried just made the ISE unstable and prone to crashing. So, I removed them from my ISE. If you’re interested here is a list of some interesting ones:

(A lot of these you have to hand edit your PowerShell profile to get to load in the ISE)

image

VariableExplorer: Displays the full variable list currently available in the runtime.
FunctionExplorer: Displays a list of all function definitions in a file (very useful!)
CommentSelectedLines: Crazy unstable! But when it works, it’s fantastic. You can set up which keyboard shortcuts you want to bind by editing the PowerShell .ps1. It also adds the ability to save your ISE state when you close it down.
Script Browser: I actually didn’t find this useful, but it’s worth noting because it’s made by Microsoft and it was stable.

 

PSGet

I just wanted to remind myself that PSGet has a good sized directory of useful and up-to-date modules.

Powershell Unit Testing

on Friday, April 25, 2014

Unit testing is always good practice, but with newer languages there often aren’t tools available to provide that functionality. So it’s always awesome when someone uses their time to make the tool. Jon Wagner did an awesome job and put together:

PSMock/PShould/PSate - a collection of PowerShell modules that set up unit testing. He makes them really easy to install, with a wiki article describing the steps. I think I was set up and (poorly) writing unit tests in about 15 minutes.

image

For installation, I went the PSGet route. I used PSGet because I like package managers; they take a lot of the pain out of updating. And, PSGet can itself be installed through a package manager (PSGet from Chocolatey).

On a side note, the use of Chocolatey is great because the Windows Server team recently announced that they would use Chocolatey as the primary repository for sharing installs through the OneGet PowerShell module.

I wonder if there’s an easy way to run the unit tests in TFS, and use the results for gated checkins?

 

I also poked around at the work of Adam Driscoll, mostly his PowerShell Tools for Visual Studio. It’s pretty interesting because you get the full Visual Studio debugging experience, which reveals a lot of the environment variables you may not be aware of. I certainly wasn’t.

image

It adds in PowerShell projects as a first class citizen and gives you an alphabetically ordered drop down of all the functions in a module. I like the navigational drop down feature so much that I would love to see it in the PowerShell ISE.

image

image

It’s a really nice VS plugin and I use it when debugging difficult problems where I want to see the state of a lot of variables at the same time.

However, for most development I continue to use the PowerShell ISE. The ability to run scripts instantly and fluidly switch between the command window and script window is something that I missed when using Visual Studio.


Creative Commons License
This site uses Alex Gorbatchev's SyntaxHighlighter, and hosted by herdingcode.com's Jon Galloway.