Use PowerShell to Process Dump an IIS w3wp Process

on Monday, August 27, 2018

Sometimes processes go wild and you would like to collect information about them before killing or restarting them. The collection process generally runs down this list:

  • Your custom-made logging
  • Open source logging: ELMAH, log4net, etc.
  • Built-in logging on the platform (like AppInsights)
  • Event Viewer logs
  • Log aggregators: Splunk, New Relic, etc.
  • and, almost always last on the list, a process dump

Process dumps are old enough that they are very well documented, but obscure enough that very few people know how or when to use them. I certainly don’t! But, when you’re really confused about why an issue is occurring a process dump may be the only way to really figure out what was going on inside of a system.

Unfortunately, they are so rarely used that it’s often difficult to re-learn how to get a process dump when an actual problem is occurring. Windows tried to make things easier by adding a Create dump file option to the Task Manager.

[Image: Task Manager’s “Create dump file” option]

But, logging onto a server to debug a problem is becoming a less frequent occurrence. With cloud systems, the first debugging technique is to just delete the VM/Container/App Service and create a new instance. And on-premises web farms are often interacted with through scripting commands.

So here’s another one: New-WebProcDump

This command will take in a ServerName and Url and attempt to take a process dump and put it in a shared location. It does require a number of prerequisites to work:

  • The PowerShell command must be in a folder with a subfolder named Resources that contains procdump.exe.
  • Your web servers are using IIS and ASP.NET Full Framework
  • The computer running the command has a D drive
    • The D drive has a Temp folder (D:\Temp)
  • Remote computers (i.e. Web Servers) have a C:\IT\Temp folder.
  • You have PowerShell Remoting (i.e. winrm quickconfig -force) turned on for all the computers in your domain/network.
  • The application pools on the Web Server must have names that match up with the url of the site. For example https://unittest.some.company.com should have an application pool of unittest.some.company.com. A second example would be https://unittest.some.company.com/subsitea/ should have an application pool of unittest.some.company.com_subsitea.
  • Probably a bunch more that I’m forgetting.
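To make the url-to-app-pool naming convention concrete, here’s a minimal sketch of the transformation (a simplified version of the ConvertTo-UrlBasedAppPoolName function shown further down; the function name in this sketch is hypothetical):

```powershell
# Simplified sketch of the url -> app pool name convention (hypothetical helper name).
function ConvertTo-AppPoolNameSketch([string] $Url) {
    $hostPath = $Url -replace '^https?://', ''   # strip the scheme
    $hostPath = $hostPath -replace '/', '_'      # path separators become underscores
    $hostPath = $hostPath.TrimEnd('_')           # a trailing slash shouldn't leave a trailing underscore
    return $hostPath.ToLower()
}

ConvertTo-AppPoolNameSketch "https://unittest.some.company.com/subsitea/"
# -> unittest.some.company.com_subsitea
```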

So, here are the scripts that make it work:

  • WebAdmin.New-WebProcDump.ps1

    Takes a procdump of the w3wp process associated with a given url (either locally or remote). Transfers the process dump to a communal shared location for retrieval.
  • WebAdmin.Test-WebAppExists.ps1

Checks if an application pool exists on a remote server.
  • WebAdmin.Test-IsLocalComputerName.ps1

    Tests if the command will need to run locally or remotely.
  • WebAdmin.ConvertTo-UrlBasedAppPoolName.ps1

    The name kind of covers it. For example https://unittest.some.company.com should have an application pool of unittest.some.company.com. A second example would be https://unittest.some.company.com/subsitea/ should have an application pool of unittest.some.company.com_subsitea.


if($global:WebAdmin -eq $null) {
    $global:WebAdmin = @{}
}
# http://stackoverflow.com/questions/1183183/path-of-currently-executing-powershell-script
$root = Split-Path $MyInvocation.MyCommand.Path -Parent;
$global:WebAdmin.ProcDumpLocalPath = "$root\Resources\procdump.exe"
<#
.SYNOPSIS
Uses sysinternals procdump to get a proc dump of a w3wp service on a webserver. The file will
be transferred to a shared location for distribution.
.PARAMETER ServerName
The server to pull a proc dump from.
.PARAMETER Url
The url of the website to get a proc dump from
.EXAMPLE
New-WebProcDump -ServerName SA177 -Url my.dev.sa.ucsb.edu/aaa
#>
Function New-WebProcDump {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$false)]
        [string] $ServerName = $env:COMPUTERNAME,
        [Parameter(Mandatory=$true)]
        [string] $Url
    )

    # setup variables
    $appPoolName = ConvertTo-UrlBasedAppPoolName -Url $Url
    $isLocalMachine = Test-IsLocalComputerName -ComputerName $ServerName

    if((Test-WebAppExists -ServerName $ServerName -Url $Url) -eq $false) {
        throw "IIS $env:COMPUTERNAME - No webapp could be found for url $Url on $ServerName"
    }

    # ensure procdump exists locally before trying to use (or copy) it
    if((Test-Path $global:WebAdmin.ProcDumpLocalPath) -eq $false) {
        throw "IIS $env:COMPUTERNAME - Cannot find local copy of procdump.exe in the WebAdmin module ($($global:WebAdmin.ProcDumpLocalPath)). Ensure it exists before running again."
    }
    if($isLocalMachine) {
        # gonna run procdump locally so the local procdump in the module will be used.
    } else {
        # gonna run this on a remote server, so ensure that procdump is on the server
        $utilRemotePath = "\\{0}\C$\IT\Utilities" -f $ServerName
        if((Test-Path $utilRemotePath) -eq $false) {
            New-Item -Path $utilRemotePath -ItemType Directory | Out-Null
        }
        $procdumpRemotePath = "$utilRemotePath\procdump.exe"
        if((Test-Path $procdumpRemotePath) -eq $false) {
            Copy-Item -Path $global:WebAdmin.ProcDumpLocalPath -Destination $utilRemotePath | Out-Null
        }
    }

    # get the process info from the remote server
    $processScript = {
        if($appPoolName -eq $null) {
            $appPoolName = $args[0]
        }
        if(-not (Get-Module WebAdministration)) {
            Import-Module WebAdministration
        }
        $processes = dir "IIS:\AppPools\$appPoolName\WorkerProcesses"
        return $processes
    }
    $params = @($appPoolName)
    if($isLocalMachine) {
        $w = . $processScript
    } else {
        $w = Invoke-Command -ComputerName $ServerName -ScriptBlock $processScript -ArgumentList $params
    }
    if($w -eq $null) {
        throw "IIS $env:COMPUTERNAME - No process for appPool $appPoolName on $ServerName could be found."
    }
    if(@($w).Count -gt 1) {
        throw "IIS $env:COMPUTERNAME - Multiple processes for appPool $appPoolName on $ServerName were found. This is weird, contact an administrator. Process Count: $(@($w).Count)"
    }

    # run the dump remotely
    $dumpScript = {
        if($processId -eq $null) {
            $processId = $args[0]
        }
        if($procdump -eq $null) {
            $procdump = "C:\IT\Utilities\procdump.exe"
        }
        cd "C:\Users\$($env:USERNAME)\AppData\Local\Temp"
        $out = . $procdump -ma -accepteula $processId
        $line = $out |? { $_ -match "Dump 1 initiated" }
        $ix = $line.IndexOf("ed: ")
        $path = $line.Substring($ix + 4)
        return $path
    }
    $processId = $w.processId
    $procdump = $global:WebAdmin.ProcDumpLocalPath
    if($isLocalMachine) {
        $path = . $dumpScript
    } else {
        $path = Invoke-Command -ComputerName $ServerName -ScriptBlock $dumpScript -ArgumentList $processId
    }

    # copy dump to local storage
    $sharepath = ""
    if([string]::IsNullOrWhiteSpace($path) -eq $false) {
        $nwPath = $path -replace "C:\\", "\\$ServerName\C$\"
        if(Test-Path $nwPath) {
            $parent = Split-Path $nwPath -Parent
            $leaf = Split-Path $nwPath -Leaf
            $null = . robocopy "$parent" "D:\Temp\" /r:1 /w:1 $leaf
            $locpath = "D:\Temp\$leaf"
            $curnttime = [DateTime]::Now.ToString("yyyyMMddHHmm")
            $newfilename = "$ServerName-$appPoolName-w3wp-$curnttime.dmp"
            $newpath = "D:\Temp\$newfilename"
            mv $locpath $newpath
            del $nwPath -Force -ErrorAction SilentlyContinue
            $sharepath = "\\$($env:COMPUTERNAME)\d\temp\$newfilename"
        }
    }
    return $sharepath
}
<#
.SYNOPSIS
Tests if a web application exists (this can be a site or application).
This was only written to keep naming conventions consistent. This is the same as
Test-Path IIS:\Sites\$SiteName;
.PARAMETER ServerName
The server to check.
.PARAMETER Url
The url to match on
.EXAMPLE
Test-WebAppExists -ServerName SA177 -Url "http://unittest.dev.place.something.com/services"
#>
Function Test-WebAppExists {
    Param (
        [string] $ServerName = $env:COMPUTERNAME,
        [Parameter(Mandatory = $true)]
        [string] $Url
    )
    # setup variables
    $appPoolName = ConvertTo-UrlBasedAppPoolName -Url $Url
    $scriptBlock = {
        if($appPoolName -eq $null) {
            $appPoolName = $args[0]
        }
        Import-Module WebAdministration
        $pathToTest = "IIS:\AppPools\{0}" -f $appPoolName
        if((Test-Path $pathToTest) -eq $false) { return $false }
        $app = Get-Item $pathToTest
        $exists = $true
        if($app -eq $null) { $exists = $false }
        if($app.GetType().Fullname -ne "Microsoft.IIs.PowerShell.Framework.ConfigurationElement") { $exists = $false }
        return $exists;
    } # end scriptblock
    $parameters = @($appPoolName)
    if(Test-IsLocalComputerName -ComputerName $ServerName) {
        $exists = . $scriptBlock
    } else {
        $exists = Invoke-Command -ComputerName $ServerName -ScriptBlock $scriptBlock -ArgumentList $parameters
    }
    return $exists;
}
<#
.SYNOPSIS
Checks if the given ComputerName is for the local computer
.PARAMETER ComputerName
The name of a computer to check.
.EXAMPLE
$session = New-PSSession .
$computerName = $session.ComputerName
if(Test-IsLocalComputerName $computerName) { ... }
#>
Function Test-IsLocalComputerName {
    [CmdletBinding()]
    [OutputType([bool])]
    Param (
        [Parameter(Mandatory = $true)]
        [string] $ComputerName
    )
    <# DEBUGGING
    $callStack = Get-PSCallStack
    if ($callStack.Count -gt 0) {
        Write-Host ("$($env:COMPUTERNAME) - Test-IsLocalComputerName - Parent function: {0}" -f $callStack[1].FunctionName)
    }
    Write-Host "$($env:COMPUTERNAME) - Test-IsLocalComputerName - ComputerName = $ComputerName"
    #>
    if($ComputerName -eq "localhost") { return $true; }
    if($ComputerName -eq $env:COMPUTERNAME) { return $true; }
    $address = [System.Net.Dns]::GetHostAddresses($ComputerName).IPAddressToString
    if($address.StartsWith("127.")) { return $true; }
    $addressesOnThisMachine = [System.Net.Dns]::GetHostAddresses($env:COMPUTERNAME).IPAddressToString
    if($addressesOnThisMachine -contains $address) { return $true; }
    #Write-Host "$($env:COMPUTERNAME) - Test-IsLocalComputerName - Result = $false"
    return $false;
}
<#
.SYNOPSIS
Enforces the formatting standards for application pool names.
This should be used to figure out the application pool name before creating a
new one. The name is also used to create unique ARR rule names.
.PARAMETER Url
The url to parse and convert to our standardized app pool name.
.EXAMPLE
$appPoolName = ConvertTo-UrlBasedAppPoolName -Url "http://unittest.dev.place.something.com"
#>
Function ConvertTo-UrlBasedAppPoolName {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $true)]
        [string] $Url
    )
    $parseUrl = $Url
    $m = "(https?://)?(.*)"
    if($parseUrl -match $m) {
        $hostPath = $Matches[2]
    }
    $hostPath = $hostPath.Replace("/","_")
    # this prevents http://aaa.sa.ucsb.edu/ from becoming aaa.sa.ucsb.edu_
    $pathLen = $hostPath.Length;
    if($pathLen -gt 0) {
        if($hostPath[$pathLen - 1] -eq "_") {
            $hostPath = $hostPath.Substring(0, $pathLen - 1);
        }
    }
    return $hostPath.ToLower()
}

Apigee REST Management API with MFA

on Monday, August 20, 2018

Not too long ago Apigee updated their documentation to show that Basic Authentication was going to be deprecated on their Management API. This wasn’t really a big deal and it isn’t very difficult to implement an OAuth 2.0 machine-to-machine (grant_type=password) authentication system. Apigee has documentation on how to use their updated version of curl (ie. acurl) to make the calls. But, if you read through a generic explanation of using OAuth it’s pretty straightforward.

But, what about using MFA One Time Password (OTP) tokens with OAuth authentication? Apigee supports the usage of Google Authenticator to do OTP tokens when signing in through the portal. And … much to my surprise … they also support OTP tokens in their Management API OAuth login. They call the parameter mfa_token.
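Concretely, the login request just gains one extra form field. Here’s a sketch of the password-grant call with an MFA token (the endpoint and Basic Authorization header are Apigee’s published OAuth values for the Management API; the username, password, and OTP below are placeholders):

```powershell
# Sketch of the OAuth password-grant call with an mfa_token.
# The credentials below are placeholders; the OTP is the current 6-digit code.
$body = @{
    username   = "bot-account@example.com"   # placeholder
    password   = "from-a-secret-store"       # placeholder
    mfa_token  = "123456"                    # current TOTP value
    grant_type = "password"
}
$headers = @{
    Accept        = "application/json;charset=utf-8"
    Authorization = "Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0"  # Apigee's published client id/secret
}
$response = Invoke-RestMethod -Uri "https://login.apigee.com/oauth/token" -Method Post `
    -Headers $headers -ContentType "application/x-www-form-urlencoded" -Body $body

# $response.access_token is then sent as "Authorization: Bearer ..." on Management API calls.
```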

This will sound crazy, but we wanted to setup MFA on an account that is used by a bot/script. Since the bot is only run from a secure location, and the username/password are already securely stored outside of the bot there is really no reason to add MFA to the account login process. It already meets all the criteria for being securely managed. But, on the other hand, why not see if it’s possible?

The only thing left that needed to be figured out was how to generate the One Time Password used by the mfa_token parameter. And, the internet had already done that! (Thank You James Nelson!) All that was left to do was find the Shared Secret Key that the OTP function needed.

Luckily I work with someone knowledgeable on the subject and they pointed out not only that the OTP algorithm that Google Authenticator uses is available on the internet but that Apigee MFA sign-up screen had the Shared Secret Key available on the page. (Thank You Kevin Wu!)

When setting up Google Authenticator in Apigee, click on the Unable to Scan Barcode? link

[Image: Google Authenticator setup screen with the “Unable to Scan Barcode?” link]

Which reveals the OTP Shared Secret:

[Image: the OTP Shared Secret revealed on the setup screen]

From there, you just need a little PowerShell to tie it all together:

# This file shouldn't be run on its own. It should be loaded using the Apigee Module.
<#
.SYNOPSIS
Implementation of the Time-based One-time Password Algorithm used by Google Authenticator.
.DESCRIPTION
As described in http://tools.ietf.org/id/draft-mraihi-totp-timebased-06.html, the script generates a one-time password based on a shared secret key and time value.
This script generates output identical to that of the Google Authenticator application, but is NOT INTENDED FOR PRODUCTION USE as no effort has been made to code securely or protect the key. For demonstration-use only.
Script code is essentially a translation of a javascript implementation found at http://jsfiddle.net/russau/uRCTk/
Output is a PSObject that includes the generated OTP, the values of intermediate calculations, and a URL leading to a QR code that can be used to generate a corresponding OTP in Google Authenticator applications.
The generated QR code contains a URL that takes the format "otpauth://totp/<email_address_here>?secret=<secret_here>", for example: otpauth://totp/tester@test.com?secret=JBSWY3DPEHPK3PXP
The generated OTP is (obviously) time-based, so this script's output will only match Google Authenticator output if the clocks on both systems are (nearly) in sync.
The acceptable alphabet of a base32 string is ABCDEFGHIJKLMNOPQRSTUVWXYZ234567.
Virtually no param checking is done in this script. Caveat Emptor.
.PARAMETER SharedSecret
A random, base32 string shared by both the challenge and response side of the authenticating pair. This script mandates a string length of 16.
.PARAMETER Email
The email address of the account; only used to build the QR code URL.
.EXAMPLE
Get-ApigeeOTP -SharedSecret "JBSWY3DPEHPK3PXP" -Email "tester@test.com" | Select SharedSecret, Key, Time, HMAC, URL, OTP
.NOTES
FileName: Get-OTP.ps1
Author: Jim Nelson nelsondev1
#>
Function Get-ApigeeOTP {
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [ValidateLength(16,16)]
        [string] $SharedSecret,
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [string] $Email
    )

    # Converts the supplied Int64 value to hexadecimal.
    function Convert-DecimalToHex($in)
    {
        return ([String]("{0:x}" -f [Int64]$in)).ToUpper()
    }

    # Converts the supplied hexadecimal value to Int64.
    function Convert-HexToDecimal($in)
    {
        return [Convert]::ToInt64($in,16)
    }

    # Converts the supplied hexadecimal string to a byte array.
    function Convert-HexStringToByteArray($String)
    {
        return $String -split '([A-F0-9]{2})' | foreach-object { if ($_) {[System.Convert]::ToByte($_,16)} }
    }

    # Converts the supplied base32 string to a hexadecimal string
    function Convert-Base32ToHex([String]$base32)
    {
        $base32 = $base32.ToUpper()
        $base32chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
        $bits = ""
        $hex = ""
        # convert char-by-char of input into 5-bit chunks of binary
        foreach ($char in $base32.ToCharArray())
        {
            $tmp = $base32chars.IndexOf($char)
            $bits = $bits + (([Convert]::ToString($tmp,2))).PadLeft(5,"0")
        }
        # leftpad bits with 0 until length is a multiple of 4
        while ($bits.Length % 4 -ne 0)
        {
            $bits = "0" + $bits
        }
        # convert binary chunks of 4 into hex
        for (($tmp = $bits.Length - 4); $tmp -ge 0; $tmp = $tmp - 4)
        {
            $chunk = $bits.Substring($tmp, 4);
            $dec = [Convert]::ToInt32($chunk,2)
            $h = Convert-DecimalToHex $dec
            $hex = $h + $hex
        }
        return $hex
    }

    # Get the current Unix epoch (div 30) in hex, left-padded with 0 to 16 chars
    function Get-EpochHex()
    {
        # this line from http://shafiqissani.wordpress.com/2010/09/30/how-to-get-the-current-epoch-time-unix-timestamp/
        $unixEpoch = ([DateTime]::Now.ToUniversalTime().Ticks - 621355968000000000) / 10000000
        $h = Convert-DecimalToHex ([Math]::Floor($unixEpoch / 30))
        return $h.PadLeft(16,"0")
    }

    # Get the HMAC-SHA1 signature for the supplied key and time values.
    function Get-HMAC($key, $time)
    {
        $hashAlgorithm = New-Object System.Security.Cryptography.HMACSHA1
        $hashAlgorithm.key = Convert-HexStringToByteArray $key
        $signature = $hashAlgorithm.ComputeHash((Convert-HexStringToByteArray $time))
        $result = [string]::join("", ($signature | % {([int]$_).toString('x2')}))
        $result = $result.ToUpper()
        return $result
    }

    # Get the OTP based on the supplied HMAC (dynamic truncation)
    function Get-OTPFromHMAC($hmac)
    {
        $offset = Convert-HexToDecimal($hmac.Substring($hmac.Length - 1))
        $p1 = Convert-HexToDecimal($hmac.Substring($offset*2,8))
        $p2 = Convert-HexToDecimal("7fffffff")
        [string]$otp = $p1 -band $p2
        $otp = $otp.Substring($otp.Length - 6, 6)
        return $otp
    }

    # MAIN PROGRAM
    $params = @{
        "SharedSecret" = "";
        "Key" = "";
        "Time" = "";
        "HMAC" = "";
        "OTP" = "";
        "URL" = "";
    }
    $reportObject = New-Object PSObject -Property $params
    # google can generate a QR code of the secret for their authenticator app at this url...
    $url = ('https://chart.googleapis.com/chart?chs=200x200&cht=qr&chl=200x200&chld=M|0&cht=qr&chl=otpauth://totp/' + $Email + '%3Fsecret%3D' + $SharedSecret)
    $key = Convert-Base32ToHex $sharedSecret
    $time = Get-EpochHex
    $hmac = Get-HMAC $key $time
    $otp = Get-OTPFromHMAC $hmac
    $reportObject.SharedSecret = $sharedSecret
    $reportObject.Key = $key
    $reportObject.Time = $time
    $reportObject.HMAC = $hmac
    $reportObject.OTP = $otp
    $reportObject.URL = $url
    return $reportObject
}
[string[]]$funcs = "Get-ApigeeOTP"
Export-ModuleMember -Function $funcs
# This file shouldn't be run on its own. It should be loaded using the Apigee Module.
<#
.SYNOPSIS
Makes a call to the Apigee OAuth login endpoint and gets access tokens to use.
This should be used internally by the Apigee module. But, it shouldn't be needed by
the developer.
.EXAMPLE
$result = Get-ApigeeAccessTokens
#>
Function Get-ApigeeAccessTokens {
    [CmdletBinding()]
    [OutputType([PSCustomObject])]
    Param ()
    $otp = Get-ApigeeOTP `
        -SharedSecret $global:Apigee.OAuthLogin.OTPSharedSecret `
        -Email $global:Apigee.OAuthLogin.Username
    $body = @{
        username = $global:Apigee.OAuthLogin.Username
        password = $global:Apigee.OAuthLogin.Password
        mfa_token = $otp.OTP
        grant_type = "password"
    }
    $results = Invoke-WebRequest `
        -Uri $global:Apigee.OAuthLogin.Url `
        -Method $global:Apigee.OAuthLogin.Method `
        -Headers $global:Apigee.OAuthLogin.Headers `
        -ContentType $global:Apigee.OAuthLogin.ContentType `
        -Body $body
    if($results.StatusCode -ne 200) {
        $resultsAsString = $results | Out-String
        throw "Authentication with Apigee's OAuth Failed. `r`n`r`nFull Response Object:`r`n$resultsAsString"
    }
    $resultsObj = ConvertFrom-Json -InputObject $results.Content
    $resultsObj = Add-PsType -PSObject $resultsObj -PsType $global:Apigee.OAuthLogin.ResultObjectName
    Set-ApigeeAuthHeader -Authorization $resultsObj.access_token
    return $resultsObj
}
<#
.SYNOPSIS
Sets $global:Apigee.AuthHeader @{ Authorization = "value passed in" }
This is used to authenticate all calls to the Apigee REST Management endpoints.
.EXAMPLE
Set-ApigeeAuthHeader -Authorization "Bearer ..."
#>
Function Set-ApigeeAuthHeader {
    [CmdletBinding()]
    [OutputType([PSCustomObject])]
    Param (
        [Parameter(Mandatory = $true)]
        $Authorization
    )
    $bearerAuth = "Bearer $Authorization"
    $global:Apigee.AuthHeader = @{ Authorization = $bearerAuth }
}
[string[]]$funcs = "Get-ApigeeAccessTokens", "Set-ApigeeAuthHeader"
Export-ModuleMember -Function $funcs
if($global:Apigee -eq $null) {
    $global:Apigee = @{}
    $global:Apigee.ApiUrl = "https://api.enterprise.apigee.com/v1/organizations/"
    # Use OAuth for access credentials. All public info here:
    # https://docs.apigee.com/api-platform/system-administration/using-oauth2-security-apigee-edge-management-api
    $global:Apigee.OAuthLogin = @{}
    # get username / password for management API
    $global:Apigee.OAuthLogin.Username = "store"
    $global:Apigee.OAuthLogin.Password = "these"
    $global:Apigee.OAuthLogin.OTPSharedSecret = "safely"
    $global:Apigee.OAuthLogin.Method = "POST"
    $global:Apigee.OAuthLogin.Url = "https://login.apigee.com/oauth/token"
    $global:Apigee.OAuthLogin.ContentType = "application/x-www-form-urlencoded"
    $global:Apigee.OAuthLogin.Headers = @{
        Accept = "application/json;charset=utf-8"
        Authorization = "Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0"
    }
    $global:Apigee.OAuthLogin.ResultObjectName = "ApigeeAccessToken"
}
# grab functions from files (from C:\Chocolatey\chocolateyinstall\helpers\chocolateyInstaller.psm1)
Resolve-Path $root\Apigee.*.ps1 |
    ? { -not ($_.ProviderPath.Contains(".Tests.")) } |
    % { . $_.ProviderPath; }
# get authorization token
$global:Apigee.OAuthToken = Get-ApigeeAccessTokens

System.Configuration.ConfigurationManager in Core

on Monday, August 13, 2018

The .NET Core (corefx) issue, System.Configuration Namespace in .Net Core, ends with the question:

@weshaggard Can you clarify the expectations here for System.Configuration usage?

I was recently converting a .NET Full Framework library over to a .NET Standard library and ran into the exact problem in that issue, and I also got stuck trying to figure out “When and How are you supposed to use System.Configuration.ConfigurationManager?”

I ended up with the answer:

If at all possible, you shouldn’t use it. It’s a facade/shim that only works with the .NET Full Framework. Its exact purpose is to allow .NET Standard libraries to compile; but it doesn’t work unless the runtime is the .NET Full Framework. In order to properly write code using it in a .NET Standard library you will have to use compiler directives to ensure that it doesn’t get executed in a .NET Core runtime. Its scope, purpose, and usage are very limited.

In a .NET Standard library if you want to use configuration information you need to plan for two different configuration systems.

  • .NET Full Framework Configuration

    Uses ConfigurationManager from the System.Configuration dll installed with the framework. This uses the familiar ConfigurationManager.AppSettings[string] and ConfigurationManager.ConnectionStrings[string]. This is a unified model in .NET Full Framework and works across all application types: Web, Console, WPF, etc.

  • .NET Core Configuration

    Uses ConfigurationBuilder from Microsoft.Extensions.Configuration. And, really, it expects ConfigurationBuilder to be used in an ASP.NET Core website. And this is the real big issue. The .NET Core team focused almost solely on ASP.NET Core and other target platforms really got pushed to the side. Because of this, it’s expecting configuration to be done through the ASP.NET Configuration system at Startup.

And, for now, I can only see two reasonable ways to implement this:

  • A single .NET Standard Library that uses compiler directives to determine when to use ConfigurationManager vs a ConfigurationBuilder tie-in.

    This would use the System.Configuration.ConfigurationManager nuget package.

    Pros:
    - Single library with a single nuget package
    - Single namespace

    Cons:
    - You would need a single “Unified Configuration Manager” class which would have #if directives throughout it to determine which configuration system to use.
    - If you did need to reference either the .NET Full Framework or .NET Core Framework the code base would become much more complicated.
    - Unit tests would also need compiler directives to handle differences of running under different Frameworks.
  • A common shared project used in two libraries each targeting the different frameworks.

    This would not use the System.Configuration.ConfigurationManager nuget package.

    This is how the AspNet API Versioning project has handled the situation.

    Pros:
    - The two top-level libraries can target the exact framework they are intended to be used with. They would have access to the full API set of each framework and would not need to use any shims/facades.
    - The usage of #if directives would be uniform across the files, as they would only be needed to select the correct namespace and using statements.
    - The code would read better, as all framework-specific code would be abstracted out of the shared code using extension methods.

    Cons:
    - You would create multiple libraries and multiple nuget packages. This can create headaches and confusion for downstream developers.
    - Unit tests would (most likely) also use multiple libraries, each targeting the correct framework.
    - Requires slightly more overhead to ensure libraries are versioned together and assembly directives are setup in a shared way.
    - The build system would need to handle creating multiple nuget packages.

Apigee TimeTaken AssignVariable vs JS Policy

on Monday, August 6, 2018

Apigee’s API Gateway is built on top of a Java code base. And, all of the policies built into the system are pre-compiled Java policies. So, the built in policies have pretty good performance since they are only reading in some cached configuration information and executing natively in the runtime.

Unfortunately, these policies come with two big drawbacks:

  • In order to do some common tasks (like if x then do y and z) you usually have to use multiple predefined policies chained together. And, those predefined policies are all configured in verbose and cumbersome XML definitions.
  • Also, there’s no way to create predefined policies that cover every possible scenario. So, developers will need a way to do things that the original designers never imagined.

For those reasons, there are Javascript Policies which can do anything that javascript can do.

The big drawback with Javascript policies:

  • The system has to instantiate a Javascript engine, populate its environment information, run the javascript file, and return the results back to the runtime. This takes time.

So, I was curious how much more time it takes to use a Javascript Policy vs an Assign Message Policy for a very simple task.

It turns out the difference in timing is relatively significant but overall unimportant.

The test used in the comparison checks if a query string parameter exists, and if it does then write it to a header parameter. If the header parameter existed in the first place, then don’t do any of this.

Here are the pseudo-statistical results:

  • Average Time Taken (non-scientific measurements, best described as “it’s about this long”):
    • Javascript Policy: ~330,000 nanoseconds (0.33 milliseconds)
    • Assign Message Policy: ~50,000 nanoseconds (0.05 milliseconds)
  • What you can take away
    • A Javascript Policy takes roughly 6.6 times as long; Javascript has about 280,000 nanoseconds of overhead for creation, processing and resolution.
    • Both Policies take less than 0.5 ms. While the slower performance is relatively significant, in the larger scheme of things they are both fast.

Javascript Policy

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Javascript async="false" continueOnError="false" enabled="true" timeLimit="200" name="merge-query-string-api-key">
    <DisplayName>Merge Query String API Key</DisplayName>
    <Properties/>
    <ResourceURL>jsc://Merge-Query-String-API-Key.js</ResourceURL>
</Javascript>
/*
 * Attempts to "merge" (aka. use) the query string parameter 'api-key'
 * into the header variable api-key if the header variable is missing.
 * If the header variable exists, then the header variable is always used
 * and the query string parameter is ignored.
 */
var hApiKey = context.getVariable("request.header.api-key");
//print("header: " + hApiKey);
if(hApiKey === null) {
    var qsApiKey = context.getVariable("request.queryparam.api-key");
    //print("query string: " + qsApiKey);
    if(qsApiKey !== null) {
        //print("setting request.header.api-key to '" + qsApiKey + "'");
        context.setVariable("request.header.api-key", qsApiKey);
    }
}

Javascript Timing Results

[Image: Javascript Policy timing results]

Assign Message Policy

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<AssignMessage async="false" continueOnError="false" enabled="true" name="Assign-API-Key-To-Header">
    <DisplayName>Assign API Key To Header</DisplayName>
    <AssignTo createNew="false" transport="http" type="request"/>
    <Properties/>
    <Add>
        <Headers>
            <Header name="api-key">{request.queryparam.api-key}</Header>
        </Headers>
    </Add>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</AssignMessage>

Assign Message Timing Results

[Image: Assign Message Policy timing results]


Creative Commons License
This site uses Alex Gorbatchev's SyntaxHighlighter, and hosted by herdingcode.com's Jon Galloway.