Pi-hole

on Monday, June 24, 2019

A coworker recently set up advertisement-blocking software called Pi-hole on his home network. I didn’t think much of it at first, but his excitement about it was compelling, and he gave a quick demonstration of it.

  • It blocks malware and advertisements at the DNS level, which means you don’t need Adblock Plus. It can also block advertisements outside of browsers, like ads in cell phone apps.
  • It runs on a tiny Raspberry Pi, so you can nestle it next to your other networking equipment.
  • It comes with an awesome admin interface which you can use to configure it, customize it, and turn it off when you run into any issues (I haven’t yet, fingers crossed).


Quick How-To for Windows Users

Raspberry Pi’s baseline operating system, Raspbian, is a Debian-based Linux distribution, so it can be a little nerve-racking for a Windows user to step into that world. My experience with Raspbian was great. I found it easy to use, and it lets you get trivial things done without having to spend time researching how to do them.

Purchasing the Hardware ~ $100

The hardware you need is pretty inexpensive, but it’s easy to forget something (I did). The majority of my purchases were from PiShop.us, but there are a number of other retailers that specialize in Raspberry Pi products.

  • Raspberry Pi Model 3 B+ (pishop.us) – $35

    The way I understand it, this is currently the most popular version of the board. There are smaller ones, but the Model 3 B+ is still smaller than the palm of your hand. It has wired and wireless network connectivity, 4 USB ports, HDMI output, a microSD slot, and a power connector. It has other connectors, but they won’t be needed for Pi-hole.
  • microSD card with Raspbian on it (pishop.us) – $10

    I forgot to buy this the first time. The cards are more expensive when you buy them from a store like Best Buy or Walmart, and you have to install the Raspbian operating system yourself if you do. It’s not difficult to flash a microSD card (Etcher) with the operating system, but it is easier to just buy a preconfigured card.
  • Power supply (pishop.us) – $9

    I thought this would come with the board. It didn’t, but it wasn’t very expensive to buy.
  • Case (pishop.us) – $8

    This is definitely not required, but it makes it easier to place the board near your networking equipment. There are a number of cases and options on that site; search for one that you like.
  • Networking Cable

    You probably already have a number of these lying around.
  • USB Keyboard (pishop.us) – $17 / USB Mouse (pishop.us) – $8

    It’s surprisingly difficult to find a wired keyboard and mouse at a low price sometimes.
  • HDMI Cable & Monitor – $10

    Make sure you have a monitor that supports HDMI before you start. I thought my monitors supported HDMI, but I was wrong. Luckily I had a television sitting around, so that saved me from having to get too creative with a workaround.

    PiShop has a few adapters for VGA which can help. But, plan ahead on this one.

Installation and Configuration

  1. There is an awesome Setting up your Raspberry Pi tutorial on the Raspberry Pi website, which I won’t repeat.
  2. I will point out the list of helpful guides for configuring your Raspberry Pi. But I also want to point out that under Preferences > Raspberry Pi Configuration > Interfaces there is a whole slew of easy configuration you can do through the GUI.
  3. Pi-hole’s installation was incredibly easy, with just one command:

    curl -sSL https://install.pi-hole.net | bash

  4. Log into your router and assign your Raspberry Pi a static IP address in the DHCP tables.
             

  5. While in your router, also set the primary DNS server for your local network to point to your Raspberry Pi/Pi-hole.
         

    This will set up your router to forward DNS requests to the Raspberry Pi. You don’t have to do any other configuration on your network.

    (For me, my laptop and cell phone immediately started using the new DNS server. However, my main computer didn’t. I think I was logged into a VPN at the time, which might have taken over responsibility for DNS resolution.)
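
    If you want to confirm a device is actually using the Pi-hole, a quick DNS check from a Windows machine works well. This is just a minimal sketch: 192.168.1.50 is a placeholder for whatever static IP you assigned to the Raspberry Pi, and doubleclick.net is simply a well-known ad domain.

    # Ask the Pi-hole directly - a domain on the blocklist should resolve to 0.0.0.0 (or the Pi's own address, depending on the blocking mode)
    Resolve-DnsName -Name doubleclick.net -Server 192.168.1.50

    # Show which DNS servers this machine is currently using
    Get-DnsClientServerAddress -AddressFamily IPv4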

That’s about it.

Parse a SQL Script Before Execution

on Monday, June 17, 2019

Recently, we added support for running database scripts as part of our deployment pipeline. One of the steps added to the pipeline is a check that the SQL script to be executed parses correctly. This evaluation is useful well before a deployment, not just to prevent problems, but to give fast feedback to our teams so they can correct the issue.

The solution turned out to be much more generic than we had imagined. Because we didn’t yet know how generic it would be, we built some custom code for executing SQL commands against a database. However, if we were to go back and do it a second time, we would probably use an open source tool to execute the scripts:

  • dbatools

    This is an amazing PowerShell module of database-related commands and tools. It is designed to be used in DevOps pipelines and to make DBAs’ lives easier.

    We looked at this module when first implementing the parsing functionality, but we wanted to get the output (including normal vs error output) in a particular way and we would need more time to get it to fit our exact needs.

    But! It’s an absolutely amazing collection of tools and we will definitely revisit it.
  • Invoke-SqlCmd2 (script)

    This script/module is a healthier and more secure version of Invoke-SqlCmd. It adds functionality that makes your life easier, without adding so much that it becomes confusing to use.

    We started out using this, but then had some extra needs for error handling and output parsing that forced us to explore alternatives.
  • sqlcmd.exe

    This is what we ended up using. It’s a command-line utility, so it doesn’t have native PowerShell output. But the output feels a lot like the output from SQL Server Management Studio, and that familiarity made it the easiest to wrap our heads around for error handling and output parsing.

The Parsing Script

For Microsoft SQL Server, parsing is made possible by wrapping your SQL query in the set parseonly directive.

set parseonly on
go
/* your script goes here */
go
set parseonly off
go

From there, you just need to send the script over to SQL Server and execute it. If SQL Server sends back no output, then the script parsed successfully. It’s nice and straightforward.
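
Stripped of the error handling in the full scripts below, the core of the check looks something like this. It’s a minimal sketch: the server name, database name, and script path are placeholders, and it assumes Windows authentication.

# Wrap the deployment script in the parse-only directives
$content = Get-Content -Path "C:\scripts\update.sql" -Raw
$wrapped = "set parseonly on`r`ngo`r`n" + $content + "`r`ngo`r`nset parseonly off`r`ngo"

# Write it to a temp file and hand it to sqlcmd (-E = Windows auth, -I = QUOTED_IDENTIFIER ON; see Lessons Learned)
$tempFile = New-TemporaryFile
$wrapped | Set-Content -Path $tempFile
$output = sqlcmd -E -I -S "sqlserver01" -d "MyDatabase" -i $tempFile
Remove-Item -Path $tempFile

# No output means the script parsed successfully
if ([string]::IsNullOrWhiteSpace($output)) { "Parse succeeded" } else { $output }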

Lessons Learned

QUOTED_IDENTIFIERS

SQL Server Management Studio has a number of default settings which you are generally unaware of. One of those is that the QUOTED_IDENTIFIER directive is set to ON. When using the sqlcmd.exe utility, you need to pass the -I parameter to turn on the same behavior.

The Invoke-DbUpdateScript.ps1 below uses the parameter.

Without it, the error message looks like this:

Msg 1934, Level 16, State 1, Server xxxxxx, Line 14

UPDATE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

Linked SQL Servers & Transactions

We required that all queries run through the deployment pipeline be wrapped in a transaction. This guarantees that if any error occurs (which ends the connection to the server), the changes are rolled back automatically. However, when a script involves a linked SQL Server, this could result in the error message:

Could not enlist in a distributed transaction.

To work around the problem, change the transaction isolation level from SERIALIZABLE to READ COMMITTED.
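
If the transaction wrapper is something your pipeline prepends to the script, the change is a one-line difference in that wrapper. A hypothetical sketch, following the same $pretext/$posttext pattern used in Test-DbUpdateScriptParses below, where $content holds the script text:

$pretext = @"
set transaction isolation level read committed
go
begin transaction
go
"@
$posttext = @"
go
commit transaction
go
"@
$formattedSql = $pretext + $content + $posttext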

Scripts

function Test-DbUpdateScriptParses {
param(
[Parameter(Mandatory = $true)]
[string] $Database,
[Parameter()]
[string] $Port = "1433",
[Parameter(Mandatory = $true)]
[string] $Script,
[Parameter(Mandatory = $true)]
[ValidateSet("Local","Dev","Test","Schedule","Emergency","Prod")]
[string] $Environment,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[System.Management.Automation.PSCredential] $Credential,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[System.Management.Automation.Runspaces.AuthenticationMechanism] $Authentication = `
[System.Management.Automation.Runspaces.AuthenticationMechanism]::Default,
[switch] $Throw
)
$path = Get-DbUpdateScriptPath -Database $Database -Script $Script -Environment Staging
$content = Get-Content -Path $path -Raw
$pretext = @"
set parseonly on
go
"@
$posttext = @"
go
set parseonly off
go
"@
$formattedSql = $pretext + $content + $posttext
$output = Invoke-DbUpdateScript -Database $Database -Port $Port -Environment $Environment `
-Sql $formattedSql `
-Credential $Credential -Authentication $Authentication `
-SurpressWriteHost
$success = $null -eq $output
if(-not $success -and $Throw) {
throw $output
}
return $success
}
<#
.SYNOPSIS
Generates the path to the file for the given environment (default environment is Staging)
.PARAMETER Environment
The environment SQL Scripts folder. The default value is 'Staging'.
This is not the same as the deployment target environment.
#>
function Get-DbUpdateScriptPath {
param(
[Parameter(Mandatory = $true)]
[string] $Database,
[Parameter(Mandatory = $true)]
[string] $Script,
[Parameter(Mandatory = $false)]
[ValidateSet("Staging","Dev","Test","Prod","Emergency","Repository","Refresh")]
[string] $Environment = "Staging"
)
<# define your logic here #>
return $path
}
function Invoke-DbUpdateScript {
[CmdletBinding()]
param(
[Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true)]
[string] $Database,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[string] $Port = "1433",
[Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true, ParameterSetName = "Script")]
[string] $Script,
[Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true, ParameterSetName = "Sql")]
[string] $Sql,
[Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true)]
[ValidateSet("Dev","Test","Prod")]
[string] $Environment,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[string] $ComputerName = $env:COMPUTERNAME,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[System.Management.Automation.PSCredential] $Credential,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[System.Management.Automation.Runspaces.AuthenticationMechanism] $Authentication,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[switch] $Throw,
[switch] $SurpressWriteHost
)
if($PSCmdlet.ParameterSetName -eq "Script") {
$path = Get-DbUpdateScriptPath -Database $Database -Script $Script -Environment Staging
if((Test-Path -Path $path) -eq $false) {
throw (
"Invoke-DbUpdateScript: Script '{0}' not found at '{1}'. Please ensure it exists and try again." -f `
$Script, $path
)
}
$Sql = Get-Content -Path $path -Raw
}
$dbhost = Get-DbConnectionString -Database $Database -Port $Port -Environment $Environment
$scriptBlock = {
param([string] $dbhost, [string] $database, [string] $sql, [bool] $SurpressWriteHost)
$tempfile = New-TemporaryFile
$sql | Set-Content -Path $tempfile -Force
$errorfile = New-TemporaryFile
if(-not $SurpressWriteHost) {
Write-Host @"
----------------
dbhost: $dbhost
database: $database
tempfile: $tempfile
errorfile: $errorfile
sql:
$sql
----------------
"@
}
$cmd = 'sqlcmd -E -I -S "{0}" -d "{1}" -i "{2}"' -f $dbhost, $database, $tempfile
#$cmd = 'sqlcmd -E -I -S "{0}" -d "{1}" -Q "{2}"' -f $dbhost, $database, $sql
$result = invoke-expression "$cmd 2> $errorfile"
$errorc = Get-Content -Path $errorfile -Raw
if($errorc.length -gt 0) {
if($result.length -gt 0) { $result += "`r`n`r`n" }
$result += $errorc
}
Remove-Item -Path $tempfile | Out-Null
Remove-Item -Path $errorfile | Out-Null
return $result
}
if($Credential -eq $null) {
if(-not $SurpressWriteHost) {
Write-Host ("Executing script on {0}. (script path = '{1}')" -f $dbhost, $path)
}
$output = Invoke-Command `
-ScriptBlock $scriptBlock `
-ArgumentList @($dbhost, $Database, $Sql, $SurpressWriteHost)
} else {
if(-not $SurpressWriteHost) {
Write-Host (
"Executing script on {0} with credentials for '{1}'. (script path = '{2}')" -f `
$dbhost, $Credential.UserName, $path
)
}
$output = Invoke-Command `
-ComputerName $ComputerName -ScriptBlock $scriptBlock `
-Credential $Credential -Authentication $Authentication `
-ArgumentList @($dbhost, $Database, $Sql, $SurpressWriteHost)
}
## no error or output to report
if([string]::IsNullOrWhiteSpace($output)) {
return $null
}
## Error processing
$errors = Convert-DbUpdateOutputToErrors -Output $output
if(@($errors).Count -gt 0) {
# This is used in debugging locally
if(-not $SurpressWriteHost) {
Write-Host @"
------------- ERRORS --------------
"@
if($output -match "sqlcmd \:") {
Write-Host @"
$output
-----------------------------------
"@
}
}
if($Throw) {
$message = [System.Text.StringBuilder]::new(100 * $errors.Count)
foreach($er in $errors) {
if($message.Length -gt 0) { $message.AppendLine() }
$message.Append($er.OriginalErrorMessage)
}
throw $message.ToString()
}
}
return $output
}
function Convert-DbUpdateOutputToErrors {
param(
[string[]] $Output
)
$errors = @()
$sqlcmdErrorRegex = "sqlcmd : "
if($Output -match $sqlcmdErrorRegex) {
$fullOutput = $Output -join "`r`n"
$props = @{
Msg = $fullOutput
}
$sqlError = New-Object PSCustomObject -Property $props
$errors += @($sqlError)
}
$msgErrorRegex = "Msg ([0-9]+), Level ([0-9]+), State ([0-9]+), Server ([^,]+),([^,]+,)? Line ([0-9]+)"
for($i = 0; $i -lt $Output.Count; $i++) {
$line = $Output[$i]
if($line -match $msgErrorRegex) {
$original = $line
$i++
$line = $Output[$i]
$props = @{
Msg = $matches[1]
Level = $matches[2]
State = $matches[3]
Server = $matches[4]
Line = $matches[6]
ErrorMessage = $line
OriginalErrorMessage = ($original + "`r`n" + $line)
}
$sqlError = New-Object PSCustomObject -Property $props
$errors += @($sqlError)
}
}
return $errors
}
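
For reference, a pipeline step calling the parse check might look something like this (the database and script names are placeholders):

# Returns $true when SQL Server produced no output for the parse-only run;
# -Throw raises the collected parse errors instead of returning $false
$parses = Test-DbUpdateScriptParses -Database "MyDatabase" -Script "2019-06-17_add-index.sql" -Environment Dev -Throw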

Replacing Invoke-WebRequest with HttpClient

on Monday, June 10, 2019

I’ve written before about how frustrating the error handling for Invoke-WebRequest and Invoke-RestMethod can be. But there is another way to make web requests which will never update the global $Error object: write your own wrapper around HttpClient.

This method is much more complicated than using Invoke-WebRequest, Invoke-RestMethod, or even New-WebServiceProxy. But it gives you complete control over the request and the response. And, as a nice side effect, it’s cross-platform (it runs on Linux and Windows).
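
To give a sense of the basic shape before the full Apigee example, here is a minimal sketch of a single GET request with HttpClient (the URL is a placeholder and error handling is left out):

# Windows PowerShell 5.1 may need the assembly loaded explicitly; PowerShell Core does not
Add-Type -AssemblyName System.Net.Http

$client = [System.Net.Http.HttpClient]::new()
$client.Timeout = [System.TimeSpan]::FromSeconds(30)

$response = $client.GetAsync("https://example.com/api/resource").GetAwaiter().GetResult()
$content = $response.Content.ReadAsStringAsync().Result

# Nothing above touches $Error - you inspect the status code and body yourself
[int]$response.StatusCode
$content

$client.Dispose()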

Below is an example use of HttpClient to call Apigee’s OAuth Login endpoint.

(P.S. The idea for using HttpClient came from David Carroll’s PoShDynDnsApi PowerShell module, which includes two implementations that use HttpClient (.NET Core and .NET Full Framework). The reason for two implementations is that DynDns requires one of its calls to perform a non-standard GET request with a body. Microsoft’s HttpClient implementation is pretty strict about following the rules and does not allow a body to be sent in GET requests, so he had to use reflection to inject a body into his request. Each version of .NET had a different internal class structure that had to be set differently. It’s a pretty amazing workaround.)

<#
.SYNOPSIS
Makes a call to the Apigee OAuth login endpoint and gets access tokens to use.
This should be used internally by the ApigeePs module. But, it shouldn't be needed by
the developer.
.EXAMPLE
$result = Get-ApigeeAccessTokens
#>
Function Get-ApigeeAccessTokens {
[CmdletBinding()]
[OutputType([PSCustomObject])]
Param ()
$attempt = 1
$retry = $false
$content = $null
do {
$retry = $false
# You'll need to set the Shared Secret and Username safely with your account information
$otp = Get-ApigeeOTP `
-SharedSecret $global:ApigeePs.OAuthLogin.OTPSharedSecret `
-Email $global:ApigeePs.OAuthLogin.Username
# You'll need to set the Username and Password safely with your account information
$body = "username={0}&password={1}&mfa_token={2}&grant_type={3}" -f `
$global:ApigeePs.OAuthLogin.Username, `
$global:ApigeePs.OAuthLogin.Password, `
$otp.OTP, `
"password"
$httpClient = [System.Net.Http.HttpClient]::new()
$httpClient.Timeout = [System.TimeSpan]::new(0, 0, 90)
$httpClient.DefaultRequestHeaders.TransferEncodingChunked = $false
$accept = [System.Net.Http.Headers.MediaTypeWithQualityHeaderValue]::new("application/json")
$accept.CharSet = "utf-8"
$httpClient.DefaultRequestHeaders.Accept.Add($accept)
# that's a hard coded value. it's literally in their documentation:
# https://docs.apigee.com/api-platform/system-administration/management-api-tokens
# (it makes complete sense when you think about it, but it's really concerning when you first see it)
$authorization = [System.Net.Http.Headers.AuthenticationHeaderValue]::new("Basic", "ZWRnZWNsaTplZGdlY2xpc2VjcmV0")
$httpClient.DefaultRequestHeaders.Authorization = $authorization
$httpClient.BaseAddress = [Uri] ($global:ApigeePs.OAuthLogin.Url)
$httpMethod = [System.Net.Http.HttpMethod] "POST"
$httpRequest = [System.Net.Http.HttpRequestMessage]::new($httpMethod, "token")
if($body -ne $null) {
$httpRequest.Content = [System.Net.Http.StringContent]::new($body, [System.Text.Encoding]::UTF8, "application/x-www-form-urlencoded")
}
$httpResponseMessage = $httpClient.SendAsync($httpRequest)
if ($httpResponseMessage.IsFaulted) {
$PsCmdlet.ThrowTerminatingError($httpResponseMessage.Exception)
}
$result = $httpResponseMessage.Result
$content = Get-ApigeeRequestContent -Response $result
# check for an invalid MFA code response so the login can be retried
if($result.StatusCode -eq 401) {
if($content -eq "{`"error`":`"unauthorized`",`"error_description`":`"Error: Invalid MFA code.`"}") {
# handle authentication retry (see note above about why this isn't implemented)
$retry = $true
}
}
if($result.StatusCode -ge 400 -and $retry -eq $false) {
$uri = $global:ApigeePs.OAuthLogin.Url
Write-Verbose "Error calling $uri (POST token)"
if($body -ne $null) {
Write-Verbose "`tBody: $body"
}
Write-Verbose "Response: Status = $([int]$result.StatusCode) $($result.ReasonPhrase)"
Write-Verbose "Resposne: Headers"
foreach($header in $result.Headers) {
Write-Verbose ("`t{0}`t`t{1}" -f $header.Key, ($header.Value -join ", "))
}
Write-Verbose "Response: Content"
$contentOutput = [string]::Empty
if([string]::IsNullOrWhiteSpace($content) -eq $false) { $contentOutput = $content }
Write-Verbose $contentOutput
return $result
}
$attempt++
} while( $retry )
# parse the result and set headers
$json = ConvertFrom-Json -InputObject $content
$authorization = [System.Net.Http.Headers.AuthenticationHeaderValue]::new("bearer", $json.access_token)
$global:ApigeePs.AuthHeader = $authorization
return $json
}
<#
.SYNOPSIS
Implementation of the Time-based One-time Password Algorithm used by Google Authenticator.
.DESCRIPTION
As described in http://tools.ietf.org/id/draft-mraihi-totp-timebased-06.html, the script generates a one-time password based on a shared secret key and time value.
This script generates output identical to that of the Google Authenticator application, but is NOT INTENDED FOR PRODUCTION USE as no effort has been made to code securely or protect the key. For demonstration-use only.
Script code is essentially a translation of a javascript implementation found at http://jsfiddle.net/russau/uRCTk/
Output is a PSObject that includes the generated OTP, the values of intermediate calculations, and a URL leading to a QR code that can be used to generate a corresponding OTP in Google Authenticator applications.
The generated QR code contains a URL that takes the format "otpauth://totp/<email_address_here>?secret=<secret_here>", for example: otpauth://totp/tester@test.com?secret=JBSWY3DPEHPK3PXP
The generated OTP is (obviously) time-based, so this script's output will only match Google Authenticator output if the clocks on both systems are (nearly) in sync.
The acceptable alphabet of a base32 string is ABCDEFGHIJKLMNOPQRSTUVWXYZ234567.
Virtually no parameter checking is done in this script. Caveat emptor.
.PARAMETER sharedSecretKey
A random, base32 string shared by both the challenge and response side of the authenticating pair. This script mandates a string length of 16.
.EXAMPLE
Get-ApigeeOTP -SharedSecret "JBSWY3DPEHPK3PXP" -Email "tester@test.com" | Select SharedSecret, Key, Time, HMAC, URL, OTP
.NOTES
FileName: Get-OTP.ps1
Author: Jim Nelson nelsondev1
#>
Function Get-ApigeeOTP {
[CmdletBinding()]
param
(
[Parameter(Mandatory=$true,ValueFromPipeline=$true)]
[ValidateLength(16,16)]
[string] $SharedSecret,
[Parameter(Mandatory=$true,ValueFromPipeline=$true)]
[string] $Email
)
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Converts the supplied Int64 value to hexadecimal.
#------------------------------------------------------------------------------
function Convert-DecimalToHex($in)
{
return ([String]("{0:x}" -f [Int64]$in)).ToUpper()
}
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Converts the supplied hexadecimal value to Int64.
#------------------------------------------------------------------------------
function Convert-HexToDecimal($in)
{
return [Convert]::ToInt64($in,16)
}
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Converts the supplied hexadecimal string to a byte array.
#------------------------------------------------------------------------------
function Convert-HexStringToByteArray($String)
{
return $String -split '([A-F0-9]{2})' | foreach-object { if ($_) {[System.Convert]::ToByte($_,16)}}
}
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Converts the supplied base32 string to a hexadecimal string
#------------------------------------------------------------------------------
function Convert-Base32ToHex([String]$base32)
{
$base32 = $base32.ToUpper()
$base32chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
$bits = ""
$hex = ""
# convert char-by-char of input into 5-bit chunks of binary
foreach ($char in $base32.ToCharArray())
{
$tmp = $base32chars.IndexOf($char)
$bits = $bits + (([Convert]::ToString($tmp,2))).PadLeft(5,"0")
}
# leftpad bits with 0 until length is a multiple of 4
while ($bits.Length % 4 -ne 0)
{
$bits = "0" + $bits
}
# convert binary chunks of 4 into hex
for (($tmp = $bits.Length -4); $tmp -ge 0; $tmp = $tmp - 4)
{
$chunk = $bits.Substring($tmp, 4);
$dec = [Convert]::ToInt32($chunk,2)
$h = Convert-DecimalToHex $dec
$hex = $h + $hex
}
return $hex
}
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Get the current Unix epoch (div 30) in hex, left-padded with 0 to 16 chars
#------------------------------------------------------------------------------
function Get-EpochHex()
{
# this line from http://shafiqissani.wordpress.com/2010/09/30/how-to-get-the-current-epoch-time-unix-timestamp/
$unixEpoch = ([DateTime]::Now.ToUniversalTime().Ticks - 621355968000000000) / 10000000
$h = Convert-DecimalToHex ([Math]::Floor($unixEpoch / 30))
return $h.PadLeft(16,"0")
}
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Get the HMAC signature for the supplied key and time values.
#------------------------------------------------------------------------------
function Get-HMAC($key, $time)
{
$hashAlgorithm = New-Object System.Security.Cryptography.HMACSHA1
$hashAlgorithm.key = Convert-HexStringToByteArray $key
$signature = $hashAlgorithm.ComputeHash((Convert-HexStringToByteArray $time))
$result = [string]::join("", ($signature | % {([int]$_).toString('x2')}))
$result = $result.ToUpper()
return $result
}
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# Get the OTP based on the supplied HMAC
#------------------------------------------------------------------------------
function Get-OTPFromHMAC($hmac)
{
$offset = Convert-HexToDecimal($hmac.Substring($hmac.Length -1))
$p1 = Convert-HexToDecimal($hmac.Substring($offset*2,8))
$p2 = Convert-HexToDecimal("7fffffff")
[string]$otp = $p1 -band $p2
$otp = $otp.Substring($otp.Length - 6, 6)
return $otp
}
# -------------------------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------------------------
# MAIN PROGRAM
# -------------------------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------------------------
$params = @{
"SharedSecret" = "";
"Key" = "";
"Time" = "";
"HMAC" = "";
"OTP" = "";
"URL" = "";
}
$reportObject = New-Object PSObject -Property $params
# google can generate a QR code of the secret for their authenticator app at this url...
$url = ('https://chart.googleapis.com/chart?chs=200x200&cht=qr&chl=200x200&chld=M|0&cht=qr&chl=otpauth://totp/' + $Email + '%3Fsecret%3D' + $SharedSecret)
$key = Convert-Base32ToHex $sharedSecret
$time = Get-EpochHex
$hmac = Get-HMAC $key $time
$otp = Get-OTPFromHMAC $hmac
$reportObject.SharedSecret = $sharedSecret
$reportObject.Key = $key
$reportObject.Time = $time
$reportObject.HMAC = $hmac
$reportObject.OTP = $otp
$reportObject.URL = $url
return $reportObject
}
function Get-ApigeeRequestContent {
param(
[System.Net.Http.HttpResponseMessage] $Response
)
$content = $Response.Content.ReadAsStringAsync().Result
return $content
}

Don’t update IIS’ applicationHost.config too fast

on Monday, June 3, 2019

IIS’ applicationHost.config file is the persistent backing store for a server’s IIS configuration, and a running IIS instance monitors that file for changes on disk. Any change triggers a reload of the configuration file and an update to IIS’ configuration, including application pool updates. This is a really nice feature because it allows engineering teams to perform updates to IIS hosts using plain file operations, which makes IIS friendly to alternative configuration management solutions (hand-written scripts, Chef, Puppet, etc.).

Unfortunately, there is a risk involved with updating applicationHost.config outside of the standard appcmd.exe or PowerShell modules (WebAdministration and IISAdministration). Because the file is read from disk after each update, a series of rapid updates can cause a pseudo race condition. Even though the file system should prevent reads from happening while a write is in progress, there is a reproducible problem where IIS may read in only a partial XML configuration file (applicationHost.config) instead of the full file. It’s almost as if updating the file either prevents the read from finishing, or the read picks up the changes halfway through. This only happens sometimes, but if your IIS server is busy enough and you perform enough writes to the applicationHost.config file, you can get this error to occur:

The worker process for application pool 'xxxxxxxxxxxxxx' encountered an error 'Configuration file is not well-formed XML' trying to read configuration data from '\\?\C:\inetpub\temp\apppools\xxxxxxxxxxxxxx\xxxxxxxxxxxxx.config', line number '3'. The data field contains the error code.


An odd thing to note is that the error message has an unusual value for the .config file name: it uses the application pool name instead of the usual ‘web.config’ (i.e. ‘\\?\C:\inetpub\temp\apppools\apppoolname1\apppoolname1.config’).

This is pretty easy to reproduce with a loop in a script. For example:

# looped through 25 times
foreach($healthCheck in $healthChecks){
if($healthCheck.HealthCheck){
Get-WebFarm -ServerName $server -Url $healthCheck.Url | Set-WebFarm -HealthCheckInterval 60
}
}

There are a number of ways to prevent the read problem from happening:

  • Use appcmd.exe as Microsoft would suggest
  • Use the PowerShell modules that Microsoft provides, along with the Start/Stop-*CommitDelay functions (see the sketch after the sleep example below)
  • Or, put the script to sleep for a few seconds to let IIS process the previous update. This is the most flexible option, because you can perform updates over a remote network share, whereas the other options require an active RPC/WinRM session on the IIS server. (Example below)
# looped through 25 times
foreach($healthCheck in $healthChecks){
if($healthCheck.HealthCheck){
Get-WebFarm -ServerName $server -Url $healthCheck.Url | Set-WebFarm -HealthCheckInterval 60
# This next line would allow IIS more time to read in configuration updates and handle appPools restarts
Start-Sleep 3
}
}
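
And if you go the PowerShell module route instead, the IISAdministration commit-delay cmdlets batch your changes so IIS only sees a single write. A rough sketch (the application pool name is a placeholder, and this has to run on the IIS server itself):

Import-Module IISAdministration

Start-IISCommitDelay                                # hold configuration changes in memory
$manager = Get-IISServerManager
$pool = $manager.ApplicationPools["apppoolname1"]   # placeholder app pool name
$pool.Recycling.PeriodicRestart.Time = [TimeSpan]::FromMinutes(0)
Stop-IISCommitDelay                                 # commit everything as one write to applicationHost.config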

