Powershell Range Operator Performance

on Monday, August 10, 2020

This is a truly silly experiment, but it caught my interest. I was discussing Iron Scripter Challenges with thedavecarroll and he was using switch statements with range operators (PSGibberish.psm1):
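
As a rough illustration (not the actual PSGibberish.psm1 code), the pattern looks something like this:

```powershell
# Illustrative sketch of a switch statement using range operators inside a function.
function Get-ScoreCategory {
    param([int] $Value)

    switch ($Value) {
        { $_ -in 0..25 }   { 'Low';      break }
        { $_ -in 26..50 }  { 'Medium';   break }
        { $_ -in 51..75 }  { 'High';     break }
        { $_ -in 76..100 } { 'Critical'; break }
    }
}
```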

What struck me as odd was the idea that the range operators might be calculating each of their ranges at runtime, on each execution of the function.

So, I ran a couple of experiments and the range operators are pretty neat. Here’s what I think (with no real definitive proof to support it) is happening with them:

  • Range Operators used within Switch statements that are contained within Functions are cached.
    • It seems like when the function is JIT’d, the Range Operator value is calculated and Cached.
    • So, there’s no reason to pre-calculate the values and reference them within the function.
    • And, if you do reference variables from outside the function, the scope lookup can also be time consuming. (Although, performance isn’t why people turn to powershell in the first place.)
  • Range Operators used within a Switch statement outside of a Function are not cached (like a code block).

To determine this, I ran a series of tests against a function which focused on executing the switch statement that used range operators:
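
A sketch of that kind of function (with made-up ranges):

```powershell
# Sketch of the kind of function under test: the switch with range operators is
# the only meaningful work it does.
function Test-RangeSwitch {
    param([int] $a)

    switch ($a) {
        { $_ -in 1..1000 }    { return 'first' }
        { $_ -in 1001..2000 } { return 'second' }
        { $_ -in 2001..3000 } { return 'third' }
    }
}
```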

To determine how much time was spent making the function call and setting the $a variable, this function was used. This is noted as “Calling a Function Overhead”.

Switch Avg Execution Time = Total Avg Execution Time – Calling a Function Overhead
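
And a sketch of the measurement approach (the iteration count and function names are illustrative):

```powershell
# Sketch of the measurement approach: an empty function to estimate the cost of
# just calling a function and binding $a, plus Measure-Command in a loop to get
# average execution times in ticks.
function Test-CallOverhead {
    param([int] $a)
    # Intentionally does nothing; measures only the call + parameter binding cost.
}

$iterations = 1000

$totalTicks = (1..$iterations | ForEach-Object {
    (Measure-Command { Test-RangeSwitch -a 1500 }).Ticks
} | Measure-Object -Sum).Sum

$overheadTicks = (1..$iterations | ForEach-Object {
    (Measure-Command { Test-CallOverhead -a 1500 }).Ticks
} | Measure-Object -Sum).Sum

'Switch avg ticks: {0}' -f (($totalTicks - $overheadTicks) / $iterations)
```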

The results were:

The results indicate that both the Range Operator when run inside of a Function, and the Explicitly Scoped Cached Values have about the same running time. Which might indicate that when the function is JIT’d, it calculates the Range Operator values and caches them.

The large increase in running time between Range Operator and Cached Values not in Func might indicate that searching for variables outside of the function scope has a relatively costly penalty by comparison.

And, finally, the Range Operator that was run outside of a Function was most likely calculated on each execution. While relatively expensive, it’s surprisingly fast. .NET uses 10,000 ticks per millisecond, so that’s ~0.19 milliseconds for compilation and execution.

Full Test Script:

Reducing Noise in Error Logs / Removing PS Errors

on Monday, January 13, 2020

I have a nightly scheduled job which will send out a notification email if the job has an error occur anywhere within it (even when the error is handled). This job infrequently sends out the error email. However, my long history of reviewing these emails has brought me to the point where I assume the error is always:

  • There was a file lock on file X when the file was being saved; the function detected the error, waited a brief time period for the lock to clear, and then retried the save operation successfully.

I can't remember a time when that wasn't the case. Because of this, I am finding myself less interested in actually reading the error message and desiring to simply ignore the email. But, I know that is going to lead to a situation where something unexpected will happen and I'll ignore the warning emails. Which would be a failure of the entire warning system.

So, what I have is a very narrowly defined and well known case of when the exception occurs, and I have a desire to ignore it. If I set up the code to simply suppress this error after the save operation successfully completes, then I should be able to safely reduce the amount of noise in the error messages that are sent to me. (It should still report the error if the retries never complete successfully.)

This is a very common scenario: Teams set up a warning mechanism that is highly effective when a system is first built. At that time, there are a myriad of possible unforeseen errors that could occur. There also hasn’t been enough operational history to feel that the system is stable, so being notified on every potential problem is still a welcome learning experience. As those problems are reduced or eliminated it builds trust in the new system. However, it’s also very common that once a team completes a project and does a moderate amount of post deployment bug fixes, they are asked to move on and prioritize a new project, which leaves no devoted / allocated time for maintaining the small and inconsistent issues that arise in the previous project(s).

Unfortunately, the side effect of not giving the time needed to maintain and pay down the technical debt on the older projects is that you can become used to the “little” problems that occur on them, including ignoring the warning messages that they send out. And this creates an effect where you can start to distrust that the warning messages coming from a system are important, because you believe that you know the warning is “little” or “no big deal”.

The best way to instill confidence in the warning and error messages produced by a system is to ensure that the systems only send out important messages, separating the Signal from the Noise.

For my scenario above, the way I’m going to do this is to prevent these handled errors from sending out notification emails. This goes against best practices because I will need to alter the global error collection in Powershell, $global:Error. But, given that my end goal is to ensure that I only receive important error messages, this seems like an appropriate time to go against best practices.

Below is a snippet of code which can be used to remove error records from $global:Error that match a given set of criteria. It will only remove the most recent entries of that error, in order to try and keep the historical error log intact.

You need to be careful with this. If the error you’re looking for occurs within a loop with a retry policy on it, then you need to keep the errors from runs which continued to fail beyond the retry policy, and only remove errors from runs where the retry eventually succeeded. You can better handle the retry policy situation by using the -Last 1 parameter.
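
As a sketch of the idea (the Remove-ErrorRecord name and its parameters are hypothetical):

```powershell
# Hypothetical sketch: remove the most recent error records from $global:Error
# that match a message pattern. -Last limits how many matching records are
# removed, newest first, so the rest of the historical error log stays intact.
function Remove-ErrorRecord {
    param(
        [Parameter(Mandatory)]
        [string] $MessagePattern,
        [int] $Last = 1
    )

    # $global:Error is an ArrayList with the most recent error at index 0.
    $removed = 0
    for ($i = 0; $i -lt $global:Error.Count -and $removed -lt $Last; ) {
        if ($global:Error[$i].ToString() -like $MessagePattern) {
            $global:Error.RemoveAt($i)
            $removed++
        }
        else {
            $i++
        }
    }
}

# Example: after a successful retry, clear only the single most recent file-lock error.
Remove-ErrorRecord -MessagePattern '*being used by another process*' -Last 1
```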

Update .NET Core Runtime Installer for SDK

on Monday, December 23, 2019

A while back, I wrote a post titled Monitor and Download Latest .NET Core Installer. The post was about a script which could be used to monitor the aspnet/aspnetcore releases page, and if a new release came out the installer for the .NET Hosting Bundle would be downloaded. A piece of code I didn’t include in that post was a script that then took the downloaded .exe installer and created an installation package from it. This installation package was targeted at servers that host ASP.NET Core web applications. This post is not about that secondary script.

Instead, this is a small tweak to the monitor/download script. Instead of downloading the .NET Hosting Bundle, it downloads the SDK. And the installation package that it eventually creates (not included in this post) is targeted at build servers.

Default Configurations for PS Modules

on Monday, November 25, 2019

A common problem with Powershell modules is that they need to be configured slightly differently when being used for different needs. For example, developers may want a module to use a local instance of a service in order to do development or testing. But, on a server, the module might be expected to connect to the instance of a service specific to that environment. These are two separate groups of users, but each has the same need: a default configuration that makes sense for them.

One way we’ve found to help make this a little more manageable is to create a standardized way to configure local default configurations for developers, while creating an interface which can be used by service providers to set default configurations for use on the servers.

This comes about by standardizing on 4 functions:

  • Set-{ModuleName}Config –Environment [Prod|Test|Dev|Local]

    This is the function that most people will use. If you want to point the module at a particular environment’s services, use this function.

    For developers, this is useful to point the module at their most commonly used environment. For a service they help build and maintain, that would most likely be Local. But, for services they only consume, that is usually Prod.

    For module developers, this function can be used to set the default configuration for the module. In general, this turns out to be defaulted to Prod. If you’re not the developer of a service and you are going to use a Powershell module to interact with that service, you’re generally wanting to point it at Prod. This is the most common use case, and module developers usually set up module defaults for the most common use case.

    For service developers that use the module within their services, this command is flexible enough for them to determine what environment their service is running in and set up the module to connect to the correct endpoints.
  • Save-{ModuleName}DefaultConfig

    This is mostly used by developers.

    Once you have the environment set up the way you want it, use the Save function to save the configuration locally to disk. We have had success saving this file under the user’s local folder (right next to their profile), so the settings are not machine wide, but user specific.

  • Restore-{ModuleName}DefaultConfig

    This function usually isn’t called by developers / end users.

    This function is called when the module loads and it will check if the user has a local configuration file. If it finds one, it will load the values into memory.

    Services usually don’t have a local configuration file.
  • Test-{ModuleName}Configured

    This function usually won't be called by the end user. It's used internally to determine if all the important properties are setup before saving the properties to disk.

To get people to adopt this strategy, you have to make it easy for module developers to add the functionality into their module. To do that there’s one more function:

  • Add-DefaultConfigToModule –ModuleName <ModuleName> –Path <Path>

    This will add 4 templated files to a module, one for each function. It will also update the .psm1 file to end with a call to Restore-{ModuleName}DefaultConfig.

Below is a very mashed together version of the files for the module.

The code does assume all of the module configuration information is stored in $global:ModuleName.

And, these files are to be placed within a subdirectory of the DefaultConfig module called /resources/AddTemplate:
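
As a rough sketch, the four functions for a hypothetical module named Widget could look like this (the property names, URLs, and file location are assumptions):

```powershell
# Hypothetical sketch for a module named "Widget". Configuration lives in
# $global:Widget, and the saved defaults file sits next to the user's profile.
$script:ConfigPath = Join-Path (Split-Path $PROFILE) 'WidgetDefaultConfig.json'

function Set-WidgetConfig {
    param(
        [ValidateSet('Prod','Test','Dev','Local')]
        [string] $Environment = 'Prod'
    )

    # Point the module at the endpoints for the chosen environment.
    $serviceUrl = switch ($Environment) {
        'Prod'  { 'https://widget.example.com' }
        'Test'  { 'https://widget-test.example.com' }
        'Dev'   { 'https://widget-dev.example.com' }
        'Local' { 'http://localhost:5000' }
    }

    $global:Widget = @{
        Environment = $Environment
        ServiceUrl  = $serviceUrl
    }
}

function Test-WidgetConfigured {
    # Are all the important properties present before saving to disk?
    return ($null -ne $global:Widget) -and
           -not [string]::IsNullOrWhiteSpace($global:Widget.ServiceUrl)
}

function Save-WidgetDefaultConfig {
    if (-not (Test-WidgetConfigured)) { throw 'The Widget module is not configured.' }
    $global:Widget | ConvertTo-Json | Set-Content -Path $script:ConfigPath
}

function Restore-WidgetDefaultConfig {
    # Called from the .psm1 when the module loads; servers usually have no file.
    if (Test-Path $script:ConfigPath) {
        $saved = Get-Content -Path $script:ConfigPath -Raw | ConvertFrom-Json
        Set-WidgetConfig -Environment $saved.Environment
    }
}
```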

Telemetry in PowerShellForGitHub

on Monday, September 30, 2019

The PowerShellForGitHub module has a number of interesting things about it. One interesting aspect is its implementation for collecting telemetry data, Telemetry.ps1. Telemetry is very useful to help answer the questions of:

  • What aspects of your service are being used most frequently?
  • What aspects are malfunctioning?
  • Are there particular flows of usage which can be simplified?

But, to be able to collect the data necessary to answer those questions you have to make the collection process incredibly easy. The goal would be to boil down the collection process to a single line of code. And that’s what PowerShellForGitHub tried to do.

The level of data collection the module provides does take more than a single line of code to use, but it’s so easily done as a part of the development process that it doesn’t feel like “extra work” or “overhead”. Here’s a snippet from GitHubRelease.ps1’s Get-GitHubRelease:

Looking through the function you can see a hashtable called $telemetryProperties created early on, and its properties are slowly filled in as the function continues. Eventually, it gets to the point where a common function, Invoke-GHRestMethodMultipleResults, is called and the telemetry information is passed off to the underlying provider.
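
The pattern reads roughly like this (a paraphrased sketch; the property and parameter names are illustrative, not the module’s exact source):

```powershell
# Paraphrased sketch of the telemetry pattern; not PowerShellForGitHub's exact code.
$telemetryProperties = @{}

if ($PSBoundParameters.ContainsKey('ReleaseId'))
{
    # Record that the parameter was provided, not its value.
    $telemetryProperties['ProvidedReleaseId'] = $true
}

$params = @{
    'UriFragment'         = "repos/$OwnerName/$RepositoryName/releases"
    'Description'         = "Getting releases for $RepositoryName"
    'TelemetryEventName'  = $MyInvocation.MyCommand.Name
    'TelemetryProperties' = $telemetryProperties
}

return Invoke-GHRestMethodMultipleResults @params
```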

All of the hard work of setting up where the information will be collected and how it’s collected is abstracted away, and it boils down to being a subfeature of a single function, Invoke-GHRestMethodXYZ. Boiling everything down to that single line of code is what makes the telemetry in that module soo useful: it’s approachable. It’s not a headache that you have to go set up yourself or get permissions for; it just works.

To make it work at that level was no easy feat though! The code which makes all of that possible, including the amazing supplementary functions like Get-PiiSafeString, is really long and involves the usage of nuget.exe to download and load Microsoft’s Application Insights and Event Tracing .NET libraries. These are hidden away in Telemetry.ps1 and NugetTools.ps1.

So, given the idea that “Telemetry is an incredibly useful and necessary piece of software development in order to answer the questions asked at the top of this article”, the new question becomes “How can you refactor the telemetry code from PowerShellForGitHub to be an easy to reuse package that any PowerShell module could take advantage of?”

Monitor and Download Latest .Net Core Installer

on Monday, September 23, 2019

.NET Core has changed its IIS server installation model compared to .NET Full Framework. Full Framework updates were installable as offline installers, but they were also available through Windows Update/SCCM. However, with .NET Core, the IIS server installer, the “Hosting Bundle”, is only available as an offline installer that can be found by following these steps (example for 2.2.7):

Even though this process is really quick, it can feel repetitive and like something you shouldn’t need to do. This feeling can be compounded if you missed that a new release was created weeks or months before and someone else points out the update to you.

So, a small solution to these two minor inconveniences is to set up a watcher script which will monitor the AspNetCore team’s github repository for new releases, and upon finding a new one will download the Hosting Bundle and notify you of the update. This solution could run on a nightly basis.

In this example script those previously mentioned pieces of functionality are provided by:

  • PowerShellForGithub\Get-GitHubRelease

    This function can be used to pull back all the releases for a github repository and the results can be filtered to find the latest stable release.

    In order to know whether the latest release is newer than the release you currently have on your servers, you could simply check whether the date stamp for that release is the current day. However, for this sample script, it’s going to assume you can query an external system to find out the latest installed version. (A rough sketch of the overall flow appears after this list.)

  • Selenium \ SeleniumUcsb

    Selenium is used to navigate through the github release pages and find links to the download. It then downloads the Hosting Bundle installer by parsing through the pages to find the appropriate link. The parsing is actually a bit difficult to figure out sometimes, so an XPath Helper/tester for your browser can be really handy.

  • Emailing

    Send-MailMessage … yeah, that’s pretty straight forward.
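
Stitched together, the nightly watcher could look roughly like this (a simplified sketch; Get-CurrentlyInstalledVersion and Get-HostingBundleDownload are hypothetical placeholders for the version lookup and the Selenium download step):

```powershell
# Simplified sketch of the watcher; the helper functions and addresses are placeholders.
Import-Module PowerShellForGitHub

# Latest stable (non-prerelease) release of the AspNetCore repository.
$latest = Get-GitHubRelease -OwnerName 'aspnet' -RepositoryName 'AspNetCore' |
    Where-Object { -not $_.prerelease } |
    Sort-Object -Property created_at -Descending |
    Select-Object -First 1

$installed = Get-CurrentlyInstalledVersion   # e.g. '2.2.6', queried from an external system

if ($latest.tag_name.TrimStart('v') -ne $installed) {
    # Selenium step: walk the release page and download the Hosting Bundle installer.
    $installerPath = Get-HostingBundleDownload -Release $latest

    Send-MailMessage -To 'ops@example.com' -From 'watcher@example.com' `
        -Subject "New .NET Core Hosting Bundle: $($latest.tag_name)" `
        -Body "Downloaded to $installerPath" -SmtpServer 'smtp.example.com'
}
```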

Basic PowerShell Convertor for MSTest to xUnit

on Monday, September 9, 2019

This is just a quick script that can help convert a .Tests.csproj which was originally written for MS Test over to using xUnit. It probably doesn’t cover every conversion aspect, but it can get you moving in the right direction.

What it will convert (see the sketch after this list):

  • Replace using Microsoft.VisualStudio.TestTools.UnitTesting; with using Xunit;
  • Remove [TestClass]
  • Replace [TestMethod] with [Fact]
  • Replace Assert.AreEqual with Assert.Equal
  • Replace Assert.IsTrue with Assert.True
  • Replace Assert.IsFalse with Assert.False
  • Replace Assert.IsNull with Assert.Null
  • Replace Assert.IsNotNull with Assert.NotNull
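
As a minimal sketch (the project path is a placeholder), the replacements above boil down to:

```powershell
# Minimal sketch: apply the MSTest -> xUnit replacements to every .cs file under a
# test project folder. The path is a placeholder.
$projectPath = 'C:\src\MyProject.Tests'

$replacements = [ordered]@{
    'using Microsoft.VisualStudio.TestTools.UnitTesting;' = 'using Xunit;'
    '[TestClass]'      = ''
    '[TestMethod]'     = '[Fact]'
    'Assert.AreEqual'  = 'Assert.Equal'
    'Assert.IsTrue'    = 'Assert.True'
    'Assert.IsFalse'   = 'Assert.False'
    'Assert.IsNull'    = 'Assert.Null'
    'Assert.IsNotNull' = 'Assert.NotNull'
}

Get-ChildItem -Path $projectPath -Filter '*.cs' -Recurse | ForEach-Object {
    $content = Get-Content -Path $_.FullName -Raw
    foreach ($key in $replacements.Keys) {
        # Plain string replacement avoids regex-escaping the brackets.
        $content = $content.Replace($key, $replacements[$key])
    }
    Set-Content -Path $_.FullName -Value $content
}
```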

Powershell: Using a file hash to test for a change

on Monday, August 12, 2019

The PowershellForGitHub module is great! But … sometimes it can be a bit verbose when it’s trying to help out new users/developers of the module. This isn’t a bad thing in any way, just a personal preference thing. And, the module owner, Howard Wolosky, is really open to suggestions. Which is great!

So, I opened a ticket (PowerShellForGitHub Issue #124) to explain my confusion over the warning messages. And, to be fair, I explained my confusion in a very confusing way. But, he was nice enough to work through it with me and we found that something we needed was a way to tell if someone had updated a settings file after downloading the module on their machine.

Enter, Get-FileHash.

This command looks like it’s been around for quite a while, but it does the classic job of creating a hash of a file. And, that hash can be stored in code, so that it can be used to check if a change in the file has occurred.

So, how to use the check.

Here’s the original code:

And, here’s the updated code using Get-FileHash:
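
The Get-FileHash check amounts to something like this (a sketch; the file path and the stored hash value are placeholders):

```powershell
# Sketch: compare a hash captured when the module shipped against the current hash
# of the settings file to detect whether the user has modified it.
$settingsFilePath    = Join-Path $PSScriptRoot 'GitHubConfiguration.ps1'   # placeholder path
$shippedSettingsHash = '1DF851935D7BE0E098A5055F1220E4C4E05CD129B2A3817A7D8E4C7005E6E0C3'  # placeholder

$currentHash = (Get-FileHash -Path $settingsFilePath -Algorithm SHA256).Hash

if ($currentHash -eq $shippedSettingsHash) {
    Write-Warning 'The settings file has not been updated; default configuration values are in use.'
}
```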

PowerShellForGitHub–Adding Get-GitHubRelease

on Monday, August 5, 2019

PowerShellForGitHub is an awesome powershell module for interacting with the GitHub API. It has a wide set of features that are already implemented and it’s supported by Microsoft (!!). You can also tell the amount of care the maintainer, Howard Wolosky, has put into it when you dig into the code and read through the inline documentation, contributing documentation and telemetry support (!!). BTW, if you ever need to create a PII-safe string for transmission, check it out: Get-PiiSafeString.

One of the features I was looking for the other day was the ability to retrieve a list of releases for a repository. (I was building a script to monitor the dotnet/core releases; in order to build an ASP.NET Core Hosting Bundle auto-installer.)

I submitted a pull request of the update last week and was really impressed with all the automation in the pull request processing. The first thing that surprised me was the integrated msftclas bot (Microsoft Contribution License Agreements), which posted a legal agreement form in which I (or the company I represent) consent to grant Microsoft rights to the code we contribute. It was soo smooth and easy to do.

Next was the meticulous level of comments and review notes on the pull request. If he made all those comments by hand, holy moly! That would be amazing and I would want to praise him for his patience and level of detail. Hopefully, some of the comments were stubbed out by a script/bot; which would be a really cool script to know about.

So, I’m gonna go through the comments and see if I can update this pull request.

  • *facepalm* Wow, I really missed changing the name GitHubLabels.ps1 to GitHubReleases.ps1 in the .tests.ps1 file.
  • White space in .tests.ps1: Ahhh … I can better see the white space formatting style now.
  • Examples missing documentation: Hahaha! My mistake. It looks like I started writing them and got distracted.
  • Telemetry: I loved the note:

    For these, I think it's less interesting to store the encrypted value of the input, but more so that the input was provided (simply in terms of tracking how a command is being used).

    Thank you for pointing that out! It makes complete sense.

In summary, a big Thank You to Howard Wolosky and the Microsoft team for making this module! It was a huge time saver and really informative on how to write Powershell code in a better way.

Pester Testing Styles

on Monday, July 29, 2019

Pester is a great testing framework for Powershell. And it can be used in a variety of different testing styles: TDD, BDD, etc. I’m going to look at two different styles, both of which are perfectly good to use.

TDD’ish with BeforeAll / AfterAll

Lines 4 through 7 are used to ensure that modules don’t get repeatedly imported when these tests are run as part of a Test Suite. However, they will allow modules to be reloaded if you are running the individual test file within VSCode. For the most part, they can be ignored.

In this more Test Driven Development style test

  • The Describe block’s name is the function under test
  • And each It test is labelled to describe a specific scenario it is going to test
  • All the logic for setting up the test and executing the test are contained within the It block
  • This relies on the Should tests to have clear enough error messages that, when reading through the unit test output, you can intuit what the failing condition was

This is a very straight forward approach and it’s really easy to see how all the pieces are setup. It’s also very easy for someone new to the project to add a test to it, because everything is so isolated. One thing that can really help future maintainers of a project is to write much lengthier and more descriptive It block names than the ones in the example, in order to help clarify what is under test.

Some things to note:

In this setup, the BeforeAll script is used to configure the environment to be ready for the tests that are about to be run. Over time, this function has largely been replaced with BeforeEach, but for this example I’m using BeforeAll. The BeforeAll is setting up some values and variables that I want available when the tests are run. I put a prefix of $script: on the variables created within the BeforeAll function because I have seen behavior where the variable was no longer defined outside of the scope of BeforeAll.

The AfterAll is a corresponding block to the BeforeAll, and is pretty self explanatory. The interesting part of these two blocks is that they have to be declared within the Describe block and not within the InModuleScope block. They will not be run if they are declared in the InModuleScope block.
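
Pulling those pieces together, the overall shape of this style looks something like this (a rough sketch with placeholder module and function names, not the original example):

```powershell
# Rough sketch of the TDD'ish layout; MyModule and Get-Widget are placeholders.
Describe 'Get-Widget' {

    BeforeAll {
        # Values the tests need; $script: scope so they survive outside this block.
        $script:testServer = 'localhost'
    }

    AfterAll {
        # Clean up anything the tests created.
    }

    # BeforeAll/AfterAll sit in the Describe block; the tests sit in InModuleScope.
    InModuleScope MyModule {

        It 'returns nothing when the widget does not exist' {
            $result = Get-Widget -Name 'does-not-exist' -Server $script:testServer
            $result | Should -BeNullOrEmpty
        }

        It 'returns the requested widget when it exists' {
            $result = Get-Widget -Name 'known-widget' -Server $script:testServer
            $result.Name | Should -Be 'known-widget'
        }
    }
}
```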

BDD’ish with try / finally

Lines 10 and 11 are used to ensure that the module has been configured correctly (for normal usage, not specific to the tests) and that the module isn’t being reloaded when run in a Test Suite.

In this more Behavior Driven Development style test

  • Uses the Describe block to outline the preconditions for the tests
  • Immediately following the declaration of the Describe block, it has the code which will setup the preconditions
  • Uses the Context block to outline the specific scenario the user would be trying
  • And, immediately following the declaration, it has the code which will execute that scenario
  • Uses the It blocks to outline the specific condition that is being tested.
  • This requires more code, but makes it clearer what condition actually failed when reviewing unit test output

This is not as straight forward of an approach, as different areas of the code create the conditions which are being tested. You might have to search around a bit to fully understand the test setup. It also adds a little more overhead when testing multiple conditions as you will be writing more It block statements. The upside of that extra work is that the unit test output is easier to understand.

Some things to note:

In this setup, variable scope is less of an issue because variables are defined at the highest scope needed to be available in all tests.

The BeforeAll/AfterAll blocks have also been replaced with try/finally blocks. This alternative approach is better supported by Pester, and it can also help new developers gain a key insight into the way Pester tests are run: they are not run in parallel, but instead are run in order from top to bottom. Because of this, you can use some programming tricks to mock and redefine variables in particular sections of the code without having to worry about affecting the results of other tests.
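
As a rough sketch with placeholder names, the overall shape of this style is:

```powershell
# Rough sketch of the BDD'ish layout; module, function, and server names are placeholders.
try {
    # Preconditions that apply to everything in this file.
    Import-Module MyModule -Force

    Describe 'When the widget service is reachable' {

        # Code that establishes the precondition for this Describe.
        $connection = Connect-Widget -Server 'localhost'

        Context 'and the user requests a widget that exists' {

            # Code that executes the scenario.
            $result = Get-Widget -Connection $connection -Name 'known-widget'

            It 'returns a single widget' {
                @($result).Count | Should -Be 1
            }

            It 'returns the widget that was requested' {
                $result.Name | Should -Be 'known-widget'
            }
        }
    }
}
finally {
    # Teardown that always runs, even if the tests above throw.
    Remove-Module MyModule -ErrorAction SilentlyContinue
}
```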

RSAT Setup on Windows 10 1903 - 0x800f0954

on Monday, July 15, 2019

Windows 10 1803 was the last time that Windows 10 had a separate RSAT download bundle. This is the note from the download page:

IMPORTANT: Starting with Windows 10 October 2018 Update, RSAT is included as a set of "Features on Demand" in Windows 10 itself. See "Install Instructions" below for details, and "Additional Information" for recommendations and troubleshooting. RSAT lets IT admins manage Windows Server roles and features from a Windows 10 PC.

This is great! It makes re-installation of the RSAT tools just a little bit easier; and a little bit more aligned with automation.

A very nice Microsoft MVP, Martin Bengtsson, saw this new direction for installation and built out an easy to use installation script written in powershell. Here’s a blog post on what it does and how to use it.

The download of the script, execution and setup would have been pretty easy except for one thing … Error 0x800f0954.

It turns out that you need to enable a Group Policy that will allow your machine to download the optional RSAT packages from Windows Update servers instead of your on-premise Windows Server Update Services.

Luckily, Prajwal Desai has already figured this out and has an easy to follow set of instructions to update your Group Policy and allow for the download to occur.
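
For reference, once the Group Policy change is in place, the Features on Demand install itself can be done from powershell with something like this (a sketch; the capability wildcard is just an example):

```powershell
# Sketch: install the Active Directory RSAT Feature on Demand once downloads from
# Windows Update are allowed. Adjust the -Name wildcard for the tools you need.
Get-WindowsCapability -Online -Name 'Rsat.ActiveDirectory*' |
    Where-Object { $_.State -ne 'Installed' } |
    Add-WindowsCapability -Online
```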

Basic Install-WindowsTaskTemplate

on Monday, July 8, 2019

I don’t install powershell scripts as Windows Tasks every day (and probably need to find a way for another system to manage that responsibility), so it’s easy to forget how to do them. Here’s a quick template to install a Windows Task on a remote machine:
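
Something along these lines (a sketch; the server name, script path, schedule, and account are placeholders):

```powershell
# Sketch of a remote scheduled-task install; all names and paths are placeholders.
$server     = 'SERVER01'
$taskName   = 'Nightly-Maintenance'
$scriptPath = 'C:\Scripts\Invoke-NightlyMaintenance.ps1'

Invoke-Command -ComputerName $server -ScriptBlock {
    param($taskName, $scriptPath)

    $action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
                    -Argument "-NoProfile -ExecutionPolicy Bypass -File `"$scriptPath`""
    $trigger   = New-ScheduledTaskTrigger -Daily -At '2:00 AM'
    $principal = New-ScheduledTaskPrincipal -UserId 'NT AUTHORITY\SYSTEM' -RunLevel Highest

    # -Force overwrites the task if it already exists.
    Register-ScheduledTask -TaskName $taskName -Action $action -Trigger $trigger `
        -Principal $principal -Force
} -ArgumentList $taskName, $scriptPath
```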

Fun Little Cryptogram

on Monday, July 1, 2019

There’s an interesting site https://ironscripter.us/ which creates powershell based scripting challenges for practicing DevOps thinking and continual learning. It’s kind of like a kata website with a funny “Battle for the Iron Throne” feel to it.

A few days ago they posted a really small cryptogram to find a hidden message within some text. I say really small because I have a coworker that is active in crypto games and the stuff he does is mind blowing (https://op011.com/).

Ironscripter’s challenge is more lighthearted and just a quick game to help you think about powershell, string manipulation, visualizing data to make it useful, and so on. So, here’s my solution to the challenge.

(I think I might go back later and use a language file to try and match the text in the possible solutions; instead of trying to look through them manually.)

(Thanks to David Carroll for pointing this site out: His solution)

Parse a SQL Script Before Execution

on Monday, June 17, 2019

Recently, we’ve added support for running database scripts as a part of our deployment pipeline. One of the steps added to the pipeline was a check to ensure the sql script that would be executed parses correctly. This is a useful evaluation far before a deployment, not just to prevent problems, but to give fast feedback to our teams so they can correct the issue.

The solution turned out to be much more generic than we had imagined. Because we didn’t yet know how generic the solution was, we built some custom code for executing sql commands against a database. However, if we were to go back and do it a second time, we would probably use an open source tool to perform the execution of the scripts:

  • dbatools

    This is an amazing powershell module of database related commands and tools. It is designed to be used in DevOps pipelines and to make DBA’s lives easier.

    We looked at this module when first implementing the parsing functionality, but we wanted to get the output (including normal vs error output) in a particular way and we would need more time to get it to fit our exact needs.

    But! It’s an absolutely amazing collection of tools and we will definitely revisit it.
  • Invoke-SqlCmd2 (script)

    This script/module is a healthier and more secure version of Invoke-SqlCmd. It adds in functionality that makes your life easier, without adding on soo much that it becomes confusing to use.

    We started out using this, but then had some extra needs for error handling and output parsing that forced us to explore alternatives.
  • sqlcmd.exe

    This is what we ended up using. It’s a command line utility, so it doesn’t have native powershell output. But, the output feels a lot like the output from SQL Server Management Studio. Because of that comfort, it made it the easiest to wrap our heads around for handling error and output parsing.

The Parsing Script

For Microsoft SQL Server, parsing is made possible by wrapping your SQL query in the SET PARSEONLY directive.

From there, you just need to send the script over to the SQL Server and execute it. If SQL Server sends back no output, then it parsed successfully. It’s nice and straight forward.
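
A minimal version of that check might look like the following (a sketch, not the pipeline’s actual Invoke-DbUpdateScript.ps1; the server name and file path are placeholders):

```powershell
# Sketch: wrap the deployment script in SET PARSEONLY and send it to SQL Server with
# sqlcmd.exe. -I turns on QUOTED_IDENTIFIER and -b returns a non-zero exit code on error.
$server     = 'SQLSERVER01'          # placeholder
$scriptPath = 'C:\deploy\update.sql' # placeholder

$sql = Get-Content -Path $scriptPath -Raw
$parseOnlyScript = "SET PARSEONLY ON;`r`nGO`r`n$sql`r`nGO`r`nSET PARSEONLY OFF;"

$tempFile = [System.IO.Path]::GetTempFileName()
Set-Content -Path $tempFile -Value $parseOnlyScript

$output = & sqlcmd.exe -S $server -I -b -i $tempFile 2>&1
Remove-Item -Path $tempFile

if ($LASTEXITCODE -ne 0 -or $output) {
    # Any output at all means SQL Server had something to complain about.
    throw "SQL script failed to parse:`r`n$($output | Out-String)"
}

'SQL script parsed successfully.'
```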

Lessons Learned

QUOTED_IDENTIFIERS

SQL Server Management Studio has a number of default settings which you are generally unaware of. One of those is that the QUOTED_IDENTIFIER directive is set to ON. When using the sqlcmd.exe utility, you will need to use the -I parameter in order to turn on the same functionality.

The Invoke-DbUpdateScript.ps1 below uses the parameter.

The error message looks like this:

Msg 1934, Level 16, State 1, Server xxxxxx, Line 14

UPDATE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

Linked SQL Servers & Transactions

We required that all queries which are run through the deployment pipeline must be wrapped in a transaction. This guarantees that if any error occurs (which will end the connection to the server), the changes will be rolled back automatically. However, this could result in an error message of: “Could not enlist in a distributed transaction.”

To work around the problem, change the transaction isolation level from SERIALIZABLE to READ COMMITTED.


Scripts

Replacing Invoke-WebRequest with HttpClient

on Monday, June 10, 2019

I’ve written before about how frustrating the Error handling for Invoke-WebRequest and Invoke-RestMethod can be. But, there is another way to make web requests which will never update the global $Error object: write your own wrapper around HttpClient.

This method is much, much more complicated than using Invoke-WebRequest, Invoke-RestMethod, or even New-WebServiceProxy. But, it will give you complete control over the request and the response. And as a nice side effect, it’s cross platform compatible (runs on linux and windows).

Below is an example use of HttpClient to call Apigee’s OAuth Login endpoint.
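
A bare-bones version of such a wrapper looks something like this (a generic sketch; the token endpoint, grant type, and credentials are placeholders, not Apigee’s actual API):

```powershell
# Generic sketch of calling an OAuth token endpoint with HttpClient from PowerShell.
# Because HttpClient is used directly, a failed request never touches $global:Error
# unless you choose to throw.
Add-Type -AssemblyName System.Net.Http

$client = [System.Net.Http.HttpClient]::new()
try {
    $pairs = [System.Collections.Generic.Dictionary[string,string]]::new()
    $pairs.Add('grant_type', 'password')
    $pairs.Add('username', 'user@example.com')      # placeholder
    $pairs.Add('password', 'not-a-real-password')   # placeholder
    $body = [System.Net.Http.FormUrlEncodedContent]::new($pairs)

    $response = $client.PostAsync('https://login.example.com/oauth/token', $body).Result
    $content  = $response.Content.ReadAsStringAsync().Result

    if (-not $response.IsSuccessStatusCode) {
        Write-Warning "Request failed with $($response.StatusCode): $content"
    }
    else {
        $content | ConvertFrom-Json
    }
}
finally {
    $client.Dispose()
}
```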

(PS. The idea for using an HttpClient came from David Carroll’s PoShDynDnsApi powershell module, which ships two implementations that use HttpClient (.NET Core and .NET Full Framework). The reason for two implementations is that DynDns requires one of their calls to perform a non-standard GET request with a body. Microsoft’s HttpClient implementation is pretty strict about following the rules and does not allow a body to be sent in GET requests. So, he had to use reflection to inject a body into his request. Each version of .NET had a different internal class structure that had to be set differently. It’s a pretty amazing work around.)

Don’t update IIS’ applicationHost.config too fast

on Monday, June 3, 2019

IIS’ applicationHost.config file is the persistent backing store for a server’s IIS configuration. And, a running IIS instance will monitor that file for any changes on disk. Any changes will trigger a reload of the configuration file and an update to IIS’ configuration, including application pool updates. This is a really nice feature which allows engineering teams to perform updates to IIS hosts using file operations, which makes it much more amenable to alternative configuration management solutions (hand written scripts, chef, puppet, etc).

Unfortunately, there is a risk involved with updating applicationHost.config outside of the standard appcmd.exe or powershell modules (webadministration and iisadministration). Because the file is read in from disk after each update, a series of rapid updates can cause a pseudo race condition. Even though the file system should prevent reads from occurring while a write is occurring, there seems to be a reproducible problem where IIS may only read in a partial XML configuration file (applicationHost.config) instead of the full file as intended. It’s almost as if updating the file either prevents the read from finishing, or the read starts picking up the changes halfway through. This only happens sometimes, but if your IIS server is busy enough and you perform enough writes to the applicationHost.config file you can get this error to occur:

The worker process for application pool ‘xxxxxxxxxxxxxx’ encountered an error ‘Configuration file is not well-formed XML’ trying to read configuration data from ‘\\?\C:\inetpub\temp\apppools\xxxxxxxxxxxxxx\xxxxxxxxxxxxx.config’, line number ‘3’. The data field contains the error code.


An odd thing to note is that the error message has an unusual value for the .config file name. It uses the application pool name instead of the normal ‘web.config’ (ie. ‘\\?\C:\inetpub\apppools\apppoolname1\apppoolname1.config’).

This is pretty easy to make happen with a for loop in a script. For example:
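
Something like this (a sketch; please don’t point it at a production server):

```powershell
# Sketch of a loop that reproduces the problem: many rapid writes to
# applicationHost.config with no pause in between. Do not run this on a busy
# production server; it is exactly the pattern that triggers the error above.
$appHostConfig = 'C:\Windows\System32\inetsrv\config\applicationHost.config'

for ($i = 0; $i -lt 50; $i++) {
    [xml]$config = Get-Content -Path $appHostConfig
    $config.configuration.'system.applicationHost'.applicationPools.add[0].SetAttribute('autoStart', 'true')
    $config.Save($appHostConfig)   # no delay: IIS may pick up a partially written file
}
```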

To prevent the reading problem from happening, there are a number of ways:

  • Use appcmd.exe as Microsoft would suggest
  • Use the Powershell modules that Microsoft provides along with the Start/Stop-*CommitDelay functions
  • Or, put the script to sleep for a few seconds to let IIS process the previous update. This is the most flexible as you can perform updates using a remote network share, whereas the others require an active RPC/WinRM session on the IIS server. (Example below.)
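
The sleep-based example mentioned above could look like this (a sketch; the share path and the update logic are placeholders):

```powershell
# Sketch of the sleep workaround: apply each applicationHost.config change over a
# network share, then pause so IIS can finish re-reading the file before the next write.
$appHostConfig = '\\IISSERVER01\c$\Windows\System32\inetsrv\config\applicationHost.config'

foreach ($update in $pendingUpdates) {   # $pendingUpdates is a placeholder collection
    [xml]$config = Get-Content -Path $appHostConfig

    # ... apply $update to $config here ...

    $config.Save($appHostConfig)

    # Give IIS a few seconds to process the previous change before the next write.
    Start-Sleep -Seconds 5
}
```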

Creating Charts/Graphs with Powershell for Slack - Pt3

on Monday, April 15, 2019

Continued from Creating Charts/Graphs with Powershell for Slack – Pt2.

Send the Graph Back Into Slack

Since ChartJs was able to create a .png image of the chart, now we just have to figure out how to get the image into a message and send it back to slack.

In my imagination, the best possible approach would be to use the Slack API’s files.upload endpoint to push the image up to slack and simply reference the image using slack’s private urls. However, I could not get this to work. Maybe someday in the future.

The PSSlack module (which is great!) does have an upload feature built into the New-SlackMessageAttachment command. But, in my experimentation I was only able to upload files or large text blocks; when I tried to upload images they never appeared as images. They just appeared as files that could be downloaded. Maybe I was doing something wrong.

So, I went a third route and did something which is pretty bad design. I used a website that I had access to in order to host the images and reference them as urls. This comes with the drawback that the images hosted on the website would need to be monitored for retention periods and cleanup. But, it’s a quick and easy way to get the image up there.

Below is a wrapper command which will use the image url in a PSSlack message. This will display the newly created graph in chat just as you would hope.

Script using the functions together:

Send-SlackHubotImageUrl:
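
A sketch of the wrapper using PSSlack might look like this (the PSSlack parameter names are based on that module’s documentation; the token handling is simplified):

```powershell
# Sketch of a Send-SlackHubotImageUrl-style wrapper built on the PSSlack module.
function Send-SlackHubotImageUrl {
    param(
        [Parameter(Mandatory)] [string] $Channel,
        [Parameter(Mandatory)] [string] $ImageUrl,
        [Parameter(Mandatory)] [string] $Token,
        [string] $Title = 'Utilization Graph'
    )

    # The attachment references the externally hosted image by url.
    $attachment = New-SlackMessageAttachment -Title $Title `
                      -ImageURL $ImageUrl `
                      -Fallback $Title `
                      -Color '#36a64f'

    New-SlackMessage -Channel $Channel -Attachments $attachment |
        Send-SlackMessage -Token $Token
}

# Example: Send-SlackHubotImageUrl -Channel '#ops' -ImageUrl $graphUrl -Token $slackToken
```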

Creating Charts/Graphs with Powershell for Slack–Pt2

on Monday, April 8, 2019

Continued from Creating Charts/Graphs with Powershell for Slack – Pt1.

Generate a graph/chart

So, this was an adventure that went down a wild number of paths. What I needed was a program that could run from powershell and generate a graph in .png/.svg format. The image would be used later to send a message in slack.

Skip down below to chartjs/canvas for a solution.

Initially I wanted to be able to build a graph using powershell, so I searched https://www.powershellgallery.com/ and github for what they had under ‘chart’ or ‘graph’. Here are some of the ones I tried and why I didn’t settle on them:

At this point, I changed directions and started looking for regular .NET or nuget packages which could do graphing. This also resulted in a dead end, usually because of price.

  • Live Charts (https://lvcharts.net/)

    I didn’t see a way to generate an image.
  • Highcharts .NET (https://www.highcharts.com/blog/products/dotnet/)

    This one came up a lot on stackoverflow answers and blog posts. I think it’s really good and the price tag shows that the developers believe that too.
  • DotNetCharting (https://www.dotnetcharting.com/)

    I think this could do the job, but it also costs money.
  • OxyPlot (http://www.oxyplot.org/)

    I don’t think I spent as much time on this as I should have. I was still a little sore at OxyPlotCli making it seem like it was impossible to force the setting of the X axis from 0 to 100. When working with the powershell version I used the OxyPlot source code to check if I could set the Y axis min/max values using reflection, but the X axis is always recalculated dynamically on line graphs. So, I assumed that would also be the case with the nuget version.

So, by this point I reached back in time to about 15 years ago, when a friend showed me the plot he had made in just a couple of hours with Gnuplot. And, he was using a Windows PC.

  • Gnuplot win64 mingw pre-compiled
    (http://tmacchant3.starfree.jp/gnuplot/Eng/winbin/)

    I really should have spent more time with this one. I only spent about 30 minutes on it and didn’t get a graph produced, so I scratched it. But, looking back now, it was probably the most straight forward product with the least amount of dependencies. I would really like to take a second look at this one.

At this point, I decided to try to stick with the two technology stacks I was already using (powershell & nodejs). Since I felt like I had exhausted the powershell stack, I took a look at npm for the nodejs stack.

  • Google Charts (https://developers.google.com/chart/)

    Google Charts came up over and over again on stackoverflow and blog post answers. But, most of the answers had to do with an older (and recently deprecated 3/18/2019) version which allowed for web api calls to be made to google which would return the generated charts.

    The newer Google Charts runs completely in the browser and generates the graphs on the client side. To make this work, I would just need to be able to use a shim in nodejs to make it use the google charts as if it were in the browser. The recommended choice was JSDOM.

    However, before I really took the time to make this work I remembered that a co-worker got charting working using chartjs & canvas. So, I did a google search on that and …

chartjs/canvas on node charting


This finally did the trick. The software was pretty easy to use, with plenty of examples on their website, and it was in a technology stack that I had previous experience with.

The challenge with using this combination is that I wanted to make the charting functionality available from a Powershell command. So, doing the ridiculous, I built a Powershell module around ChartJS. (Powershell code which, if you remember from Pt 1, is being called from nodejs!)

Some notes on this code …

  • package.json

    To use this module you will need to use npm install to pull down the node dependencies found in package.json.
  • ChartJs.psm1

    This will set up some variables which will be referenced elsewhere.
  • New-ChartJsUtilizationGraph.ps1

    Obviously, this is the one that actually generates the graph. It expects data to be either CPU or Memory utilization data; anything else will probably cause an issue.

    In order to make a useful graph, we want as many data points from VMware as possible to display (see previous post). Each data point’s X value will have a datestamp associated with it. Unfortunately, that can make the X axis display become very dense / unreadable as the multiple timestamps display on top of each other (or they don’t display at all). To have better control over this, lines 41-52 ensure that there are only 5 datestamps displayed: first in time, 1/4 of the way through, 1/2 way, 3/4 of the way, and the most recent timestamp.

    The function works by generating the javascript which can actually run in nodejs; a rough sketch of this step appears after these notes. This script is placed in a temporary folder and the node_modules needed to use it are copied into that same folder (for local reference/usage).

    At the end of the script, it should clean up the temporary folder and files, but it won’t do that if an error occurs. I wanted to be able to debug errors, and if the temporary folder was always cleaned up … well, you get it.
  • GraphUtilizationTemplate.js

    This is a copy and paste of an example file I found on stackoverflow. Using the documentation, it didn’t take too long to change things to what I needed; but it also wasn’t straight forward. You’ll probably need to do some experimentation to find what you need.

    An important thing to note: xAxes > ticks > autoSkip: false is really important to make the labels on the x axis appear correctly.
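
As a rough sketch of that generate-the-JavaScript-then-run-node step (the __DATA__/__OUTPUT__ placeholders and the folder layout are assumptions):

```powershell
# Sketch of the core flow: fill in a JS template, copy the node_modules next to it,
# run node, and pick up the rendered .png.
$workDir = Join-Path $env:TEMP ("chartjs-" + [guid]::NewGuid())
New-Item -ItemType Directory -Path $workDir | Out-Null

$outputPng = Join-Path $workDir 'graph.png'

# Fill the data points and output path into the template ($dataPoints comes from Get-Stat).
$template = Get-Content -Path "$PSScriptRoot\GraphUtilizationTemplate.js" -Raw
$js = $template.Replace('__DATA__', ($dataPoints | ConvertTo-Json -Compress))
$js = $js.Replace('__OUTPUT__', $outputPng.Replace('\', '\\'))
Set-Content -Path (Join-Path $workDir 'graph.js') -Value $js

# Copy the node_modules needed by the template for local resolution.
Copy-Item -Path "$PSScriptRoot\node_modules" -Destination $workDir -Recurse

& node (Join-Path $workDir 'graph.js')

# $outputPng now holds the rendered chart, ready to be sent to Slack.
```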

Next up: Creating Charts/Graphs with Powershell for Slack–Pt3 (Sending an Image to Slack).

Creating Charts/Graphs with Powershell for Slack–Pt1

on Monday, April 1, 2019

This adventure came about from troubleshooting a problem with coworkers through slack. While we had hubot commands available to us to look at CPU and Memory usage for a given point in time, we would go to vSphere’s UI to screen grab Memory usage over time. It quickly became apparent that we needed a command to just generate these charts as we needed them in chat. So, our use case was:

Using historical VMWare/vSphere data generate a graph of CPU or Memory usage over time and upload it to slack using a hubot command.

This is actually way harder than it sounds for one reason: software that generates charts & graphs is pretty complicated. So, breaking down the use case into smaller sub-problems, this is what it looks like:

  • Build a Hubot (nodejs) command that will kick off the process and call into Powershell to execute the necessary operations.
  • Retrieve historical CPU/Memory data about a machine from our VMWare/vSphere infrastructure using PowerCLI.
  • Generate a graph/chart from the data using any graphing technology that will actually run (a) outside of a browser and (b) in our environment. This is much more difficult than it sounds. (A Future Post)
  • Send the graph back into slack so that it can be displayed in the channel where it’s needed. (Yet Another Future Post)

Build a Hubot Command

We currently run a hubot (nodejs) instance to handle ChatOps commands through Slack. Even though hubot is built on top of nodejs, we are a Microsoft shop and the majority of our scripting knowledge is in Powershell. So, long ago, when we implemented hubot we used PoshHubot to bridge the technology gap and allow the nodejs platform to call our Powershell modules.

If we had to do it over again, with the technology that’s available today, we would probably use Poshbot instead.

Anyways, we’ve been doing this for a long time, so this part of things wasn’t too new or difficult.

Retrieve Historical CPU/Memory Data

Within the debugging procedures that started all of this, we were pulling in graphs using screen grabs of vSphere. So, the best place to get data from would be vSphere and VMWare does a great job of making that information available through PowerCLI.

To do this, we used the Get-Stat command with either the cpu.usage.average or mem.usage.average stat being retrieved.

Quick note: I’m having a difficult time getting Connect-VIServer to work with a PSCredential object. I’m not sure what the issue is, but for now the authentication process to the server is working because Connect-VIServer allows you to store credentials on a machine and reuse them. That was pretty nice of them.

The Get-Stat data is somewhat wonky at times. In the “1h” timeframe I’m using an IntervalSecs of 20 simply because that’s the only interval allowed. Each data point is always 20 seconds apart at 00, 20, and 40 seconds. If you use a Start and Finish range of over a week along with an IntervalSecs amount, you could wait a really long time to get your data back; but you will get back all the data.

Because of all the data that will come back if you use an Interval amount, when you start to get a time range longer than a few hours it’s best to just let the Get-Stat command figure out what’s the appropriate interval amount to send back. That’s why on the “1d”, “1w”, “1m”, and “1y” timeframes I just use the Start and Finish parameters without an interval.

Both the cpu.usage.average and mem.usage.average data points return a percentage value back. This is fine for the CPU data, because we normally think about CPU usage in percentages. But, for memory, we normally think of its usage in GBs. So, there’s a quick section which converts the Memory usage percentage over to the actual amount of GBs used.
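
The retrieval itself boils down to something like this (a sketch; the vCenter name, VM name, and timeframe handling are simplified):

```powershell
# Sketch of pulling utilization data with PowerCLI's Get-Stat.
Import-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter.example.com' | Out-Null   # uses stored credentials

$vm = Get-VM -Name 'app-server-01'

# "1h" timeframe: 20-second samples for the last hour.
$stats = Get-Stat -Entity $vm -Stat 'cpu.usage.average' `
             -Start (Get-Date).AddHours(-1) -Finish (Get-Date) -IntervalSecs 20

# For longer ranges, leave the interval out and let Get-Stat pick it:
# $stats = Get-Stat -Entity $vm -Stat 'mem.usage.average' -Start (Get-Date).AddDays(-7) -Finish (Get-Date)

$dataPoints = $stats | Select-Object -Property Timestamp, Value
```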

Next time, I’ll dig into New-ChartJsUtilizationGraph.

Selenium & Powershell

on Monday, March 25, 2019

Selenium is sort of a pseudo-industry standard for UI browser testing. There are other tools available (like cypress.io), but Selenium is really well known and popular. And that’s why it’s been ported or made available in so many other languages. It’s made available in Powershell using the Selenium module by Adam Driscoll (he was the creator of the Powershell Tools for Visual Studio).

The documentation in the github readme.md is short, but it’s all you really need to get started. But, you quickly start to run into the same problems the rest of the Selenium community runs into (Selenium – How to wait until page is completely loaded [duplicate], Selenium wait for Ajax content to load – universal approach).

So, here’s a quick function to help wait for a particular element on a page to load.

And, here’s a sample Pester test using the function.
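
Roughly, the function and a test using it look like this (a sketch; the Selenium module’s cmdlet names and parameters can vary between versions, so treat it as illustrative):

```powershell
# Sketch of a wait helper built on the Selenium PowerShell module.
function Wait-SeElement {
    param(
        [Parameter(Mandatory)] $Driver,
        [Parameter(Mandatory)] [string] $XPath,
        [int] $TimeoutSeconds = 30
    )

    $stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
    while ($stopwatch.Elapsed.TotalSeconds -lt $TimeoutSeconds) {
        $element = Find-SeElement -Driver $Driver -XPath $XPath
        if ($null -ne $element) {
            return $element
        }
        Start-Sleep -Milliseconds 500
    }

    throw "Timed out after $TimeoutSeconds seconds waiting for element '$XPath'."
}

# Illustrative Pester usage:
# Describe 'Login page' {
#     $driver = Start-SeChrome
#     Enter-SeUrl -Driver $driver -Url 'https://example.com/login'
#     It 'shows the username field' {
#         Wait-SeElement -Driver $driver -XPath "//input[@name='username']" | Should -Not -BeNullOrEmpty
#     }
#     Stop-SeDriver $driver
# }
```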

