Default Configurations for PS Modules

on Monday, November 25, 2019

A common problem with PowerShell modules is that they need to be configured slightly differently depending on who is using them. For example, developers may want a module to use a local instance of a service for development or testing, while on a server the module might be expected to connect to the instance of the service specific to that environment. These are two separate groups of users, but each has the same need: a default configuration that makes sense for them.

One way we’ve found to make this a little more manageable is to standardize how developers configure their local defaults, while also creating an interface that service providers can use to set default configurations for use on the servers.

This comes about by standardizing on 4 functions (a short usage sketch follows the list):

  • Set-{ModuleName}Config -Environment [Prod|Test|Dev|Local]

    This is the function that most people will use. If you want to point the module at a particular environment’s services, use this function.

    For developers, this is useful for pointing the module at the environment they use most often. For a service they help build and maintain, that would most likely be Local; but for a service they only consume, it is usually Prod.

    For module developers, this function can be used to set the default configuration for the module. In general, that default turns out to be Prod. If you’re not the developer of a service, and you’re going to use a PowerShell module to interact with that service, you generally want to point it at Prod. This is the most common use case, and module developers usually set up module defaults for the most common use case.

    For service developers that use the module within their services, this command is flexible enough for them to determine what environment their service is running in and set up the module to connect to the correct endpoints.
  • Save-{ModuleName}DefaultConfig

    This is mostly used by developers.

    Once you have the environment set up the way you want it, use the Save function to save the configuration locally to disk. We have had success saving this file under the user’s local folder (right next to their profile), so the settings are not machine-wide, but user-specific.

  • Restore-{ModuleName}DefaultConfig

    This function usually isn’t called by developers / end users.

    This function is called when the module loads and it will check if the user has a local configuration file. If it finds one, it will load the values into memory.

    Services usually don’t have a local configuration file.
  • Test-{ModuleName}Configured

    This function usually won't be called by the end user. It's used internally to determine whether all the important properties are set up before saving them to disk.
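
As a quick sketch of the day-to-day usage, here is what that might look like for a hypothetical module named Widget (the module name is just a placeholder):

    # Point the Widget module at the environment you use most often.
    Set-WidgetConfig -Environment Local

    # Persist that choice to a per-user settings file on disk.
    Save-WidgetDefaultConfig

    # In a later session, importing the module runs Restore-WidgetDefaultConfig,
    # which reads the saved file and configures the module automatically.
    Import-Module Widget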

To get people to adopt this strategy, you have to make it easy for module developers to add the functionality to their modules. To do that, there’s one more function:

  • Add-DefaultConfigToModule -ModuleName <ModuleName> -Path <Path>

    This will add 4 templated files to a module, one for each function. It will also update the .psm1 file to end with a call to Restore-{ModuleName}DefaultConfig.

Below is a very mashed-together version of the files for the module.

The code does assume all of the module configuration information is stored in $global:ModuleName.

And, these files are to be placed within a subdirectory of the DefaultConfig module called /resources/AddTemplate:
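
To keep things readable here, what follows is a trimmed-down sketch of how those four templated functions could look for the same hypothetical Widget module. It assumes the configuration lives in $global:Widget and is saved as JSON right next to the user’s PowerShell profile; the module name, endpoints, and file name are all placeholders.

    function Set-WidgetConfig {
        [CmdletBinding()]
        param (
            [Parameter(Mandatory)]
            [ValidateSet('Prod', 'Test', 'Dev', 'Local')]
            [string] $Environment
        )

        # Map each environment to its service endpoint; these URLs are placeholders.
        $endpoints = @{
            Prod  = 'https://widget.example.com'
            Test  = 'https://widget-test.example.com'
            Dev   = 'https://widget-dev.example.com'
            Local = 'http://localhost:5000'
        }

        $global:Widget = @{
            Environment = $Environment
            ServiceUrl  = $endpoints[$Environment]
        }
    }

    function Test-WidgetConfigured {
        # Returns $true only when all of the important properties are populated.
        return ($null -ne $global:Widget) -and
               (-not [string]::IsNullOrWhiteSpace($global:Widget.Environment)) -and
               (-not [string]::IsNullOrWhiteSpace($global:Widget.ServiceUrl))
    }

    function Save-WidgetDefaultConfig {
        if (-not (Test-WidgetConfigured)) {
            throw 'Widget is not fully configured; run Set-WidgetConfig first.'
        }

        # Save next to the user's profile so the settings are user-specific, not machine-wide.
        $path = Join-Path (Split-Path $PROFILE) 'Widget.DefaultConfig.json'
        $global:Widget | ConvertTo-Json | Set-Content -Path $path
    }

    function Restore-WidgetDefaultConfig {
        # Called from the end of the .psm1 when the module loads.
        $path = Join-Path (Split-Path $PROFILE) 'Widget.DefaultConfig.json'

        if (Test-Path $path) {
            # A developer saved a local default; use it.
            $global:Widget = Get-Content -Path $path -Raw | ConvertFrom-Json
        }
        else {
            # No local file (the usual case on servers); fall back to the module default.
            Set-WidgetConfig -Environment Prod
        }
    }

With the .psm1 ending in a call to Restore-WidgetDefaultConfig, a developer’s saved local configuration wins whenever it exists, and everything else falls back to the Prod default.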

Submitting a Bug/Pull Request for Asp.Net Core

on Monday, November 18, 2019

It’s great that the Asp.Net Core team has made the project open source. It makes finding bugs, resolving them, and submitting updates/pull requests tremendously more satisfying than when you would call Microsoft support, report your bug, and hope the product team considered your problem high enough priority to justify their time.

Last week I put together a post on a NullReferenceException within the Microsoft.AspNetCore.Http.ItemsCollection object. I finished the post by filing a bug with the AspNetCore team (#16938). The next step was to download the source code for Asp.Net Core and create a fix for it.

So, the standard forking of the repository and creating a new branch to hold the fix was easy enough, but now came the interesting part: submitting the pull request and receiving their feedback. The first thing that was pretty amazing was that I submitted the fix around 10 AM on Saturday morning (Pull Request #16947), and by noon on a Saturday someone from the Asp.Net Core team had reviewed the pull request and already found improvements to the code. The best part of the review is that they are very talented programmers who found multiple ways to improve the unit test I submitted, and they even found other errors in the original file being updated. They didn’t just review the code, they looked for ways to improve the overall product.

The fix I found for the NullReferenceException was to use a null-conditional operator to ensure that the exception didn’t occur. But they searched the entire class for every place this might occur and suggested where else a null-conditional operator could be applied to prevent future issues. They are detailed.

The parts of my pull request they had the most feedback on were the unit tests, and the suggestions were very useful for simplifying the code and getting at the core of what was being tested. When running the unit tests on my local machine, I could tell that they really focused on how to make the unit tests as fast and efficient as possible. The dotnet test runner could run the 169 unit tests in the Microsoft.AspNetCore.Http library in under 4 seconds. For comparison, I’ve mostly been working with PowerShell unit tests for a while, and loading up the PowerShell runtime and the Pester module, before even running the tests, usually takes a good ~5 seconds.

Overall it was a pretty easy process and they accepted the update for inclusion in the Asp.Net Core 5.0 preview1 release. Now, for getting the fix merged into Asp.Net Core 3.X, that was a little more difficult (#17068).

NullReferenceException in Http.ItemsCollection

on Monday, November 11, 2019

The other day a coworker and I were adding the ElmahCore/ElmahCore library to an AspNetCore web project and we ran into a strange NullReferenceException when using it. The problem didn’t occur when the library was used in an AspNetCore 2.2 project, and this was the first time trying it on an AspNetCore 3.0 project. So, we wanted to believe it was something to do with 3.0, in which case there would probably be other people running into the issue and Microsoft would have a fix ready for it very soon. But we needed to move forward on the project, so we wanted to find a workaround in the meantime.

Personally, I found a bit of humor in this Exception, because at first glance it looked like it was occurring within an Exception Management system (Elmah). I mean, the one thing an Exception Management system definitely doesn’t want to do is throw Exceptions. However, since I was already troubleshooting an Exception problem that seemed kind of funny, I leaned into the ridiculousness and decided to debug the Exception Management system with another Exception Management system, Stackify Prefix ( … it’s more of an APM, but it’s close enough).

The truly laugh out loud moment came when I hooked up Stackify Prefix, and it fixed the Elmah exception. Elmah started working perfectly normally once Stackify Prefix was setup in the application. So … one Exception Management system fixed another Exception Management system. (Or, at least, that’s how it felt.)

Of course, I needed to figure out what Stackify did to fix Elmah, so I needed to pull Stackify back out of the application and really get into Elmah. That meant grabbing a clone of ElmahCore to work with.

Even before having the source code available, I had the function name and stack trace of the Exception, so once the code was in hand, zeroing in on the problem was pretty straightforward. But what I found was unexpected.

As best I could tell, the problematic code wasn’t in Elmah, but was caused by an underlying change inside Microsoft.AspNetCore.Http.ItemsDictionary. It seemed like an internal field, _items, was null within the ItemsDictionary object, and when the enumerator for the collection was used, a NullReferenceException was being generated.

(More detailed information at ElmahCore Issue #51)

It seemed like the workaround for the issue was to populate the request.Items collection with a value, in order to ensure the internal _items collection was not null. To double check this, I loaded Stackify Prefix back up and checked what the value of the collection was when Stackify was involved. Sure enough, Stackify had populated the dictionary with a correlation id; and that’s why it fixed the issue.

For ElmahCore, I implemented a really bad code fix and created a pull request for them. It is a really bad fix because the solution triggers the exception, catches it, and then swallows it, which lets the Elmah code continue to execute successfully. But any .NET Profiler that ties into the really low-level .NET Profiler APIs (these are the APIs which power APM solutions like Stackify Retrace, AppDynamics, etc.) will record that exception and report it as if it’s an exception occurring within your application.

At this point, the next steps are how to inform Microsoft of the problem and get a bug fix/pull request going for them. I’ve opened the bug (AspNetCore Issue #16938), but I will need to get the code compiling on my machine before submitting a pull request.

Performance Gains by Caching Google JWT kids

on Monday, November 4, 2019

To start with, a “kid” is a key id within a JSON Web Key Set (JWKS). Within the OpenID Connect protocol (which is kind of like an OAuth2 extension), Authentication Services can ensure the data integrity of their JWT tokens by signing them, and they sign the tokens with a private certificate. The token signature can then be verified using a public certificate; this is somewhat similar to SSL certificates for websites over https. With https, the public certificate is immediately given to the browser as soon as you navigate to the website. But for JWT tokens, your application has to go “look up” the certificate. And that’s where OpenID Connect comes in.

OpenID Connect’s goal wasn’t to standardize JWT tokens or the certificate signing; that was a secondary feature for them. However, the secondary feature was pretty well done, and OAuth2 enthusiasts adopted that part of the protocol while leaving the rest of it alone. OpenID Connect specified that there should be a “well known” endpoint where any system could look up common configuration information about an Authentication Service, for example:

https://accounts.google.com/.well-known/openid-configuration

One of the standard values is `jwks_uri`, which is the link to the JSON Web Key Set. In this case:

https://www.googleapis.com/oauth2/v3/certs

At that endpoint, each key’s public material is carried in the `n` value (the RSA modulus, along with the `e` exponent), and the `kid` is the key to look up which signing certificate to use. So, that’s what kids are; they’re the ids of signing certificates & algorithms.
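
If you want to poke at those documents yourself, a couple of lines of PowerShell will do it; Invoke-RestMethod parses the JSON responses into objects:

    # Grab Google's OpenID Connect discovery document.
    $discovery = Invoke-RestMethod 'https://accounts.google.com/.well-known/openid-configuration'

    # Follow jwks_uri to the JSON Web Key Set and list the key ids (kids).
    $jwks = Invoke-RestMethod $discovery.jwks_uri
    $jwks.keys | Select-Object kid, alg, use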

So, where does the performance gain come in?

The performance gain for these publicly available certificates is that they can be cached on your application servers. If your application is going to use Google OAuth for authentication, and use JWT tokens to pass the user information around, then you can verify the token signatures using cached certificates. This puts all the authentication overhead on your application server and not in a synchronous callback to an Authentication Service.
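
The caching itself doesn’t need to be fancy; it just has to avoid re-downloading the key set on every request. Here is a rough sketch of the idea in PowerShell (the function name, the one-hour lifetime, and the script-scoped cache variables are all just illustrative choices, not a prescribed implementation):

    # Cache the downloaded key set in script scope so repeated calls skip the HTTP round trips.
    $script:CachedJwks        = $null
    $script:CachedJwksExpires = [DateTime]::MinValue

    function Get-GoogleSigningKeys {
        if ($script:CachedJwks -and ([DateTime]::UtcNow -lt $script:CachedJwksExpires)) {
            # Cache hit: no calls out to Google, so none of the ~500 ms overhead.
            return $script:CachedJwks
        }

        # Cache miss: pay the lookup cost once, then reuse the result until it expires.
        $discovery = Invoke-RestMethod 'https://accounts.google.com/.well-known/openid-configuration'
        $script:CachedJwks        = Invoke-RestMethod $discovery.jwks_uri
        $script:CachedJwksExpires = [DateTime]::UtcNow.AddHours(1)

        return $script:CachedJwks
    }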

But, there is a small performance penalty in the first call to retrieve the JWKS.

What’s the first call performance penalty look like?

Not much, about 500 ms. But, here’s what it looks like with an actual example.

First call that includes the JWT Token:

  • It reaches out to https://accounts.google.com/.well-known/openid-configuration which has configuration information
  • The configuration information indicates where to get the “kids”: https://www.googleapis.com/oauth2/v3/certs
  • It downloads the JWKS and caches them
  • Then it performs validation against the JWT token (my token was expired in all of the screenshots, which is why there are “bugs” indicated)
  • Processing time: 582 ms
  • Processing time overhead for JWT: about 500 ms (in the list on the left side, the request just before it was the same request with no JWT token; it took about 99 ms)



(info was larger than one screenshot could capture)

Second call with JWT:

  • The caching worked as expected and the calls to Google didn’t occur.
  • Processing Time: 102 ms
  • So, the 500 ms overhead of the Google calls doesn’t happen when the caching is working.


(info was larger than one screenshot could capture)

Test Setup:

  • The first call is the application load time. This also included an Entity Framework first-load penalty when it called a database to verify whether I had permissions to view the requested record.
    • Processing Time: 4851 ms
    • This call did not include the JWT.
  • The second call was to baseline the call without a JWT.
    • Processing Time: 96 ms
  • The third call was to verify the baseline without a JWT.
    • Processing Time: 99 ms

Wrap Up …

So, it isn’t much of a performance gain. But, it’s enough to make caching the value and keeping all the authentication logic on your application server worthwhile.

(Ohh … and Stackify Prefix is pretty awesome! If you haven’t tried it, you should: https://stackify.com/prefix-download/)

