Use PowerShell to Process Dump an IIS w3wp Process

on Monday, August 27, 2018

Sometimes processes go wild and you would like to collect information on them before killing or restarting them. And the collection process generally goes through:

  • Your custom-made logging
  • Open source logging: Elmah, log4net, etc.
  • Built-in logging on the platform (like AppInsights)
  • Event Viewer logs
  • Log aggregators: Splunk, New Relic, etc.
  • and, almost always last on the list, a Process Dump

Process dumps are old enough that they are very well documented, but obscure enough that very few people know how or when to use them. I certainly don’t! But, when you’re really confused about why an issue is occurring a process dump may be the only way to really figure out what was going on inside of a system.

Unfortunately, they are so rarely used that it’s often difficult to re-learn how to get a process dump when an actual problem is occurring. Windows tried to make things easier by adding Create dump file as an option in the Task Manager.

[Image: the "Create dump file" option in Task Manager]

But, logging onto a server to debug a problem is becoming a less frequent occurrence. With cloud systems, the first debugging technique is to just delete the VM/Container/App Service and create a new instance. And on-premise web farms are often interacted with only through scripting commands.

So here’s another one: New-WebProcDump

This command will take in a ServerName and Url and attempt to take a process dump and put it in a shared location. It does require a number of prerequisites to work:

  • The PowerShell command must be in a folder with a subfolder named Resources that contains procdump.exe.
  • Your web servers are using IIS and ASP.NET Full Framework
  • The computer running the command has a D drive
    • The D drive has a Temp folder (D:\Temp)
  • Remote computers (i.e. web servers) have a C:\IT\Temp folder.
  • You have PowerShell Remoting (i.e. winrm quickconfig -force) turned on for all the computers in your domain/network (a quick setup sketch follows this list).
  • The application pools on the web server must have names that match up with the URL of the site. For example, https://unittest.some.company.com should have an application pool of unittest.some.company.com. A second example: https://unittest.some.company.com/subsitea/ should have an application pool of unittest.some.company.com_subsitea.
  • Probably a bunch more that I’m forgetting.
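If the folders and remoting aren't already in place, most of that list can be knocked out once up front. This is only a sketch; the server names are placeholders and the paths are the ones listed above.

```powershell
# One-time setup sketch for the prerequisites above (server names are placeholders).
$webServers = 'WEB01', 'WEB02'

# Local staging folder on the machine that runs the command.
New-Item -ItemType Directory -Path 'D:\Temp' -Force | Out-Null

# Working folder on each web server, created over the admin share.
foreach ($server in $webServers) {
    New-Item -ItemType Directory -Path "\\$server\C$\IT\Temp" -Force | Out-Null
}

# And on each web server itself, enable PowerShell Remoting:
#   winrm quickconfig -force
# or, equivalently:
#   Enable-PSRemoting -Force
```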

So, here are the scripts that make it work:

  • WebAdmin.New-WebProcDump.ps1

    Takes a procdump of the w3wp process associated with a given URL (either locally or remotely). Transfers the process dump to a communal shared location for retrieval.
  • WebAdmin.Test-WebAppExists.ps1

    Checks if an application pool exists on a remote server.
  • WebAdmin.Test-IsLocalComputerName.ps1

    Tests if the command will need to run locally or remotely.
  • WebAdmin.ConvertTo-UrlBasedAppPoolName.ps1

    The name kind of covers it: it converts a site URL into the matching application pool name. For example, https://unittest.some.company.com maps to an application pool of unittest.some.company.com, and https://unittest.some.company.com/subsitea/ maps to unittest.some.company.com_subsitea.
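I won't reproduce the full scripts here, but a rough sketch of the two core pieces, the URL-to-app-pool naming convention and the remote procdump call, looks something like the following. The dump file name, the $ShareRoot default, and the exact process matching are illustrative assumptions, not the actual WebAdmin implementation.

```powershell
# Rough sketch only; paths, the communal share, and the dump file name are placeholders.

function ConvertTo-UrlBasedAppPoolName {
    param([Parameter(Mandatory)][string]$Url)

    # https://unittest.some.company.com           -> unittest.some.company.com
    # https://unittest.some.company.com/subsitea/ -> unittest.some.company.com_subsitea
    $uri  = [Uri]$Url
    $path = $uri.AbsolutePath.Trim('/')
    if ($path) { return "$($uri.Host)_$($path -replace '/', '_')" }
    return $uri.Host
}

function New-WebProcDump {
    param(
        [Parameter(Mandatory)][string]$ServerName,
        [Parameter(Mandatory)][string]$Url,
        [string]$ShareRoot = '\\fileshare\ProcDumps'   # hypothetical communal share
    )

    $appPool = ConvertTo-UrlBasedAppPoolName -Url $Url

    # Stage procdump.exe from the Resources subfolder onto the remote server.
    Copy-Item "$PSScriptRoot\Resources\procdump.exe" "\\$ServerName\C$\IT\Temp\procdump.exe" -Force

    # Find the w3wp.exe worker process that was started for the matching application
    # pool (its command line contains -ap "<pool name>") and take a full memory dump.
    Invoke-Command -ComputerName $ServerName -ArgumentList $appPool -ScriptBlock {
        param($PoolName)
        $w3wp = Get-CimInstance Win32_Process -Filter "Name = 'w3wp.exe'" |
            Where-Object { $_.CommandLine -match [regex]::Escape("-ap `"$PoolName`"") } |
            Select-Object -First 1
        if ($w3wp) {
            & 'C:\IT\Temp\procdump.exe' -accepteula -ma $w3wp.ProcessId "C:\IT\Temp\$PoolName.dmp"
        }
    }

    # Pull the dump back to the communal share for retrieval.
    Move-Item "\\$ServerName\C$\IT\Temp\$appPool.dmp" "$ShareRoot\$ServerName-$appPool.dmp" -Force
}

# Example usage:
# New-WebProcDump -ServerName 'WEB01' -Url 'https://unittest.some.company.com/subsitea/'
```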


Apigee REST Management API with MFA

on Monday, August 20, 2018

Not too long ago, Apigee updated their documentation to show that Basic Authentication was going to be deprecated on their Management API. This wasn't really a big deal, and it isn't very difficult to implement an OAuth 2.0 machine-to-machine (grant_type=password) authentication system. Apigee has documentation on how to use their updated version of curl (i.e. acurl) to make the calls. But if you read through a generic explanation of using OAuth, it's pretty straightforward.

But what about using MFA One Time Password (OTP) tokens with OAuth authentication? Apigee supports using Google Authenticator for OTP tokens when signing in through the portal. And … much to my surprise … they also support OTP tokens in their Management API OAuth login. They call the parameter mfa_token.

This will sound crazy, but we wanted to set up MFA on an account that is used by a bot/script. Since the bot is only run from a secure location, and the username/password are already securely stored outside of the bot, there is really no reason to add MFA to the account login process. It already meets all the criteria for being securely managed. But, on the other hand, why not see if it's possible?

The only thing left that needed to be figured out was how to generate the One Time Password used by the mfa_token parameter. And, the internet had already done that! (Thank You James Nelson!) All that was left to do was find the Shared Secret Key that the OTP function needed.

Luckily, I work with someone knowledgeable on the subject, and they pointed out not only that the OTP algorithm Google Authenticator uses is available on the internet, but also that Apigee's MFA sign-up screen shows the Shared Secret Key right on the page. (Thank You Kevin Wu!)

When setting up Google Authenticator in Apigee, click on the "Unable to Scan Barcode?" link:

[Image: Google Authenticator setup in Apigee with the "Unable to Scan Barcode?" link]

Which reveals the OTP Shared Secret:

[Image: the OTP Shared Secret shown on the setup screen]

From there, you just need a little PowerShell to tie it all together:
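What follows is a minimal sketch of that glue, not the referenced script: a standard RFC 6238 TOTP function (the same algorithm Google Authenticator uses) plus the token request. The $sharedSecret, $username, and $password variables are placeholders you supply; the login endpoint and the Basic header value are the ones Apigee documents for its acurl/get_token tooling.

```powershell
function Get-OtpCode {
    param([Parameter(Mandatory)][string]$SharedSecret)

    # Base32 decode the shared secret (RFC 4648 alphabet; spaces and padding ignored).
    $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
    $clean = $SharedSecret.ToUpper().Replace(' ', '').TrimEnd('=')
    $bits = ($clean.ToCharArray() | ForEach-Object {
        [Convert]::ToString($alphabet.IndexOf($_), 2).PadLeft(5, '0')
    }) -join ''
    $keyBytes = [byte[]]@(for ($i = 0; $i + 8 -le $bits.Length; $i += 8) {
        [Convert]::ToByte($bits.Substring($i, 8), 2)
    })

    # TOTP (RFC 6238): HMAC-SHA1 over the 30-second counter as an 8-byte big-endian value.
    $counter = [long][math]::Floor([DateTimeOffset]::UtcNow.ToUnixTimeSeconds() / 30)
    $counterBytes = [BitConverter]::GetBytes($counter)
    [Array]::Reverse($counterBytes)
    $hmac = [System.Security.Cryptography.HMACSHA1]::new($keyBytes)
    $hash = $hmac.ComputeHash($counterBytes)

    # Dynamic truncation (RFC 4226) down to a 6 digit code.
    $offset = $hash[$hash.Length - 1] -band 0x0F
    $code = (($hash[$offset] -band 0x7F) -shl 24) -bor (($hash[$offset + 1] -band 0xFF) -shl 16) -bor (($hash[$offset + 2] -band 0xFF) -shl 8) -bor ($hash[$offset + 3] -band 0xFF)
    return ($code % 1000000).ToString('000000')
}

# Exchange username/password plus the OTP for a Management API access token.
$otp = Get-OtpCode -SharedSecret $sharedSecret   # $sharedSecret, $username, $password supplied elsewhere
$response = Invoke-RestMethod -Method Post -Uri 'https://login.apigee.com/oauth/token' `
    -Headers @{ Authorization = 'Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0'; Accept = 'application/json' } `
    -ContentType 'application/x-www-form-urlencoded' `
    -Body @{ username = $username; password = $password; grant_type = 'password'; mfa_token = $otp }

$accessToken = $response.access_token
```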

System.Configuration.ConfigurationManager in Core

on Monday, August 13, 2018

The .NET Core (corefx) issue, System.Configuration Namespace in .Net Core, ends with the question:

@weshaggard Can you clarify the expectations here for System.Configuration usage?

I was recently converting a .NET Full Framework library over to a .NET Standard library and ran into the exact problem in that issue, and I also got stuck trying to figure out “When and How are you supposed to use System.Configuration.ConfigurationManager?”

I ended up with the answer:

If at all possible, you shouldn't use it. It's a facade/shim that only works with the .NET Full Framework. Its exact purpose is to allow .NET Standard libraries to compile; but it doesn't work unless the runtime is .NET Full Framework. In order to properly write code using it in a .NET Standard library you will have to use compiler directives to ensure that it doesn't get executed in a .NET Core runtime. Its scope, purpose, and usage are very limited.

In a .NET Standard library if you want to use configuration information you need to plan for two different configuration systems.

  • .NET Full Framework Configuration

    Uses ConfigurationManager from the System.Configuration dll installed with the framework. This is the familiar ConfigurationManager.AppSettings[string] and ConfigurationManager.ConnectionStrings[string]. It's a unified model in .NET Full Framework and works across all application types: Web, Console, WPF, etc.

  • .NET Core Configuration

    Uses ConfigurationBuilder from Microsoft.Extensions.Configuration. And, really, it expects ConfigurationBuilder to be used in an ASP.NET Core website. And this is the real big issue: the .NET Core team focused almost solely on ASP.NET Core, and other target platforms really got pushed to the side. Because of this, it expects configuration to be done through the ASP.NET Core configuration system at Startup.

And, for now, I can only see two reasonable ways to implement this:

  • A single .NET Standard Library that uses compiler directives to determine when to use ConfigurationManager vs a ConfigurationBuilder tie-in.

    This would use the System.Configuration.ConfigurationManager nuget package.

    Pros:
    - Single library with a single nuget package
    - Single namespace

    Cons:
    - You would need a single “Unified Configuration Manager” class which would have #ifdef statements throughout it to determine which configuration system to use.
    - If you did need to reference either the .NET Full Framework or .NET Core Framework the code base would become much more complicated.
    - Unit tests would also need compiler directives to handle differences of running under different Frameworks.
  • A common shared project used in two libraries each targeting the different frameworks.

    This would not use the System.Configuration.ConfigurationManager nuget package.

    This is how the AspNet API Versioning project has handled the situation.

    Pros:
    - The two top-level libraries can target the exact framework they are intended to be used with. They would have access to the full API set of each framework and would not need to use any shims/facades.
    - The usage of #ifdef statements would be uniform across the files as it would only need to be used to select the correct namespace and using statements.
    - The code would read better, as all framework-specific logic would be abstracted out of the shared code using extension methods.

    Cons:
    - You would create multiple libraries and multiple nuget packages. This can create headaches and confusion for downstream developers.
    - Unit tests would (most likely) also use multiple libraries, each targeting the correct framework.
    - Requires slightly more overhead to ensure libraries are versioned together and assembly directives are set up in a shared way.
    - The build system would need to handle creating multiple nuget packages.

Apigee TimeTaken AssignVariable vs JS Policy

on Monday, August 6, 2018

Apigee's API Gateway is built on top of a Java code base, and all of the policies built into the system are pre-compiled Java policies. So, the built-in policies have pretty good performance, since they are only reading in some cached configuration information and executing natively in the runtime.

Unfortunately, these policies come with two big drawbacks:

  • In order to do some common tasks (like "if x, then do y and z") you usually have to use multiple predefined policies chained together. And those predefined policies are all configured in verbose and cumbersome XML definitions.
  • Also, there’s no way to create predefined policies that cover every possible scenario. So, developers will need a way to do things that the original designers never imagined.

For those reasons, there are Javascript Policies which can do anything that javascript can do.

The big drawback with Javascript policies:

  • The system has to instantiate a Javascript engine, populate its environment information, run the javascript file, and return the results back to the runtime. This takes time.

So, I was curious: how much more time does it take to use a Javascript Policy vs an Assign Message Policy for a very simple task?

It turns out the difference in timing is relatively significant but overall unimportant.

The test used in the comparison checks if a query string parameter exists, and if it does, writes it to a header parameter. If the header parameter already existed in the first place, it skips all of this.

Here are the pseudo-statistical results:

  • Average Time Taken (non-scientific measurements, best described as "it's about this long"):
    • Javascript Policy: ~330,000 nanoseconds (0.33 milliseconds)
    • Assign Message Policy: ~50,000 nanoseconds (0.05 milliseconds)
  • What you can take away
    • A Javascript Policy is roughly 6.5x slower; put another way, Javascript adds about 280,000 nanoseconds of overhead for creation, processing and resolution.
    • Both Policies take less than 0.5 ms. While the slower performance is relatively significant, in the larger scheme of things they are both fast.

Javascript Policy

[Image: Javascript Policy timing results]

Assign Message Policy

[Image: Assign Message Policy timing results]

