XProj/SDK … or use `<OutDir>`

on Monday, August 31, 2020

In the previous post, Use XProj/SDK project files with ASP.NET Web Apps, I described creating a custom AfterBuild Target which would copy the files from the standard `/bin/{configuration}/{architecture}` dll output location to the standard `/bin` webapp output location.

A coworker pointed out that the output location can be controlled with the `<OutDir>` build property, which is way easier to use. So, here’s an updated .csproj file:
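(What follows is a sketch of the idea rather than the post’s exact file; the net48 target framework is an assumption.)

```xml
<!-- Sketch: send the build output straight to the webapp's /bin folder -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
    <!-- OutDir overrides the default /bin/{Configuration}/net48 output location -->
    <OutDir>bin\</OutDir>
  </PropertyGroup>
</Project>
```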

Use XProj/SDK project files with ASP.NET Web Apps

on Monday, August 24, 2020

The new SDK style project files (sometimes called VS2017 project files, and at one point referred to as xproj files) were not designed to work with ASP.NET Web Application projects. If you’re looking to use the newer SDK project files, then Microsoft is hoping you would use them with ASP.NET Core web apps. However, the SDK project format is so much easier to work with than the older style that it’s painful to go back to the old files and their associated packages.config files once you’ve moved to the new style.

So, if you were to convert a .NET 4.8 Web App’s .csproj file to an SDK style project file, what problems would occur?

  • You can’t target a webapp as an output type with the SDK style project file. The closest you have is the ability to target framework net48 with a library/dll output type (the default type).
  • I think that might be it?

How do you overcome that challenge:

  • If your output type is a library/dll and you set your targetFramework to net48, then you will create an output directory at /bin/{Debug|Release|Xxxx}/net48 which contains all the dlls and other references that would have normally gone into the web app’s /bin folder. So, you are producing the files that you need.
  • You just need to copy those files into the root /bin folder for IIS/IIS Express to run the website normally. To do that you can add a “CopyToBin” Target to your .csproj file. This target will run after the build completes.
  • After that, you will want to directly modify the .csproj file to associate files that are commonly grouped together, such as Web.*.config files.

Here is an example:
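(A sketch along those lines; the Web.*.config names and the copy pattern are assumptions rather than the post’s exact file.)

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>

  <!-- Group the transform configs under Web.config, like the old project format did -->
  <ItemGroup>
    <None Update="Web.Debug.config">
      <DependentUpon>Web.config</DependentUpon>
    </None>
    <None Update="Web.Release.config">
      <DependentUpon>Web.config</DependentUpon>
    </None>
  </ItemGroup>

  <!-- Copy the /bin/{Configuration}/net48 output up into /bin for IIS/IIS Express -->
  <Target Name="CopyToBin" AfterTargets="Build">
    <ItemGroup>
      <OutputFiles Include="$(OutputPath)**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(OutputFiles)" DestinationFolder="bin\%(RecursiveDir)" />
  </Target>
</Project>
```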

Unfortunately, doing this only helps make things work on your local machine. It won’t really help your build process. If you use a third party tool to do builds for you, you’ll need to create a custom script which runs after your build completes, but before the results are packaged for deployment. This would need to be a custom solution for your environment. But, the basic outline would look something like this:

  • Have your build system check that the .csproj file is (a) building a “web app” (however you define that), (b) a net4X application, and (c) using an SDK style csproj file.

    With that many checks needed before performing an action, you know this isn’t a great idea.
  • Once verified, you’ll want to copy all the normal content files from the source code to a designated output location (css, js, imgs, views?) and then recreate the /bin directory using the output from the build.

Microsoft.Extensions.DependencyInjection - ASP.NET

on Monday, August 17, 2020

There is a fantastic Stackoverflow answer on how to use Microsoft.Extensions.DependencyInjection inside of a WebAPI 2 project (ASP.NET Full Framework). While it’s not cutting edge, it is a good middle ground solution when rewriting an entire ASP.NET application to ASP.NET Core seems out of the realm of possibility.

I took the code snippet and broke it apart a little bit to create a reusable project to house it. It’s not great, so I don’t really think it’s worth creating a github repo or a nuget package, but if you want to drop it into a project in your code base it could help out.

Here’s an example usage in a .NET 4.8 ASP.NET MVC / WebApi 2 based project:

And, it relies on a DependencyInjection.AspNet.WebApi library, which is targeting framework net48 (here’s the .csproj):

And, here’s the original stackoverflow posts code, just slightly modified:
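(What follows is a sketch in the spirit of that answer rather than a verbatim copy of it; the wire-up comment at the bottom uses a hypothetical ValuesController as an example.)

```csharp
using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;
using Microsoft.Extensions.DependencyInjection;

// A resolver in the spirit of that answer: wrap an IServiceProvider so WebApi 2
// can resolve controllers and services from Microsoft.Extensions.DependencyInjection.
public class DefaultDependencyResolver : IDependencyResolver
{
    private readonly IServiceProvider _provider;
    private readonly IServiceScope _scope;

    public DefaultDependencyResolver(IServiceProvider provider)
    {
        _provider = provider;
    }

    private DefaultDependencyResolver(IServiceScope scope)
    {
        _scope = scope;
        _provider = scope.ServiceProvider;
    }

    // WebApi calls BeginScope once per request; map that to an MEDI scope
    public IDependencyScope BeginScope()
        => new DefaultDependencyResolver(_provider.CreateScope());

    public object GetService(Type serviceType)
        => _provider.GetService(serviceType);

    public IEnumerable<object> GetServices(Type serviceType)
        => _provider.GetServices(serviceType);

    public void Dispose() => _scope?.Dispose();
}

// Wire-up (e.g. in WebApiConfig.Register):
// var services = new ServiceCollection();
// services.AddTransient<ValuesController>();   // register your controllers and services
// config.DependencyResolver = new DefaultDependencyResolver(services.BuildServiceProvider());
```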

Powershell Range Operator Performance

on Monday, August 10, 2020

This is a truly silly experiment, but it caught my interest. I was discussing Iron Scripter Challenges with thedavecarroll and he was using switch statements with range operators (PSGibberish.psm1):
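For context, a switch built on range operators looks something like this (a simplified sketch, not the actual PSGibberish.psm1 code):

```powershell
# Simplified sketch of a switch that uses range operators
function Get-CharacterClass {
    param([int]$Value)

    switch ($Value) {
        { $_ -in 1..25 }  { 'consonant'; break }
        { $_ -in 26..30 } { 'vowel'; break }
        { $_ -in 31..40 } { 'digit'; break }
        default           { 'other' }
    }
}

Get-CharacterClass -Value 27   # vowel
```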

What struck me as odd was the idea that the range operators might be calculating each of their ranges at runtime, on each execution of the function.

So, I ran a couple of experiments and the range operators are pretty neat. Here’s what I think (with no real definitive proof to support) is happening with them:

  • Range Operators used within Switch statements that are contained within Functions are Cached.
    • It seems like when the function is JIT’d, the Range Operator value is calculated and Cached.
    • So, there’s no reason to pre-calculate the values and reference them within the function.
    • And, if you do reference variables from outside the function, looking up variables that require a scope lookup can also be time consuming. (Although, performance isn’t why people turn to powershell in the first place.)
  • Range Operators used within a Switch statement outside of a Function are not cached (like a code block).

To determine this, I ran a series of tests against a function which focused on executing the switch statement that used range operators:
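(A reconstruction of the shape of the test rather than the original script; the ranges and iteration count are made up.)

```powershell
# Reconstruction: a function whose only real work is the range-operator switch
function Test-RangeOperator {
    param([int]$Value)

    switch ($Value) {
        { $_ -in 1..10000 }     { 'low';  break }
        { $_ -in 10001..20000 } { 'high'; break }
        default                 { 'other' }
    }
}

# Time many executions and average the per-call cost in ticks
$iterations = 10000
$elapsed = Measure-Command {
    for ($i = 0; $i -lt $iterations; $i++) { $a = Test-RangeOperator -Value 15000 }
}
'Avg ticks per call: {0:N2}' -f ($elapsed.Ticks / $iterations)
```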

To determine how much time was spent making the function call and setting the $a variable, this function was used. This is noted as “Calling a Function Overhead”.

Switch Avg Execution Time = Total Avg Execution Time – Calling a Function Overhead

The results were:

The results indicate that both the Range Operator when run inside of a Function, and the Explicitly Scoped Cached Values have about the same running time. Which might indicate that when the function is JIT’d, it calculates the Range Operator values and caches them.

The large increase in running time between Range Operator and Cached Values not in Func might indicate that searching for variables outside of the function scope has a relatively costly penalty by comparison.

And, finally, the Range Operator that was run outside of a Function was most likely calculated on each execution. While relatively expensive, it’s surprisingly fast. .NET uses 10,000 ticks per millisecond, so that’s ~0.19 milliseconds for compilation and execution.

Full Test Script:

Diagnosing Slow node builds on Win2016

on Monday, August 3, 2020

In a move from a Windows 2012 R2 build server to a Windows 2016 build server, the nodejs build step nearly doubled in its execution time. This seemed odd, since everything else was pretty much the same on the new server. So, what could the difference be?

Fortunately, a coworker pointed me towards the Windows Performance Recorder from Microsoft’s Assessment and Deployment Kit (Windows ADK). This worked really well in troubleshooting the issue, and I just wanted to drop in some screen grabs to show its visualizations.

The build was on-premise, so I did have access to install the kit** and control the execution of the Windows Performance Recorder to coincide with the execution of the problematic step. This would have been much more difficult on a hosted build server.

Getting the visualization comes by way of a two-step process.

  • First, Windows Performance Recorder is used to track analysis information from all over your system while the issue is occurring. You can track different profiles, or record more detailed information in particular areas through manual configuration.
  • Once the problem has been recorded, the analysis information can then be pulled up in Windows Performance Analyzer, which has a pretty nice interface.

First, here’s a screenshot of Windows Performance Analyzer from the “dotnet publish” (ie. npm init/build) step on the older Windows 2012 R2 server. In the screenshot, the step started by running node.exe and performing the init command. Which would copy over the npm packages from the local npm-cache. This would take about 60 seconds to complete.

However, when performing the same build/same step on the new Windows Server 2016 instance, node.exe wasn’t the dominant process during npm’s init phase. Instead another process was dominant (greyed out in the screenshot), which ran for nearly the same length of time as node.exe and seemed to mirror the process. Because the other process was competing for CPU time with node.exe, the node process took nearly 200 seconds to complete (up from 60 seconds).

So, what was the other process?

MsMpEng.exe, aka Windows Defender, the classic anti-virus software. On the Windows Server 2016 image I was using, Windows Defender was pre-installed and doing its job.

I didn’t take a screenshot of it, but using the Disk IO dashboard I was able to drill into what files MsMpEng.exe was reading and something struck me as odd. It almost looked as if Windows Defender was virus checking each file twice: once at the source before the copy, and again at the destination after the copy. I’m not sure if that’s the case, but it did seem odd.

For the resolution, I added some Path Exclusion rules to the realtime file scanning capability of Windows Defender. These were specific paths used by the build system, and we knew those files should be coming from trusted sources. I still left on realtime process scanning and also ensured the scheduled scans were set up, which would look through all the files.
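If you want to do the same, the exclusions can be added with the Defender PowerShell cmdlets (the paths below are placeholders for your own build workspace and npm cache):

```powershell
# Paths are placeholders for your own build workspace and npm cache locations
Add-MpPreference -ExclusionPath 'D:\BuildAgent\_work'
Add-MpPreference -ExclusionPath 'C:\Users\buildsvc\AppData\Roaming\npm-cache'

# Verify the exclusions took effect
(Get-MpPreference).ExclusionPath
```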

The final result of adding the excluded paths reduced the overall time for the npm init section down to 80s (down from 200s, but still up from 60s on the old server; not bad); and MsMpEng.exe was still reporting that it was performing realtime virus scans on the process itself.

A quick sidenote: The offline installer is kind of odd. To do the offline install, you run the online installer and direct it to download its files/resources to the same directory where the online installer’s adksetup.exe is located. The next time you run adksetup.exe, it will detect that the files have already been downloaded and present a different set of options when it runs.

Best Practices should not be the end of a conversation

on Monday, July 27, 2020

Sometimes, Best Practices can be used as an end all to a conversation. No more needs to be said, because Best Practices have laid out the final statement … and that doesn’t really feel right.

Best practices weren’t always best practices. At some point a new technology came around and people started working with it to create their own practices. And the practices that worked stuck around. Over time, those practices might be written down as suggested practices for a particular technology stack. And, when coming from the authoritative source for a technology stack, they might be labeled as Best Practices.

But, usually, when I hear Best Practices used as an end all to a conversation, it’s not in reference to a particular technology stack. It’s used as a generalization, to give guidance on how to approach an area. The guidance is supposed to help people who haven’t done something before start off in the right direction. It’s supposed to be a starting point. And I think you’re supposed to continue to study the usage of those practices, to determine what the right practices are for your environment and your technology stack. Maybe even set up criteria to evaluate if a practice is working successfully in your environment. And, then change a practice if it doesn’t meet your needs.

That isn’t a trivial thing to do. You have to first understand where the practices came from and what they were accomplishing. But, once you do, you should be able to see where their limitations are and where they can be expanded. Sometimes a technology stack wasn’t available when a practice was written, and that changes the possible ways a desired outcome can be achieved. To change a practice, you have to be knowledgeable about the outcomes you’re trying to achieve, and the pitfalls that come with them; and then make a decision based on the trade-offs of moving to a new practice.

The only way to create a new practice is if Best Practices are the start of a conversation, not the end of one.

(Maybe we could also drop the word “Best”, and just make them practices?)

More Andon Cord

on Monday, July 20, 2020

I was listening to Gene Kim’s new podcast, Idealcast, interview with Dr. Steven Spear (Decoding the DNA of the Toyota Production System), and the subject of Andon Cords came up. Since I had recently written a post on Andon Cords, I was very curious if their conversation would line up with my thoughts or if it would show a different angle or new depths. The text of the conversation was (from Ep 5):


Gene Kim

The notion of the Andon Cord is that anyone on the front line would be thanked for exposing an ignorance/deficiency for trying to solve a problem that the dominant architecture or current processes didn't foresee.

Dr. Steven Spear

That's right. Basically, the way the Andon Cord works is you, Gene, have asked me, Steve, to do something and I can't. And, I'm calling it to your attention. Such that the deficiencies in my doing become a reflection of the deficiencies in your thinking / your planning / your designing. And we're going to use the deficiencies in my doing as a trigger to get together and improve on 'your thinking and my doing' or 'our thinking and our doing'.

Gene Kim

And the way that's institutionalized amplifies signals in the system to self correct.


To me, that definitely feels like a new angle or viewpoint on Andon Cords. It still feels like it aligns with the “popular definition”, but it’s clearer in its description. It seems to follow a line of thinking that “We’ve got a problem occurring right now; let’s use this as an opportunity to look at the problem together. And, then let’s think about ways to improve the process in the future.” Which feels like a more directed statement than “the capability to pause/halt any manufacturing line in order to ensure quality control and understanding.”

But, the definition Gene (Mr. Kim?) and Dr. Spear give does imply something I want to point out: Gene’s scenario is one where the person using the Andon Cord is someone directly involved in the processing line. It’s by someone who is directly on the line and seeing a problem as it’s happening. The cord isn’t being pulled by someone who wasn’t asked to be involved in the process.

I wonder if there are any texts on recognizing when an Andon Cord is being used inappropriately? Is that even a thing?

Record Request Body in ASP.NET Core 3.0–Attempt 2

on Monday, July 13, 2020

In the original post (Record Request Body in ASP.NET Core 3.0), the ITelemetryInitializer was creating some unexpected behavior. It was preventing the Operation ID associated with each request from changing/being updated. So, all the requests that were going through the system were being displayed on the same Performance graph. This created a nearly unusable performance graph as all the requests were squished together and unreadable for timing purposes.

So, I needed to remove the ITelemetryInitializer from the code. But, I still wanted to record the JsonBody. The workaround I used (which isn’t great) was to create a fake dependency on the request and record the body within the properties of the dependency.

Here’s the code:
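(A sketch of the idea; the attribute name and the dependency naming are mine, not the original post’s, and it assumes the EnableBuffering middleware from the earlier post is still in place.)

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.AspNetCore.Mvc.Filters;

// Records the request body as a property on a fake dependency, instead of using an ITelemetryInitializer
public class RecordBodyAsDependencyAttribute : ActionFilterAttribute
{
    public override async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        var request = context.HttpContext.Request;

        request.Body.Position = 0;              // relies on EnableBuffering having been called
        string body;
        using (var reader = new StreamReader(request.Body, leaveOpen: true))
        {
            body = await reader.ReadToEndAsync();
        }
        request.Body.Position = 0;

        var telemetryClient = (TelemetryClient)context.HttpContext.RequestServices
            .GetService(typeof(TelemetryClient));

        // The fake dependency is correlated to the current request's operation automatically
        var dependency = new DependencyTelemetry
        {
            Type = "RequestBody",
            Name = $"{request.Method} {request.Path}"
        };
        dependency.Properties["JsonBody"] = body;
        telemetryClient?.TrackDependency(dependency);

        await next();
    }
}
```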

Baseline C# Objects to Populate Jira DevInfo Pt. 2

on Monday, July 6, 2020

From the previous post, Baseline C# Objects to Populate Jira DevInfo Pt. 1:

Jira has this great “Development Information” (DevInfo) that can be associated with your work items. Which has an API described here. The information provided in the development tabs are for Branches, Commits, Pull Requests, Builds, Deployments and Feature Flags. Which is a way to have visibility into all the development/code activity that is related to a particular work item. It’s a great way to connect everything together.

On the previous post, there’s also a list of “gotchas” with the Jira documentation and a list of things that could be improved.

But, this post is about the baseline C# objects which can be used to push information to the Atlassian/Jira DevInfo API.

Baseline C# Objects to Populate Jira DevInfo Pt. 1

on Monday, June 29, 2020

Jira has this great “Development Information” (DevInfo) that can be associated with your work items. Which has an API described here. The information provided in the development tabs are for Branches, Commits, Pull Requests, Builds, Deployments and Feature Flags. Which is a way to have visibility into all the development/code activity that is related to a particular work item. It’s a great way to connect everything together.

It really is great, but there are some pieces of the puzzle that can be improved. For example:

  • Currently, github also associates code commits, pull requests, and build/action information into an issue’s history. But, github’s layout intermingles those changes within the history of an issue to create a timeline. This allows a reviewer of an issue to visually see when a change was made and what the context was surrounding that change. Maybe there was a compelling argument made by a team member halfway through an issue being worked on. And, that argument resulted in 5 files changing; you can see that in the github history.

    But, Jira’s history won’t show you that because the conversation history (Jira Comments) do not intermingle the code commits, pull requests, builds or deployments to create a timeline history.

    That would be a really nice improvement. And, on a personal note, it would make reviewing my coworkers’ work items a lot easier.
  • The Commits display (screenshot above) has a weird little bug in it. The Files Count Column (right side) should be able to display a count of all the files within a commit. The File Details Display, the list of files associated with a commit (“MODIFIED  PullRequest/Send-AzDPRStatusUpdates.ps1/” in the screenshot), will only show the first 10 files from the commit. But the File Count Column isn’t showing the total count of files in the commit, it’s only showing the count of files in the File Details Display (“1 file” in the screenshot). This seems to be a bug, but I haven’t reported it yet.

    (PS. Why is there a ‘/’ on the end of “PullRequest/Send-AzDPRStatusUpdates.ps1/”? The information submitted to the API did not have a slash on the end.)
  • The Documentation is REALLY CONFUSING when it comes to urls. All of the examples in the documentation present url structures that look like this:

    https://your-domain.atlassian.net/rest/devinfo/0.10/bulk

    Except, that’s not the right url!!

    All the APIs have an “Authorization” section in their documentation, which has a link to Integrate JSW Cloud with On-Premise Tools. And BURIED in that documentation is this quick note:

    The root URL for OAuth 2.0 operations is: https://api.atlassian.com/jira/<entity type>/0.1/cloud/<cloud ID>/

    Note that this URL is different to the URLs used in the documentation. However, you can easily translate from one to the other. For example, POST /rest/builds/0.1/bulk translates to POST https://api.atlassian.com/jira/builds/0.1/cloud/<cloud ID>/bulk.

    And I agree that it’s easy to translate. But, you have to first know that you need to translate it. Maybe, an alternative direction to take is to update the OAuth 2.0 APIs documentation to use the correct urls? Or, explicitly state it on all the API documentation, so that you don’t have to find it in a separate page?

Atlassian/Jira does provide this really great C# SDK for working/reading Jira issues. But, the SDK doesn’t contain any objects/code to work with the “DevInfo”. So, I want to post a couple baseline objects which can be used in an aspnetcore/netstandard application to push information to the DevInfo endpoint in Atlassian/Jira Cloud …

But, before doing that, this post will cover a few baseline objects used to authenticate with the Atlassian/Jira OAuth 2.0 endpoint.
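A sketch of what those baseline objects might look like; the property names follow the standard OAuth 2.0 client-credentials fields, while the token endpoint and audience values are assumptions to double-check against the Atlassian documentation:

```csharp
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Threading.Tasks;

public class AtlassianTokenRequest
{
    [JsonPropertyName("grant_type")]    public string GrantType { get; set; } = "client_credentials";
    [JsonPropertyName("client_id")]     public string ClientId { get; set; }
    [JsonPropertyName("client_secret")] public string ClientSecret { get; set; }
    // Audience value is an assumption; confirm it against the Atlassian OAuth 2.0 docs
    [JsonPropertyName("audience")]      public string Audience { get; set; } = "api.atlassian.com";
}

public class AtlassianTokenResponse
{
    [JsonPropertyName("access_token")] public string AccessToken { get; set; }
    [JsonPropertyName("expires_in")]   public int ExpiresInSeconds { get; set; }
    [JsonPropertyName("token_type")]   public string TokenType { get; set; }
}

public class AtlassianTokenClient
{
    private readonly HttpClient _httpClient;

    public AtlassianTokenClient(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<AtlassianTokenResponse> GetTokenAsync(AtlassianTokenRequest request)
    {
        // Endpoint is an assumption; confirm it in the Atlassian OAuth 2.0 documentation
        var content = new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json");
        var response = await _httpClient.PostAsync("https://auth.atlassian.com/oauth/token", content);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<AtlassianTokenResponse>(json);
    }
}
```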

The next post will contain objects to use with the “DevInfo” API.

Powershell for working with AzureDevOps Web Hooks

on Monday, June 22, 2020

Recently, I was working with the AzureDevOps Web Hooks infrastructure and I really enjoyed the tooling that they had provided. In particular, I liked the interface they provide which shows:

  • The number of successful and failed requests sent to a web hook/service,
  • The history of those requests,
  • And the details of what was actually sent and received (the payloads)

This visibility provided by the tooling was tremendously helpful in developing, debugging and monitoring.

There’s also a fair amount of documentation put together by the AzureDevOps team to make using these services easier.

However, I have two gripes about the documentation.

  • It took me a while to find the documentation because I didn’t know the exact search terms to look for. A generalized search for “azuredevops api .net client” will land you in the “Integrate Application” area of the documentation, rather than the desired “Service hooks” area.
  • When it got down to the details of which libraries to use and how to use them, the documentation became a bit thin. For example, the AzureDevOps website has a really cool piece of functionality where they hide the “git” event types when the project has no git repositories within it. So, I was thinking about “querying the possible input values” on a particular project to see if the response limited the values similarly. To do that, I originally looked in the .NET API browser for a ServiceHooks*HttpClient object. But, that doesn’t exist. So, I looked at the Publishers – Query Input Values api from the REST documentation. And, the first property I should put in the request body is “currentValues”, which is of type object (what are the details on that?), and has no description given. If I’m trying to query for the current values, what should I put in for “currentValues” and in what format? It becomes apparent that the request and response are using the same object, and that property is only used in the response. So you can ignore that property on the request. But, why doesn’t the documentation state that?

Enough with my tribulations, here’s the results:

  • These are a set of functions for working with Web Hooks (even though it’s misnamed as ServiceHook).
  • These Web Hooks are created for the very specific needs that I had, which were to create Web Hook integrations with a webservice I built for analyzing code commits, pull requests, and builds.
  • The powershell is a wrapper around the .NET Client library (nuget) Microsoft.VisualStudio.Service.ServiceHooks.WebApi version 15.131.1, which isn’t the latest version. Since it’s not the latest version of the library, that difference might account for the documentation on the .NET Client libraries not lining up with it. For example, the ServiceHooksPublisherHttpClient does not exist in the .NET API browser.

Record Request Body in ASP.NET Core 3.0

on Monday, June 15, 2020

Application Insights is a great tool, but it doesn’t record the body of a request by default. This is for good reason, payloads can be large and can sometimes contain sensitive information. But … sometimes you just need to record them.

When you do a google search there’s a great StackOverflow post (View POST request body in Application Insights) which gives you a lot of hope that it can be set up easily. But, with all the advancements in ASP.NET Core 3.0, it’s not quite as easy as that post makes it look.

Here are the obstacles you may need to overcome:

  • In ASP.NET Core 3.0, the request’s Body is no longer available to be read after a particular point in the application pipeline. You will need to create a middleware component to “EnableBuffering”. (This was done for purposes of speed as the underlying layers of the stack were replaced with Spans. There’s a new Request.BodyReader that works with the spans for high performance, but it’s also not available to be read after a particular point in the application pipeline.)
  • The ITelemetryInitializer runs after a request completes. This means that the request’s body is disposed of by the time the initializer runs and records the event. You will have to record the body somewhere after “EnableBuffering” is enabled and before the Action completes. Like inside of an IActionFilter.
  • You may not want to record the body of everything that flows through your website, so an ActionFilterAttribute can make it easy to select which action you would like to record.

So, here’s some code that can help accomplish that:
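(A sketch of those three pieces with my own class names, not the original post’s code: a buffering middleware, an opt-in action filter that captures the body, and a telemetry initializer that copies it onto the request telemetry.)

```csharp
using System.Threading.Tasks;
using System.IO;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Filters;

// 1. Middleware: allow the body to be re-read later in the pipeline
public class EnableRequestBufferingMiddleware
{
    private readonly RequestDelegate _next;
    public EnableRequestBufferingMiddleware(RequestDelegate next) => _next = next;

    public Task InvokeAsync(HttpContext context)
    {
        context.Request.EnableBuffering();
        return _next(context);
    }
}

// 2 & 3. Opt-in filter: capture the body while it is still readable
public class RecordRequestBodyAttribute : ActionFilterAttribute
{
    public override async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        var request = context.HttpContext.Request;
        request.Body.Position = 0;
        using (var reader = new StreamReader(request.Body, leaveOpen: true))
        {
            context.HttpContext.Items["RequestBody"] = await reader.ReadToEndAsync();
        }
        request.Body.Position = 0;

        await next();
    }
}

// Telemetry initializer: copy the captured body onto the request telemetry
public class RequestBodyTelemetryInitializer : ITelemetryInitializer
{
    private readonly IHttpContextAccessor _accessor;
    public RequestBodyTelemetryInitializer(IHttpContextAccessor accessor) => _accessor = accessor;

    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry requestTelemetry &&
            _accessor.HttpContext?.Items["RequestBody"] is string body)
        {
            requestTelemetry.Properties["JsonBody"] = body;
        }
    }
}
```

The middleware would be registered early in Configure, the filter applied to the actions you care about, and the initializer registered as a singleton ITelemetryInitializer alongside AddHttpContextAccessor.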

What’s an Andon Cord?

on Monday, June 8, 2020

I’ve seen a couple of great explanations of an Andon Cord, but I feel like there’s another side to them. Something that hasn’t already been written about in John Willis’ The Andon Cord, Six Sigma Daily’s The Andon Cord: A Way to Stop Work While Boosting Productivity, or even Amazon’s Andon Cord for their distributed supply chain (example 1, 2).

Personally, I’ve heard two descriptions of the Andon Cord and one of them makes a lot of sense to me, and the other one is the popular definition. The popular definition is that an Andon Cord is the capability to pause/halt any manufacturing line in order to ensure quality control and understanding. Anyone can pull the Andon Cord and review a product that they are unsure about. Which intuitively makes sense.

The second definition I heard was from Mike Rother’s book, Toyota Kata, and wasn’t so much of a definition as a small glimpse into the history of the Andon Cord. The concept that we know today started at Toyota before they made cars. This was back when they were making sewing machines. And, you need to imagine that the sewing machine manufacturing line was modeled off of Henry Ford’s Model T production lines. So there was a belt that ran across the factory floor and in between stations that people worked at. The Sewing Machines would be mounted to the line and would move from one station to the next on a timed interval (let’s say every 5 minutes). So, each person would be working at their station and they would have 5 minutes to complete the work of their station; which was usually installing a part and then some sort of testing. If the person on the line felt that they couldn’t complete their work on time, then they should pull the Andon Cord to freeze the line. This ensured that no defective part/installation continued on down the factory line. The underlying purpose of not having a bad part go down the line is that disassembling and reassembling a machine to replace a defective part is very expensive compared to stopping the line and fixing it while the machine is at the proper assembly level. This makes complete sense to me.

The second definition makes a lot more sense to me than the first because of one unspoken thing:

Anyone on the assembly line can pull the Andon Cord. The Andon Cord can be pulled by anyone, but it’s supposed to be for the people who are actually on the assembly line and have expert knowledge about their step within the overall process. It’s their experience and knowledge on that particular product line which makes them the correct person to pull the cord. It’s not for people from other product lines to come over and pull their line’s cord.

This is a classic problem that I’ve run into time and again. On multiple occasions, I have seen the Ops and Management side of businesses hear that “anyone can pull the Andon Cord” and immediately start contemplating how they can use the cord to add Review Periods into process lines and allow anyone to put the brakes on a production deployment if they don’t understand it.

But those ideas seem counter-productive to the overall goals of DevOps. You don’t want to add a Review Period as that just delays the business value from getting to the end customer. And you don’t want to stop a release because someone who isn’t an expert on a product has a question about it; you want that person to go ask the experts, and then you want the experts on a Product Line to pull the Andon Cord.

Now, in an idealized world, all the people involved in a product’s deployment process would be on the same Product Team. That team would be made up of Dev, Ops, and other team members. And all of those team members would be experts on the product line and would be the right people to pull the cord.

However, the majority of businesses I’ve talked with have separate Dev and Ops/Engineering teams. Simply because that structure has been lauded as a very cost effective way to reduce the company’s expenditure on Ops and allow for their knowledge to be centralized and therefore non-redundant. But, when the Ops team is separate from the Dev team, the Ops team has a sense that they are a part of every product line and that they should have ownership over allowing any release to go to production. Even when they are not experts on the product line and have no knowledge of what a change actually does.

This sense of ownership that Ops (and to some degree Management) have often manifests in the form of asking for a review period between the time a deployment has passed all of its testing requirements and when it actually goes out to production. This review period should start with a notification to the customers and usually ends a few hours or a day later, so that Operations, Management, and Customers all have time to review the change and pull the cord if they have concerns about the change. Except, the Customer and the Product Team are the only ones on that line who are really experts on the product. And for customers that work alongside their product teams, they usually know what’s coming long before scheduling; and customers that don’t work alongside their product teams usually won’t be involved at all at this point.

So, if the above is true, then Operations (and Management) wouldn’t be experts on the product line at this point in the release process, so why would they be pulling the cord at this point?

For Management, I’m not sure. But, for Operations, they are experts on the Production environment that the deployment will be going into. Ops should be aware if there are any current issues in the production environment and be able to stop a deployment from making a bad situation worse. But, that isn’t a product line Andon Cord. That’s an Andon Cord for an entire environment (or a subsection of an environment). The Operations team should have an Andon Cord to pause/halt all deployments from going into Production if something is wrong with that environment. Once the environment has been restored to a sense of stability, then Operations should be able to release the cord and let the queued deployments roll out again. (sidenote: Many companies that are doing DevOps have communication channels setup where everyone should be aware of a Production environment problem; this should allow for “anyone” to pull the Environment Andon Cord and pause deployments for a little while.)

Finally, in the popular definition of the Andon Cord there is a lot of attention paid to human beings pulling the Andon Cord, but not a lot of explicit statements about machines pulling the Andon Cord. For me, I see it as both groups can pull the Andon Cord. It seems like everyone intuitively understands that if unit tests, or smoke tests, or a security scan fails then the process should stop and the product should go back to the developer to fix it. What I don’t think people connect is that that’s an Andon Cord pull. It’s an automated pull to stop the process and send the product back to the station that can fix the problem with the least amount of rework required. To see that though, you have to first recognize that a CI/CD automated build and deployment process is the digital transformation of a factory floor’s product line. Your product moves from station to station through human beings and automated tooling alike (manual commit, CI build, CI unit tests, manual code review, manual PR approval, CI merge, CI packaging, CD etc.), and at every station there is a possibility of an Andon Cord pull.

AzureDevOps PR Policies with Powershell

on Monday, June 1, 2020

After tinkering with AzureDevOps Pull Request Statuses, I started thinking about automating the creation of the policies. The build policies need to be set up on an individual repository basis because they require a build definition to be associated with them. Some policies could be moved to the Project level (like the Minimum Number of Reviewers or Merge Strategy policies). But, I’m not sure if you can override a Project level policy in an individual branch.

So, I went searching to see if anyone had already done it and Jesse Houwing had: Configuring standard policies for all repositories in Azure Repos. I really liked his approach to exploring the API and using the json created by AzureDevOps to help construct his solution. I also was really interested by the idea of using the Azure CLI to perform the updates.

However, I was still pretty enamored with how easy the .NET SDK libraries were to use and thought I could continue to use them with Powershell to perform similar updates. So, here’s some sample code which can create a simple policy like: At least one reviewer is required on a Pull Request and that reviewer can be the requestor.

Update AzureDevOps PR Status’ with Powershell

on Monday, May 25, 2020

AzureDevOps integrates third party CI pipeline components through a Pull Request Status API (example usage with node.js). The end goal of the status API is to allow you to integrate any number of 3rd party tools into your pull request validation processes. Hopefully creating something that looks like this:

The Build succeeded link comes from the built-in Build Policy of Azure DevOps Branch Protection. But, for any other service, you’ll need to integrate its results using an external service. That’s how the Deploy succeeded link was created in the screenshot above.

There are examples on how to use the API for external services (like the node.js example), but I didn’t run across any examples for powershell. So, here’s a quick example.

The example code (sketched after the list):

  • Loads up some dlls from the AzureDevOps .NET Client Libraries.
  • Loads up the VSTeam powershell module.
  • Signs into Azure DevOps using your Personal Access Token (both VSTeam and the .NET Client libraries need to be signed into separately).
  • Uses VSTeam to retrieve your Project’s internal Guid.
    • You can do this with the .NET libraries too. I just want to support the usage of the VSTeam module.
  • Uses the .NET Libraries to retrieve your Repository’s internal Guid.
  • Creates a GitPullRequestStatus object and populates it with some information.
  • Uses CreatePullRequestStatusAsync to update the status.
    • The great part of CreatePullRequestStatusAsync is that the display will always show the most recently supplied status for an integration component. So, you don’t have to worry about “updating” the status. Just call CreatePullRequestStatusAsync repeatedly throughout your process to update the status.
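A condensed sketch of those steps (the dll paths, organization/project/repository names, PAT, and pull request id are all placeholders for your own values):

```powershell
# Condensed sketch; adjust paths and names for your environment
Add-Type -Path "$libPath\Microsoft.VisualStudio.Services.Common.dll"
Add-Type -Path "$libPath\Microsoft.VisualStudio.Services.WebApi.dll"
Add-Type -Path "$libPath\Microsoft.TeamFoundation.SourceControl.WebApi.dll"

Import-Module VSTeam
Set-VSTeamAccount -Account 'https://dev.azure.com/YourOrg' -PersonalAccessToken $pat
$pullRequestId = 42   # the PR to update

# Project Guid via VSTeam
$project = Get-VSTeamProject -Name 'YourProject'

# Repository Guid via the .NET client libraries (signed in separately with the same PAT)
$credential = [Microsoft.VisualStudio.Services.Common.VssBasicCredential]::new('', $pat)
$gitClient = [Microsoft.TeamFoundation.SourceControl.WebApi.GitHttpClient]::new(
    [uri]'https://dev.azure.com/YourOrg', $credential)
$repo = $gitClient.GetRepositoryAsync($project.Id, 'YourRepo').Result

# Create/refresh the status on the pull request
$status = New-Object Microsoft.TeamFoundation.SourceControl.WebApi.GitPullRequestStatus
$status.State = [Microsoft.TeamFoundation.SourceControl.WebApi.GitStatusState]::Succeeded
$status.Description = 'Deploy succeeded'
$status.TargetUrl = 'https://your-deploy-system/runs/123'
$status.Context = New-Object Microsoft.TeamFoundation.SourceControl.WebApi.GitStatusContext
$status.Context.Name = 'deploy'
$status.Context.Genre = 'your-service'

$gitClient.CreatePullRequestStatusAsync($status, $repo.Id, $pullRequestId).Result
```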

Book Review? A Seat at the Table

on Monday, May 18, 2020

Mark Schwartz’s A Seat at the Table (amazon, audible) is as close a description of DevOps to the one I have in my head as I’ve ever read. It definitely doesn’t have all the answers, and it asks some questions where the offered answers don’t feel like they satisfy all aspects of the question. But, it’s always an earnest answer which tries to address as many aspects as it can while still being very coherent and relatable.

Some pieces that I haven’t heard from other books (which seemed to match my own thoughts):

  • Projects which are bound by start and end dates aren’t enough for long term success. By placing an artificial end date on a project it only ensures that the product will become stale and unmaintained after that end date. Even if you were to have product teams (Project to Product) it still isn’t enough, because that only secures knowledgeable resources to be available as future needs are determined. You also need to flatten the management structure to give the product team full agency over the product direction. So that the team can pull feedback directly from their customers and determine the future needs themselves. This is very hard to achieve because of the need for the team to be knowledgeable about the long term needs of the company and its strategies.
  • In a lot of Lean Management books (like James P. Womack’s works) there are incredibly vivid and tangible descriptions of how to implement Lean Principles within a Product line. But, they feel somewhat vague about the role of management within Lean production. Even Gemba Walks (2nd Edition) description felt like it was saying “the leadership should set a direction and strategize on how to execute in that direction without creating confusion”. It’s very vague and kind of hand wavy.

    But, in this book, Mark Schwartz very clearly states that management is a form of Waste. “If your not coding, your a waste” (<—take that with a grain of salt, it’s taken out of context). The goal of management is to set a direction and remove as many impediments as possible from the path to achieve that goal. Management adds value by removing the impediments and reducing the number of interruptions. And you don’t need multiple levels of management to do that. For me, that helps explain why Lean Management books are so thin on the topic of what Management’s role is: Management just isn’t a massively important part of product creation in Lean.

Of course no two people ever see eye-to-eye on everything. There was one statement that really threw me for a loop and I’m puzzling to better understand his viewpoint:

In a chapter describing Quality in Software development he stated that all bugs should fall into two categories: either they should be fixed immediately, or they should be accepted as acceptable and never recorded/addressed as backlog items.

Now, I’m definitely on board with the idea that not all bugs are equal and there are some that should be fixed right away: the build is broken (test failure, deployment failure), there’s a production bug, or practically anything that prevents work from being completed. But, when a bug isn’t an immediate fix, I feel like it should still be added to the backlog to be fixed during the next sprint. Maybe I need to read/listen to the chapter again and see if he made some caveats around when you should change it from a bug report to a feature request, or whether a bug report that comes from an end user is always an immediate fix. I don’t know.

I really hope that one day he might write another book that tackles real world problem scenarios that he’s run into and how he overcame them. I feel like the majority of DevOps books bring up that there are almost always conflicts between colleagues surrounding producing functionality vs meeting security and compliance goals; but then the authors wave a magic wand by saying “and this leads to discussions that ultimately result in value being created.” But, I haven’t read a book that digs in and describes an actual situation that occurred, getting into the details of the difficulties (from conflicts in opinion, conflicts over policy interpretation, conflicts in ideology, etc) and then describes all the tools and strategies that were used to overcome each of the difficulties. Maybe I have read that book and I’m just too dense to have noticed it.

camelCase Enums in Swashbuckle

on Monday, May 11, 2020

In an earlier version of Swashbuckle.AspNetCore, SwaggerGenOptions came with two extension methods that helped facilitate returning camelCase values from your enum data types.

However, in newer versions of Swashbuckle, those methods have been deprecated and they ask you to use the built-in features of your preferred JsonSerializer (this example is for Newtonsoft’s Json.NET):
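(A minimal sketch of the ConfigureServices portion:)

```csharp
using Microsoft.Extensions.DependencyInjection;
using Newtonsoft.Json.Converters;
using Newtonsoft.Json.Serialization;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers()
        .AddNewtonsoftJson(options =>
        {
            // Serialize enum members as camelCase strings on the wire
            options.SerializerSettings.Converters.Add(
                new StringEnumConverter(new CamelCaseNamingStrategy()));
        });

    services.AddSwaggerGen();
}
```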

These options will produce the expected camelCased data to flow across the wire. But, it won’t update the documentation generated by Swashbuckle to show the camelCased values. Instead Swashbuckle will continue to show the enum values:

The Unchase.Swashbuckle.AspNetCore.Extensions collection has an Enum’s extension which will provide the documentation that you’re looking for, but it still expects you to return numerical values (instead of the camelCased names).

I was looking for both:

  • Return camelCased values across the wire, and
  • Update the swagger/OpenApi documentation to display the camelCased values

Of course, I couldn’t be the only one looking for something like this and there was a nice piece of starting code on stackoverflow (Swagger UI Web Api documentation Present enums as strings), which grew into an EnumDocumentFilter.cs class (it’s specific to Newtonsoft’s Json.NET). The document filter produces output that looks like this:

Swashbuckle Duplicate SchemaId Detected

on Monday, May 4, 2020

I’m sure this is rarely run into, but every once in a while, you might reuse the name of a class between two namespaces. For example:

  • YourCompany.CommonComponentLibary.SystemOperation
  • YourCompany.ProjectXXXXX.SystemOperation

If both of those classes are then exposed as return types from an API endpoint, there is a chance that Swagger might throw an error that looks something like this:

The Swashbuckle team has run into this / thought about this and there is a function called CustomSchemaIds to handle it. The function’s usage looks a little like this (default implementation):
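(A sketch of that usage, along the lines the next paragraphs discuss — handing it the type’s FullName:)

```csharp
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddSwaggerGen(options =>
    {
        // YourCompany.CommonComponentLibary.SystemOperation and
        // YourCompany.ProjectXXXXX.SystemOperation now produce distinct schema ids
        options.CustomSchemaIds(type => type.FullName);
    });
}
```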

As best as I can tell the intention behind this function is to generate out the expected name for a given Type, completely agnostic of any external information.

Not using external information helps ensure the result is always the same, because there is no “state of the system” information used to create unique names. This makes the names consistent despite processing order or updates to the underlying code base. So, it’s a good thing.

But … it can make for rather large names. Using the above code snippet as an example, the FullName is a unique name, but it contains a lot of information about how you internally generated the name. I’m not looking to have that information concealed for any security or risk purpose; it can just be hard to read in a json/yaml format or even in a Swagger UI display.

So, it might be easier to create a new Custom Schema Id which would try to stick with the default implementation, but alter it to return a longer name if conflicts occur. For example, what if the rules were:

  • Use the default implementation of the Class Name (without Namespace) when possible.
  • If the Class Name has already been used, then start prefixing Namespace names (from closest to the Class Name, back down to the root Namespace last).

An example of this might be:

  • YourCompany.CommonComponentLibary.SystemOperation –> SystemOperation
  • YourCompany.ProjectXXXXX.SystemOperation –> ProjectXXXXX.SystemOperation

But, to do that you would need to know about the current registered types (ie. external information).

Interestingly enough, the code which calls the CustomSchemaId/SchemaIdSelector function has access to a SchemaRepository class which contains exactly that information. But, it doesn’t pass the SchemaRepository into the function. The SchemaRepository was available within the code at the time of the SchemaIdSelector’s introduction (Swashbuckle.AspNetCore.SwaggerGen-5.0.0-rc2), so it could have been passed in. But, sometimes it’s just hard to foresee weird use cases like the one I’m describing.

There is a way to implement the use case I’m describing, by replicating the SchemaRepository within your own code. It doesn’t take a lot of effort, but it can feel like you’re doing something wrong. Here’s what that can look like:
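(A sketch of the idea: a selector that keeps its own registry of the ids it has already handed out, so it can return short names when possible, prefix namespaces on a conflict, and answer repeat lookups consistently.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class UniqueSchemaIdSelector
{
    private readonly object _lock = new object();
    private readonly Dictionary<Type, string> _issued = new Dictionary<Type, string>();

    public string SelectId(Type type)
    {
        lock (_lock)
        {
            // 5.0.0-rc3+ calls the selector on every lookup, so repeats must get the same answer
            if (_issued.TryGetValue(type, out var existing))
            {
                return existing;
            }

            var segments = (type.FullName ?? type.Name).Split('.');

            // Start with the class name alone, then prefix namespace segments (closest first) until unique
            for (var take = 1; take <= segments.Length; take++)
            {
                var candidate = string.Join(".", segments.Skip(segments.Length - take));
                if (!_issued.ContainsValue(candidate))
                {
                    _issued[type] = candidate;
                    return candidate;
                }
            }

            _issued[type] = type.FullName;
            return type.FullName;
        }
    }
}

// Wire-up:
// services.AddSwaggerGen(options => options.CustomSchemaIds(new UniqueSchemaIdSelector().SelectId));
```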

As a sidenote, I’ll touch on the large change that came with version 5.0.0-rc3. That version introduced/switched over to OpenApi 3.0 and the Microsoft.OpenApi libraries. It was a great change, but it also changed the way that the SchemaIdSelector worked. In the 5.0.0-rc2 version (the first version that introduced the selector), the selector would only be called once per type. In 5.0.0-rc3+, it started to be called on every type lookup. This means when you’re writing a custom selector, the selector needs to detect if a type has been selected before, and return the same value as it did previously.

Debug Builds with xprojs

on Monday, April 27, 2020

I recently switched an older project that was using the VS2015 .csproj file to the newer VS2017/“xproj” style file that came about with dotnet. I love the new xproj style files because they are so much cleaner/simpler and they integrate the nuget package information. It’s also fantastic that you can open and edit those files in Visual Studio without having to unload the project. Just update the file and watch as Visual Studio dynamically reloads the interface with the new information. Fantastic!

But, something that I didn’t expect to happen was losing line number information in my stack traces when exceptions occurred. So, it was surprising when our Test environment didn’t return the line numbers of an exception after switching over to the xproj style file.

What was missing?

I didn’t use an auto-conversion tool to do the conversion, so I mistakenly dropped a few important details from the file. The critical piece that dropped was the <DebugSymbols> and <DebugType> flags. The older style csproj file looked like this:
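(Sketched from memory rather than copied from the original file:)

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Int|AnyCPU' ">
  <DebugSymbols>true</DebugSymbols>
  <DebugType>full</DebugType>
  <Optimize>false</Optimize>
  <OutputPath>bin\</OutputPath>
  <DefineConstants>DEBUG;TRACE</DefineConstants>
</PropertyGroup>
```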

One of the key things in that Conditional is that the BuildConfiguration is named “Int” (rather than “Debug”).

This just needed to be replicated in the new csproj:
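(Again a sketch rather than the exact file:)

```xml
<PropertyGroup Condition=" '$(Configuration)' == 'Int' ">
  <DebugSymbols>true</DebugSymbols>
  <DebugType>full</DebugType>
  <Optimize>false</Optimize>
</PropertyGroup>
```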

As you can see they are pretty much the same.

What if you don’t want to set those values in your .csproj?

When I was trying to figure out what happened, I ran across a really cool piece of functionality that is in the new msbuild system. If you set the BuildConfiguration to “Debug” (instead of “Int”, like I had), it will automatically set the <DebugSymbols> and <DebugType> to values that will produce line numbers in your StackTraces/Exceptions. Very friendly!

ExceptionDetailConverter Example

on Monday, April 20, 2020

Thanks to Alan’s sharp eyes, he saw that I had forgotten to put in examples for the IYourBussApiExceptionDetailConverter and YourBussApiExceptionDetailConverter into the post called ExceptionHandler Needed. Which was a follow-up post to Create a Custom ProblemDetailsFactory.

So, here are some examples, which should definitely be customized to fit your environment’s needs:

Again, Thanks for catching that!

Book Review: The Art of Business Value

on Monday, April 13, 2020

Mark Schwartz’s The Art of Business Value (itrevolution, audio) doesn’t feel similar to other DevOps books because its focus is on a different aspect, the CIO/MBA analysis of Business Value. It’s a shorter book than others, but that’s mostly because it takes a hard look at mathematical analysis of value propositions. And, he also has a way of bringing together ideas from multiple sources that is compact and directly associates their values for comparison.

Because the book is focused on the definition of Business Value, it needs to discuss the history of prior works which define and guide thought leadership on that topic. He goes into details on ROI, NPV, PV and others. Giving examples of how they can be applied, and in what situations they can be misleading. It can feel a bit like when The Machine That Changed the World started describing the equations that could put dollar values to every step and aspect in the supply chain. This is important to do, and difficult without having seen it done before. But, ultimately, his point is that the prior works don’t quite fit right because of lost opportunity costs. And, that he hopes the work being done on new models, like Beyond Budgeting, can help teams/companies figure out the model that works for them. (There is no one size fits all approach.)

The part of the book that stuck with me was “Learning Bureaucracy vs Authoritative Bureaucracy”. Bureaucracy isn’t a bad thing, it can just go bad really fast. You have to have a balancing act between the stability and efficiencies that Bureaucracy provides and the creativity and improvements that Generative cultures provide. When bureaucracies define standards and best practices they remove some of the chaos, but it’s so easy to over-step just a little bit and create unnecessary slow downs (in the form of approvals, reviews, audits, testing overhead) which don’t provide Business Value at the cost that they come for.

(picture from Scaling Agile @ Spotify with Henrik Kniberg)

In the end, Mark Schwartz does give some prescriptions of what could be used to help improve the understanding of business value. And, I interpret that advice as creating a Learning Culture and building knowledgeable team members (he has a lot more detail). There seems to be an overarching theme of moving away from a single person that defines what Business Value is and into a team of people that can define and test for Business Value; he even mentions Eric Ries’ Lean Startup as describing a starting model. These are some of the ways to help connect the implementers’ understanding of what creates business value with management’s understanding. Which he notes is one of the areas where Lean Management and Agile are lacking in detail: how does management best perform its role in leadership and guidance and connect that vertical information flow into the horizontal business process flows?

(PS. Also check out Henrik Kniberg’s notes on Trust. In my opinion, Trust is the most important factor in creating efficiencies. And Trust vs Security is the classic debate that just does not seem to have a good solution.)

Reading ApiVersion with ASP.NET Core Versioning

on Monday, April 6, 2020

ASP.NET Core Versioning is the red-headed stepchild of the ASP.NET Core family. Unlike Dependency Injection, Routing, and HealthChecks, Versioning doesn’t have a page on the official ASP.NET MSDN documentation. What makes that odd is that Microsoft has internal REST Versioning Guidelines that they follow (AzureDevOps REST API). And, most API companies strongly suggest using versioning with APIs. But, again, Microsoft doesn’t have a page for it in their official MSDN documentation?

So, what does the ASP.NET Core API Versioning team have in the way of documentation? Their github repository, a good wiki guide, and a set of examples that can help you get started right away.

While the examples can help you get started, it does get a bit tougher when you want to do a little more advanced scenario. Here’s an example:

  • Url: https://somesite.company.com/apis/v1/hr/jobs
  • Optional header: x-comp-api-version: 1.2
  • In the example, the /v1 part of the url segment indicates the major version of the API to use. And, the API will return the result of the latest minor version implementation of that major version. Optionally, you can override the minor version used with the optional header.

So, the ASP.NET Core Versioning team did setup a way to do a fallback. Where if the url segment didn’t contain the api version, then it could fall back and inspect the headers for an api version. But, they didn’t foresee using one convention to override or refine another convention. But, really, who would pre-plan for that scenario?

What they did do was setup an interface, IApiVersionReader, which you can use to implement your own custom logic.

But, here’s where the real problem comes in. The ASP.NET Core Versioning Team started their work back when ASP.NET was the standard framework of the day. So, the .NET Core side feels a lot more like a bolt-on for an ASP.NET Versioning system, rather than a subsystem specifically designed to work with the Dependency Injection subsystem of ASP.NET Core.

One of the places where this “after-thought” seems to stick out is when you try to implement an IApiVersionReader in ASP.NET Core. In ASP.NET Core …

  • You do not define your IApiVersionReader within the Dependency Injection System
  • You do not tell the ApiVersioning system of the type that you would like generated for each request
  • Nor do you create a factory
  • You do define your IApiVersionReader as a singleton instance within the ApiVersionOptions class
  • And, you can only define your ApiVersionOptions class during ConfigureServices with AddApiVersioning, not in the Configure method with UseApiVersioning.

So, what if your IApiVersionReader needs a new instance of another class for each request that’s processed? Well … you can pass an IServiceProvider to your singleton instance at creation time. Just make sure you’ve got everything wired up in your IServiceCollection before creating your provider:
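A sketch of what that wiring can look like. The ApiVersioning options and the built-in readers come from the Microsoft.AspNetCore.Mvc.Versioning package, while CustomMinorVersionReader and IMinorVersionResolver/MinorVersionResolver are hypothetical placeholders for your own IApiVersionReader implementation and the per-request service it resolves:

```csharp
using Microsoft.AspNetCore.Mvc.Versioning;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Wire up everything the reader will need *before* building the provider
    services.AddHttpContextAccessor();
    services.AddTransient<IMinorVersionResolver, MinorVersionResolver>(); // hypothetical per-request service

    var provider = services.BuildServiceProvider();

    services.AddApiVersioning(options =>
    {
        options.ReportApiVersions = true;

        // The singleton reader instance gets the provider at creation time,
        // so it can create per-request helpers itself
        options.ApiVersionReader = ApiVersionReader.Combine(
            new UrlSegmentApiVersionReader(),
            new CustomMinorVersionReader(provider, headerName: "x-comp-api-version"));
    });
}
```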

PAGELATCH_UP and tempdb Contention

on Monday, March 30, 2020

The first thing to remember about PAGELATCH_UP is that while it’s not a lock, it can block.

The screenshot above is from the SQL Observability Tool: Idera SQL Diagnostics Manager for SQL Server. It’s well worth the price.

An observability tool that takes the time to highlight particular activity with bright red backgrounds is definitely giving you a hint that an improvement can be made. In the screenshot above, the boxes with the red background are queries that are blocked by another query. And the reason for the blocking is outlined in the red blocks on the right side: PAGELATCH_UP on wait resources 2:1:1, 2:5:32352, and 2:3:48528.

So, what does that mean?

Well … I first needed to know, what is PAGELATCH_UP? Here are some resources to get started (at the time of this writing these links work; I sure hope they continue to work in the future):

  • Diagnosing and Resolving Latch Contention on SQL Server (PDF)

    A 2011 whitepaper from Microsoft’s SQL Experts on how to work through latch problems.

    It starts out with a description of all the different latches, and describes PAGELATCH_UP as a type of hold on a page in memory taken in order to read the page with the potential for updating it.

    A similar description can be found on the MSDN Doc, Q&A on Latches in SQL Server Engine.

    Near the beginning of the Diagnosing PDF, there is a section on likely scenarios that will produce PAGELATCH_UP contention. One of those scenarios is “High degree of concurrency at the application level”, which is exactly the situation that produced the screenshot above. That screenshot was produced by having around 1000 simultaneous users trying to read from the same table at the same time, repeatedly.

    For a “High degree of concurrency at the application level” problem, the whitepaper suggested reading Table-Valued Function and tempdb Contention (broken link: http://go.microsoft.com/fwlink/p/?LinkID=214993).
  • Table-Valued Function and tempdb Contention / SQLCAT’s Guide To: Relational Engine (PDF)

    So, “Table-Valued Function and tempdb Contention” was a blog post on the Microsoft websites at some point in time. And, it has been referenced in a number of other blog posts. However, the original post is no longer available.

    But, in 2013 the Microsoft SQLCAT team took the blog post in its entirety and copied it into a new whitepaper they were writing, SQLCAT’s Guide To: Relational Engine. This is a fantastic blessing, because judging by google searches it really felt like that knowledge had been lost.

    The whitepaper describes some key pieces of information: PAGELATCH_UP, PFS, and SGAM.

    Less detailed information about these components can be found in TempDB Monitoring and Troubleshooting: Allocation Bottleneck.

    The important information that the Table-Valued Function and tempdb Contention whitepaper describes is that a PAGELATCH_UP block on Wait Resource 2:1:1 is a block that is waiting on the first Page Free Space (PFS) page of the first tempdb file. In fact, all of those Wait Resource references are PFS pages. You can determine if a Wait Resource is a PFS page by dividing the 3rd number by 8088. If it is evenly divisible by 8088, then it is a PFS page. (32352 / 8088 = 4, 48528 / 8088 = 6).

    The guide then goes on to describe what is happening in the background: The system is looking to allocate some tempdb space in order to hold temporary results as it is processing through a query. In order for the tempdb to reserve some free space it will need to request the amount of space it needs from SGAM, which will in turn ask the PFS system where free space is available. At that point in time, the PFS is read and if it finds free space it will mark that space as allocated (requiring a PAGELATCH_UP handler to read/write the PFS information).

    The way the original PFS handler algorithm was written, it first searches for free space at the very first PFS page (2:1:1) and then sequentially looks through PFS pages until it finds free space.
  • Recommendations to reduce allocation contention in SQL Server tempdb database

    SQLCAT’s Guide To: Relational Engine goes on to give a few suggestions on how to reduce tempdb PFS contention. This kind of contention has been around for a while, and one of the ways performance can be improved is to split tempdb into multiple files. In the wait resources above, the 2:5:32352 number can be read as:

    2 = Database Id, 2 is always tempdb
    5 = File 5, databases can be split into multiple physical files on disk
    32352 = Page 32352, pages are 8 KB in size and they are the smallest unit of space the system allocates. Page 1 and every page evenly divisible by 8088 are PFS pages.

    The commonly suggested approach to splitting tempdb into multiple files is to make one physical file for each logical core on the server. If you have 8 cores, then split tempdb into 8 evenly sized files (see the T-SQL sketch just after this list). The reason for this is that it allows each core to search a different physical file when looking for free space, which reduces contention on the files and their associated latches.
  • PFS page round robin algorithm improvement in SQL Server 2014, 2016 and 2017

    Around May of 2018 the SQL Server team also came up with an improved way to search the PFS pages in order to reduce contention. Instead of always starting with 2:X:1 as the first page read in each file, it now remembers the last page where it found space and starts the next search at the first page following that previous stopping point. This has the overall effect of evenly distributing tempdb allocations over the whole of the allocated space.

    Many years ago, spreading data out across the disk would have been a performance loss, especially on spindle hard drives. This is the reason that defrags and DBCC shrink operations push the data next to each other on disk.

    However, I don’t know if using regular spindle drives on a SQL server is a very common practice anymore. If you’re running in the cloud or in an on-premises data center, it’s likely that the backing storage is a SAN array of some kind. It might be a SAN array of spindle disks, but it’s still probably a group of disks working together to make a high performance array. At that point, does it really matter if the data is grouped together? (That would be pretty interesting to look up.)

    If it doesn’t need to be grouped together, then leaving the tempdb allocations evenly spread seems like a performance increase without any downsides.
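As a concrete idea of what splitting tempdb looks like, here is a minimal T-SQL sketch; the assumption of 4 logical cores, the file names, paths, sizes, and growth settings are all placeholders you would adjust for your own server:

    -- assuming 4 logical cores and an existing primary data file of the same size,
    -- add three more equally sized data files to tempdb (paths/sizes are illustrative)
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'D:\SqlData\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'D:\SqlData\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev4, FILENAME = 'D:\SqlData\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);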

Hopefully, that long wall of text helps lower the amount of research needed to find out what PAGELATCH_UP is and why it might be blocking threads in a SQL Server database.

Functional Testing Harness in ASP.NET Core

on Monday, March 23, 2020

The ASP.NET Core team has made testing a first class system within their ecosystem, and it really shows. One of the aspects they made a few steps easier is functional testing of web applications. Both ASP.NET and ASP.NET Core have the similar goal of creating a TestClient (GetTestClient) which can be used to perform the actual functional testing, but the ASP.NET Core team’s HostBuilder pattern makes it just a touch easier to configure your TestServer.

One of the very cool things with ASP.NET Core is that if you are writing functional tests for your web application, you can use your actual Startup.cs class to configure your TestServer. This saves a lot of configuration overhead that’s involved in setting up unit tests (which still should be created). With functional tests, you usually want to see how many parts of the system work together, but you still want to stub/mock some of the external connections.

So, how can you let the system wire itself up and then mock out just the external connections? The HostBuilder makes it pretty easy to do just that. You can use your normal Startup.cs class to configure the system and then add on an extra .ConfigureServices() function which will add in your mocks.

Here’s what that might look like:
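Here is a rough sketch of that wiring; the IExternalApiClient / FakeExternalApiClient types are hypothetical stand-ins for whatever external dependency you want to stub, and Startup is the application’s real startup class:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.TestHost;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // hypothetical external dependency and its test double
    public interface IExternalApiClient { }
    public class FakeExternalApiClient : IExternalApiClient { }

    public static class TestHostFactory
    {
        // spins up the real Startup class on an in-memory TestServer,
        // then layers fake registrations on top of the real ones
        public static async Task<HttpClient> CreateClientAsync()
        {
            var host = await Host.CreateDefaultBuilder()
                .ConfigureWebHostDefaults(webHost =>
                {
                    webHost.UseTestServer();
                    webHost.UseStartup<Startup>();
                })
                .ConfigureServices(services =>
                {
                    // registrations added here run after Startup.ConfigureServices,
                    // so the fake becomes the implementation that gets resolved
                    services.AddSingleton<IExternalApiClient, FakeExternalApiClient>();
                })
                .StartAsync();

            return host.GetTestClient();
        }
    }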

And here’s what some code that uses it could look like:
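A sketch of such a test, using xUnit against the hypothetical TestHostFactory above; the route and assertion are placeholders:

    using System.Threading.Tasks;
    using Xunit;

    public class HomePageTests
    {
        [Fact]
        public async Task Get_Root_ReturnsSuccess()
        {
            // the client talks to the in-memory TestServer; no ports or IIS involved
            var client = await TestHostFactory.CreateClientAsync();

            var response = await client.GetAsync("/");

            response.EnsureSuccessStatusCode();
        }
    }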

Octopus Deploy Configuration Transforms

on Monday, March 16, 2020

For the last couple of years, I’ve worked with Octopus Deploy as our main Deployment system. Prior to Octopus Deploy, we used custom made Powershell scripts that were built as extensions to TFS XAML builds.

The nice part of having all of our deployment logic in Powershell scripts is that we were able to reuse those scripts with Octopus Deploy. However, there are many features within Octopus Deploy which we were happy to ditch the scripts for and use what “came out of the box”. One of those places is Octopus Deploy’s Configuration Transforms.

ASP.NET (Web.Config)

Configuration Transforms are more of a legacy feature at this point, since they are designed to perform XML transforms for ASP.NET Web.config files. ASP.NET Core’s appSettings.json files (and its corresponding Configuration system) are fine without performing environment-specific transformations.

To use the Configuration Transforms feature, you only need to update your Deploy a Package step to use the Configuration Transforms feature and then set up a search pattern to find the files you want transformed.
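For example, and I’m hedging here because the exact syntax belongs to Octopus Deploy’s “Additional Transforms” setting, a pattern along these lines would apply an environment-specific transform file on top of the base Web.config:

    Web.#{Octopus.Environment.Name}.config => Web.config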

ASP.NET Core (appSettings.json)

Of course, with ASP.NET Core you don’t really need to do configuration transforms anymore. However, if you do need to provide the functionality to transform appSettings.json files you can do that with a Powershell script.

Quick sidenote: Using Octopus Deploy, you can use the JSON Configuration Variables to perform substitutions in JSON files. However, the feature is designed to have the substitution values provided by Octopus variables.

Here’s a quick Powershell script which can do that transformation:
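A minimal sketch of the idea, assuming the goal is to overlay the top-level values of an environment-specific appSettings.{Environment}.json onto the base appSettings.json; the parameter names, paths, and the shallow (top-level only) merge are all simplifications:

    param(
        [string] $AppSettingsPath = ".\appSettings.json",
        [string] $Environment     = $OctopusParameters["Octopus.Environment.Name"]
    )

    # appSettings.json -> appSettings.Production.json (for example)
    $overridePath = [System.IO.Path]::ChangeExtension($AppSettingsPath, "$Environment.json")

    if (Test-Path $overridePath) {
        $base     = Get-Content $AppSettingsPath -Raw | ConvertFrom-Json
        $override = Get-Content $overridePath   -Raw | ConvertFrom-Json

        # shallow merge: each top-level property in the override replaces the base value
        foreach ($property in $override.PSObject.Properties) {
            $base | Add-Member -NotePropertyName $property.Name -NotePropertyValue $property.Value -Force
        }

        $base | ConvertTo-Json -Depth 32 | Set-Content $AppSettingsPath
    }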

Book Review: Mindset: The New Psychology of Success

on Monday, March 9, 2020

This wasn’t the normal DevOps book that I enjoy reading, but it wasn’t too far from it either. The focus of the book was on creating a mindset such that your goal is to learn from the activities you do, rather than focus on winning or achieving a prize. This isn’t a new concept, but the history she relates helps paint a more detailed picture of how people can fall into a mindset where completing a task is the goal, rather than gaining the deeper understanding of how you complete a task.

Her description of how people fall into this mindset comes from a tremendous amount of research and first hand experience educating children. Her theory is that education systems that put pressure on teachers to teach towards a test often set an unfortunate precedent that passing the test is the highest priority. This changes the goals, or mindsets, of the teachers so that they can rationalize that if they get the children to memorize the answers and regurgitate them, then they have completed their task; if the children pass the test, it reinforces the false belief that the children have learned. Extending from that idea, this sets up classrooms where children are given tests throughout the year, either passing or failing, and then they move on to the next subject. That creates a psychological barrier where the children believe they either know it or they don’t, and that there’s no way that can possibly change; a false impression that the test has decided this is all they will ever know.

Instead, research from Dr. Dweck and others has shown that if you change the goal from testing into fostering positive learning experiences that can take root in children, then the children are self-motivated to tackle hard problems using hard work, overcome discouragement through a desire to improve their own knowledge and abilities, and create a virtuous cycle by looking at knowledge gain as the real reward. This is very similar to the Lean practices of Continuous Improvement (Toyota Kata stresses continuous improvement as does Lean Startup), Dr. Westrum’s Generative Culture, and W. Edwards Deming’s thoughts on education (The New Economics: For Industry, Government, Education).

I truly enjoyed the many examples that she had working with children, and the advice she gave on how to encourage children to learn is equally applicable to adults; so there is a lot of value within those pages.

However, I would encourage others to skip the chapters which use sports examples. Dr. Dweck can be a bit single-minded in her focus to connect success with a learning mindset. While a willingness to continually grow and improve is necessary to achieve great success, it’s often a breakthrough within a particular field, or a combination of improvements that creates a new framework, which actually produces the success; it doesn’t come from just having a learning mindset.

Reference Microsoft.AspNetCore.Routing in Library

on Monday, March 2, 2020

So, I’m confused by the Migrate from ASP.NET Core 2.2 to 3.0 documentation when it comes to the deprecated libraries.

In the Remove obsolete package references section, there is a long list of packages hidden under “Click to expand the list of packages no longer being produced”. The package from that list that I’m going to focus on is Microsoft.AspNetCore.Routing. That’s the package that contains IEndpointRouteBuilder.

The article explains that the removed packages are now available through the shared framework Microsoft.AspNetCore.App. So, let’s test out if that works. I’m going to:

  • Create a new class library which will reference IEndpointRouteBuilder.
  • Attempt to get that class library to successfully compile.

The examples in this post can be found on github at IEndpointRouteBuilderDemo.

Here’s the sample IEndpointRouteBuilderExtensions.cs class that will be used in our test:
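The real file lives in the repo linked above; this is only a minimal sketch of its shape. The /ping endpoint is arbitrary, the point is simply that the file cannot compile without a reference to IEndpointRouteBuilder:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Routing;

    namespace IEndpointRouteBuilderDemo
    {
        public static class IEndpointRouteBuilderExtensions
        {
            // maps a trivial endpoint; the body doesn't matter, the compile-time
            // dependency on IEndpointRouteBuilder is what the demo is testing
            public static IEndpointRouteBuilder MapPingEndpoint(this IEndpointRouteBuilder endpoints)
            {
                endpoints.MapGet("/ping", async context =>
                {
                    await context.Response.WriteAsync("pong");
                });

                return endpoints;
            }
        }
    }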

So, let’s try it with the suggested .csproj file settings:
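Something along these lines: the plain Microsoft.NET.Sdk with a FrameworkReference. I’m assuming a netstandard2.0 target here, which is one way to trip the error below; the exact file is in the repo:

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <!-- the shared framework reference suggested by the migration docs -->
        <FrameworkReference Include="Microsoft.AspNetCore.App" />
      </ItemGroup>

    </Project>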

And the error we get back is:

C:\Program Files\dotnet\sdk\3.1.102\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.FrameworkReferenceResolution.targets(283,5): error NETSDK1073: The FrameworkReference 'Microsoft.AspNetCore.App' was not recognized

Okay … so, what can we do? Well, there is a closed issue on github about this problem: AspNetCore Issue #16638, Cannot find the AspNetCore Nuget packages for 3.0 (specifically routing). The response to that issue is to do what was demonstrated above. So, what else can we try?

That issue accurately describes that the Microsoft.AspNetCore.Routing dll is embedded in Microsoft.AspNetCore.App, which is located on disk at C:\Program Files\dotnet\packs\Microsoft.AspNetCore.App.Ref\3.1.2\ref\netcoreapp3.1\Microsoft.AspNetCore.Routing.dll. But, how can you get it referenced properly?

One way to get it referenced is to change the .csproj file to use the SDK of Microsoft.NET.SDK.Web and change the TargetFramework to netcoreapp3.1. Like this:
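Which would look roughly like this (a sketch of the change just described):

    <Project Sdk="Microsoft.NET.Sdk.Web">

      <PropertyGroup>
        <TargetFramework>netcoreapp3.1</TargetFramework>
      </PropertyGroup>

    </Project>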

But, when you do that, you get a new error:

CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point

Which, kind of makes sense. The project has been changed over to a netcoreapp, so it kind of expects to create an executable. The great part about executables is that they also create .dlls, which is what we are looking for in the first place.

We just need to get it to compile in order to get the .dll. To do that, let’s create a DummyMain.cs class which will provide the required static ‘Main’ method:
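Something as simple as this works; the namespace is just a guess to match the demo project:

    namespace IEndpointRouteBuilderDemo
    {
        // exists only to satisfy the Microsoft.NET.Sdk.Web requirement for an entry point
        public class DummyMain
        {
            public static void Main(string[] args)
            {
            }
        }
    }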

Which provides a successful compile:

Done building project "IEndpointRouteBuilderDemo-Compiles.csproj".

Build succeeded.
    0 Warning(s)
    0 Error(s)

Of course, this isn’t the ideal result. And, it would be hard to believe that the ASP.NET Core team expected this to occur. So, it’s most likely my misunderstanding of how to reference the Shared Framework correctly, which would prevent the need for the rest of these workarounds.

Hopefully the Microsoft team will be able to shed more light on this in AspNetCore Issue #19481, Reference IEndpointRouteBuilder in a class library (ASP.NET Core 3.1).

Which they did!

So, the documentation from Migrate from ASP.NET Core 2.2 to 3.0 doesn’t use the Microsoft.NET.SDK.Web SDK reference in the .csproj file. It uses the Microsoft.NET.SDK reference instead. Along with the Shared Framework reference, this allows for the class library to be compiled without needing the DummyMain method:
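Putting that together, the class library’s .csproj ends up looking something like this (a sketch of the documented approach):

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <TargetFramework>netcoreapp3.1</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <FrameworkReference Include="Microsoft.AspNetCore.App" />
      </ItemGroup>

    </Project>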

Convenient Rebinds for ASP.NET Core DI

on Monday, February 24, 2020

ASP.NET Core’s Dependency Injection system is a solid implementation and can provide solutions for most DI needs. But, like most DI systems, it is an opinionated API (and rightly so). I moved to the ASP.NET Core implementation from Ninject and, in doing so, there were a couple of Ninject methods that I really missed. Especially the .Rebind functions.

These are the functions that will take an interface-to-implementation binding in the system, remove the old binding, and set up a new one; with new lifetime scoping and new configuration/implementation details. With ASP.NET Core’s system, they really want the developer of the application to set up exactly the bindings that they desire at the very beginning of the program. And, the first binding that’s put in place should be the last binding made for that interface-to-implementation pair.

Their approach is well reasoned and it has its merits. It should lower the overall confusion, and the knowledge needed, when trying to figure out what bindings are being used. If you start reading Startup.cs’s ConfigureServices and you find a binding declaration, that should be the correct binding which will be resolved at runtime.

However, because of Ninject’s .Rebind functions, I am stuck in the mindset that bindings should be flexible as new subsystems are added. If you make a library, MyLib, that has a default caching implementation that uses InMemory caching, then your library will most likely set up a binding of IMyLibCache to MyLibInMemoryCache. If I then create an add-on library that implements caching using redis, MyLib.Redis, then I want to be able to swap out the binding of IMyLibCache with a new binding to MyLibRedisCache.

With the prescribed API of ASP.NET Core’s DI system, the way you would do this in code would look something like this:
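A sketch of that, using the IMyLibCache / MyLibInMemoryCache / MyLibRedisCache names from above; AddMyLib is a hypothetical extension method the MyLib library would expose:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;

    // stand-in types for the example
    public interface IMyLibCache { }
    public class MyLibInMemoryCache : IMyLibCache { }
    public class MyLibRedisCache : IMyLibCache { }

    public class Startup
    {
        // the add-on's redis binding has to be registered *before* AddMyLib is called ...
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddTransient<IMyLibCache, MyLibRedisCache>();
            services.AddMyLib();
        }
    }

    // ... because inside MyLib the default binding is made with TryAddTransient,
    // which only binds when nothing has claimed IMyLibCache yet
    public static class MyLibServiceCollectionExtensions
    {
        public static IServiceCollection AddMyLib(this IServiceCollection services)
        {
            services.TryAddTransient<IMyLibCache, MyLibInMemoryCache>();
            return services;
        }
    }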

But, that just feels backwards. When you were writing your original code, you would have to know upfront that someone in the future would have a need to use a different caching system. So, you would have to have the forethought to create the binding using .TryAddTransient() instead of .AddTransient().

It would feel much more natural if it was written like this:
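Something like this instead, reusing the types from the sketch above; AddMyLibRedisCache is a hypothetical extension method the add-on library would expose, and RemoveAll comes from Microsoft.Extensions.DependencyInjection.Extensions:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // MyLib binds its default in-memory cache with a plain AddTransient ...
            services.AddMyLib();

            // ... and the add-on simply rebinds IMyLibCache afterwards
            services.AddMyLibRedisCache();
        }
    }

    public static class MyLibRedisServiceCollectionExtensions
    {
        public static IServiceCollection AddMyLibRedisCache(this IServiceCollection services)
        {
            // drop the existing IMyLibCache binding and replace it with redis
            services.RemoveAll<IMyLibCache>();
            return services.AddTransient<IMyLibCache, MyLibRedisCache>();
        }
    }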

So, that’s the Ninject thinking that is stuck in my head. And, because of it, here are a few convenience overloads which can make working with IServiceCollection a little bit easier:
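A minimal sketch of those helpers; the Rebind* names are mine, they just mirror Ninject’s vocabulary on top of RemoveAll plus the standard Add* registrations:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;

    public static class ServiceCollectionRebindExtensions
    {
        // removes any existing IService binding and registers a new transient one
        public static IServiceCollection RebindTransient<TService, TImplementation>(this IServiceCollection services)
            where TService : class
            where TImplementation : class, TService
        {
            services.RemoveAll<TService>();
            return services.AddTransient<TService, TImplementation>();
        }

        public static IServiceCollection RebindScoped<TService, TImplementation>(this IServiceCollection services)
            where TService : class
            where TImplementation : class, TService
        {
            services.RemoveAll<TService>();
            return services.AddScoped<TService, TImplementation>();
        }

        public static IServiceCollection RebindSingleton<TService, TImplementation>(this IServiceCollection services)
            where TService : class
            where TImplementation : class, TService
        {
            services.RemoveAll<TService>();
            return services.AddSingleton<TService, TImplementation>();
        }
    }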

Implementation Efficiency Frustration Tiers

on Monday, February 17, 2020

For me, a lot of stress and anxiety about working efficiently comes from the momentary feelings of being ineffective. If I need to accomplish a work item, how long will it take to complete that work item? How many sub-tasks do I need to complete before I can complete the work item? How many of those do I understand and can do with a minimal amount of effort, and how many do I need to do research for before I can even begin implementing them? The more time, energy, and amount of knowledge that has to be gained to complete a work item, the more stressful it becomes to complete it.

So, I wanted to take a moment and start to break down those feelings into categories. I read Scott Hanselman’s Yak Shaving post years ago, and it has become a part of the shared language among the development teams I work with. Before reading that post, I had described the act of Yak Shaving as “speed bumps”; but I would have to explain it every time I used it. Hopefully, getting this written down can help me define a language so I can communicate this feeling more easily.

At the moment, the feeling of implementation efficiency can be broken down as:

Tier 3

This is when you need to implement something, but in order to do it you are going to need to learn a new technology stack or a new paradigm in order to complete it. The task you’re trying to complete could be something as trivial as adding Exception Handling to an application, but in order to do it, you’re going to research APM solutions, determine which best fits your needs and then implement the infrastructure and plumbing that will allow you to use the new tool.

An example of this might be your first usage of Azure Application Insights in an ASP.NET Core application. Microsoft has put in a tremendous amount of work to make it very easy to use, but you’ll still need to learn how to create an Application Insights resource, add Application Insights into an ASP.NET Core application, re-evaluate whether you created your Application Insights resource correctly to handle multiple environments (and then most likely reimplement it with Dev, Test, and Prod in mind), determine which parameters unique to your company should always be recorded, and then work with external teams to set up firewall rules, develop risk profiles, and work through all the other details necessary to get a working solution.

Tier 3 is the most frustrating because you have to learn so much yourself just to get to your end value. So, for me, it’s the one that I also feel the most nervous about taking on, because it can feel like I’m being incredibly inefficient by doing so much work to produce something that feels so small.

Tier 2

This is when you already have all the knowledge of how to do something and you understand what configuration needs to take place, but you are going to have to do the configuration yourself. When you know at the beginning exactly how much work it will take to complete, there is a lot less frustration because you can rationalize the amount of time spent for the end value that’s achieved. The moment this becomes frustrating is when the extra work that you’re putting in is a form of Yak Shaving. For example, when you are dealing with a production issue and you realize that you’re going to need to implement component X in order to get the information necessary to solve the problem, that’s the moment you heavily sigh because you realize the amount of manual work you’re going to have to put in place just to get component X working.

This level of efficiency usually happens when you’re working on the second or third project that uses a particular technology stack. Let’s use Application Insights as the example again. You’ve probably already developed some scripts which can automatically create the Application Insights instances, and you’re comfortable installing the nuget packages that you need, but you still need to run those scripts by hand, and set up permissions by hand, and maybe even request firewall rules to be put in place. None of these tasks will really take up too much time, but it feels like wasted time because you’re not producing the real end value that you had in mind in the first place.

Tier 1

This is when the solution is not only well known to yourself, but your organization has developed the tooling and infrastructure to rigorously minimize the amount of time spent on implementing the solution. This doesn’t come cheap, but the peace of mind that comes with having an instantaneous solution to a problem is what makes work enjoyable. The ability to stumble upon a problem and think, “Oh, I can fix that”, and within moments be back to working on whatever you were originally doing, creates a sense that any problem can be overcome. It removes the feeling that you’re slogging through mud with no end in sight; instead, that feeling is replaced with confidence that you can handle whatever is thrown at you.

It’s rare that you can get enough tooling and knowledge built up in an organization that Tier 1 can be achieved on a regular and ongoing basis. It requires constant improvement of work practices and investment in people’s knowledge, skillsets, and processes to align the tooling and capabilities of their environment with their needs.

When creating working environments, everyone starts out with a goal of creating a Tier 1 scenario. But, it seems pretty difficult to get there and maintain it.

This is one of the pieces I find very frustrating about security. There is a lot of information available about what could go wrong, and about different risk scenarios, but there just isn’t a lot of premade tooling which can get you to a Tier 1 level of Implementation Efficiency. People are trying though: OWASP has the Glue Docker image, Github’s automated security update scanner is fantastic, and NWebSec for ASP.NET Core is a step in the right direction. But, overall, there needs to be a better way to get security into that Tier 1 of Implementation Efficiency zone.

