XProj/SDK … or use `<OutDir>`

on Monday, August 31, 2020

In the previous post, Use XProj/SDK project files with ASP.NET Web Apps, I described creating a custom AfterBuild Target which would copy the files from the standard `/bin/{configuration}/{architecture}` dll output location to the standard `/bin` webapp output location.

A coworker pointed out that the output location can be controlled with the `<OutDir>` build property, which is way easier to use. So, here’s an updated .csproj file:
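A minimal sketch of the change (the rest of the properties in a real file will vary; `<OutDir>` is the key line):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
    <!-- Send build output straight to /bin, where IIS/IIS Express expects it -->
    <OutDir>bin\</OutDir>
  </PropertyGroup>
</Project>
```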

Use XProj/SDK project files with ASP.NET Web Apps

on Monday, August 24, 2020

The new SDK style project files (sometimes called VS2017 project files, and at one point referred to as xproj files) were not designed to work with ASP.NET Web Application projects. If you’re looking to use the newer SDK project files, then Microsoft is hoping you’ll use them with ASP.NET Core web apps. However, the SDK project format is so much easier to work with than the older style that it’s painful to go back to the old files and their associated packages.config files once you’ve moved to the new style.

So, if you were to convert a .NET 4.8 Web App’s .csproj file to an SDK style project file, what problems now occur:

  • You can’t target a webapp as an output type with the SDK style project file. The closest you have is the ability to target framework net48 with a library/dll output type (the default type).
  • I think that might be it?

How do you overcome that challenge:

  • If your output type is a library/dll and you set your targetFramework to net48, then you will create an output directory at /bin/{Debug|Release|Xxxx}/net48 which contains all the dlls and other references that would have normally gone into the web app’s /bin folder. So, you are producing the files that you need.
  • You just need to copy those files into the root /bin folder for IIS/IIS Express to run the website normally. To do that you can add a “CopyToBin” Target to your .csproj file. This target will run after the build completes (see the example after this list).
  • After that, you will want to directly modify the .csproj file to associate files which are commonly grouped together, such as Web.*.config files.

Here is an example:
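A sketch reconstructed from the steps above; the target and item names are illustrative, not exact:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>

  <!-- Copy everything from /bin/{Configuration}/net48 into the root /bin after each build -->
  <Target Name="CopyToBin" AfterTargets="Build">
    <ItemGroup>
      <BuildOutput Include="$(OutputPath)**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(BuildOutput)"
          DestinationFiles="@(BuildOutput->'bin\%(RecursiveDir)%(Filename)%(Extension)')" />
  </Target>

  <!-- Group the config transforms under Web.config, like the old project format did -->
  <ItemGroup>
    <None Update="Web.Debug.config" DependentUpon="Web.config" />
    <None Update="Web.Release.config" DependentUpon="Web.config" />
  </ItemGroup>
</Project>
```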

Unfortunately, doing this only makes things work on your local machine; it won’t really help your build process. If you use a third-party tool to do builds for you, you’ll need to create a custom script which runs after your build completes, but before the results are packaged for deployment. This would need to be a custom solution for your environment, but the basic outline would look something like this:

  • Have your build system check that the .csproj file is (a) building a “web app” (however you define that), (b) a net4X application, and (c) using an SDK style csproj file.

    With that many checks needed before performing an action, you know this isn’t a great idea.
  • Once verified, you’ll want to copy all the normal content files from the source code to a designated output location (css, js, imgs, views?) and then recreate the /bin directory using the output from the build.

Microsoft.Extensions.DependencyInjection - ASP.NET

on Monday, August 17, 2020

There is a fantastic Stack Overflow answer on how to use Microsoft.Extensions.DependencyInjection inside of a WebAPI 2 project (ASP.NET Full Framework). While it’s not cutting edge, it is a good middle-ground solution when rewriting an entire ASP.NET application to ASP.NET Core seems out of the realm of possibility.

I took the code snippet and broke it apart a little bit to create a reusable project to house it. It’s not great, so I don’t really think it’s worth creating a GitHub repo or a NuGet package, but if you want to drop it into a project in your code base it could help out.

Here’s an example usage in a .NET 4.8 ASP.NET MVC / WebApi 2 based project:

And, it relies on a DependencyInjection.AspNet.WebApi library, which is targeting framework net48 (here’s the .csproj):
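A minimal net48 library project along these lines; the package versions are assumptions for the time period, not the post’s exact file:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Versions are illustrative; any releases that support net48 should work -->
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="3.1.7" />
    <PackageReference Include="Microsoft.AspNet.WebApi.Core" Version="5.2.7" />
  </ItemGroup>
</Project>
```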

And, here’s the original stackoverflow posts code, just slightly modified:

Powershell Range Operator Performance

on Monday, August 10, 2020

This is a truly silly experiment, but it caught my interest. I was discussing Iron Scripter Challenges with thedavecarroll and he was using switch statements with range operators (PSGibberish.psm1):
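The pattern looked something like this sketch (not the module’s actual code):

```powershell
# Illustrative only -- a switch using range operators inside a function
function Get-LengthCategory {
    param([string]$Word)
    switch ($Word.Length) {
        { $_ -in 1..4 }   { 'short';  break }
        { $_ -in 5..8 }   { 'medium'; break }
        { $_ -in 9..100 } { 'long';   break }
    }
}
```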

What struck me as odd was the idea that the range operators might be calculating each of their ranges at runtime, on each execution of the function.

So, I ran a couple of experiments and the range operators are pretty neat. Here’s what I think (with no real definitive proof to support) is happening with them:

  • Range Operators used within Switch statements, that are contained within Functions are Cached.
    • It seems like when the function is JIT’d, the Range Operator value is calculated and Cached.
    • So, there’s no reason to pre-calculate the values and reference them within the function.
    • And, if you do reference variables from outside the function, each lookup requires a scope lookup, which can also be time consuming. (Although, performance isn’t why people turn to PowerShell in the first place.)
  • Range Operators used within a Switch statement outside of a Function are not cached (like a code block).

To determine this, I ran a series of tests against a function which focused on executing a switch statement that used range operators:
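A sketch of the shape of the function under test; the ranges are illustrative:

```powershell
# Function under test: a switch built on range operators
function Test-RangeSwitch {
    param([int]$Value)
    switch ($Value) {
        { $_ -in 1..10000 }     { $a = 1 }
        { $_ -in 10001..20000 } { $a = 2 }
    }
}
```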

To determine how much time was spent making the function call and setting the $a variable, this function was used. This is noted as “Calling a Function Overhead”.
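A sketch of that baseline: the same call shape, minus the switch statement:

```powershell
# Baseline used to measure call overhead alone
function Test-CallOverhead {
    param([int]$Value)
    $a = $Value
}
```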

Switch Avg Execution Time = Total Avg Execution Time – Calling a Function Overhead

The results were:

The results indicate that the Range Operator run inside of a Function and the Explicitly Scoped Cached Values have about the same running time, which might indicate that when the function is JIT’d, it calculates the Range Operator values and caches them.

The large increase in running time between Range Operator and Cached Values not in Func might indicate that searching for variables outside of the function scope has a relatively costly penalty by comparison.

And, finally, the Range Operator that was run outside of a Function was most likely calculated on each execution. While relatively expensive, it’s surprisingly fast: .NET uses 10,000 ticks per millisecond, so that’s ~0.19 milliseconds for compilation and execution.

Full Test Script:
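A minimal reconstruction of such a test script; function names, ranges, and iteration counts are illustrative:

```powershell
# Compare a range operator inside a function against pre-cached range values
$iterations = 10000

function Test-RangeSwitch {
    param([int]$Value)
    switch ($Value) {
        { $_ -in 1..10000 }     { $a = 1 }
        { $_ -in 10001..20000 } { $a = 2 }
    }
}

$inFunction = Measure-Command {
    for ($i = 0; $i -lt $iterations; $i++) { Test-RangeSwitch -Value 15000 }
}

# Pre-calculate the ranges at script scope, then reference them from the function
$script:rangeOne = 1..10000
$script:rangeTwo = 10001..20000
function Test-CachedSwitch {
    param([int]$Value)
    switch ($Value) {
        { $_ -in $script:rangeOne } { $a = 1 }
        { $_ -in $script:rangeTwo } { $a = 2 }
    }
}

$cached = Measure-Command {
    for ($i = 0; $i -lt $iterations; $i++) { Test-CachedSwitch -Value 15000 }
}

"Range operator in function: $($inFunction.TotalMilliseconds) ms"
"Cached values in function:  $($cached.TotalMilliseconds) ms"
```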

Diagnosing Slow node builds on Win2016

on Monday, August 3, 2020

In a move from a Windows 2012 R2 build server to a Windows 2016 build server, the nodejs build step nearly doubled in its execution time. This seemed odd, since everything else was pretty much the same on the new server. So, what could the difference be?

Fortunately, a coworker pointed me towards the Windows Performance Recorder from Microsoft’s Assessment and Deployment Kit (Windows ADK). This worked really well in troubleshooting the issue, and I just wanted to drop in some screen grabs to show its visualizations.

The build was on-premise, so I did have access to install the kit (see the sidenote below about the offline installer) and control the execution of the Windows Performance Recorder to coincide with the execution of the problematic step. This would have been much more difficult on a hosted build server.

Getting the visualization is a two-step process.

  • First, Windows Performance Recorder is used to track analysis information from all over your system while the issue is occurring. You can track different profiles, or record more detailed information in particular areas through manual configuration.
  • Once the problem has been recorded, the analysis information can then be pulled up in Windows Performance Analyzer, which has a pretty nice interface.

First, here’s a screenshot of Windows Performance Analyzer from the “dotnet publish” (ie. npm init/build) step on the older Windows 2012 R2 server. In the screenshot, the step started by running node.exe and performing the init command. Which would copy over the npm packages from the local npm-cache. This would take about 60 seconds to complete.

However, when performing the same build/same step on the new Windows Server 2016 instance, node.exe wasn’t the dominant process during npm’s init phase. Instead another process was dominant (greyed out in the screenshot), which ran for nearly the same length of time as node.exe and seemed to mirror the process. Because the other process was competing for CPU time with node.exe, the node process took nearly 200 seconds to complete (up from 60 seconds).

So, what was the other process?

MsMpEng.exe, a.k.a. Windows Defender, the classic anti-virus software. On the Windows Server 2016 image I was using, Windows Defender was pre-installed and doing its job.

I didn’t take a screenshot of it, but using the Disk IO dashboard I was able to drill into what files MsMpEng.exe was reading and something struck me as odd. It almost looked as if Windows Defender was virus checking the file before it was read before the copy, and read again at the destination after the copy. I’m not sure if that’s the case, but it did seem odd.

For the resolution, I added some Path Exclusion rules to the realtime file scanning capability of Windows Defender. These were specific paths used by the build system, and we knew those files should be coming from trusted sources. I still left realtime process scanning on, and also ensured the scheduled scans were set up, which would look through all the files.
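Exclusions like this can also be scripted with the built-in Defender PowerShell module; a small sketch, with illustrative paths rather than the actual build paths:

```powershell
# Exclude specific build-system paths from Defender’s realtime file scanning
Add-MpPreference -ExclusionPath 'C:\BuildAgent\work', 'C:\BuildAgent\npm-cache'

# Confirm the exclusion list took effect
(Get-MpPreference).ExclusionPath
```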

The final result of adding the excluded paths reduced the overall time for the npm init section down to 80s (down from 200s, but also up from 60s on the old server; not bad), and MsMpEng.exe was still reporting that it was performing realtime virus scans on the process itself.

A quick sidenote: The offline installer is kind of odd. To do the offline install, you run the online installer and direct it to download its files/resources to the same directory where the online installer’s adksetup.exe is at. The next time you run adksetup.exe, it will detect that the files have already been downloaded and present a different set of options when it runs.

Best Practices should not be the end of a conversation

on Monday, July 27, 2020

Sometimes, Best Practices can be used as an end all to a conversation. No more needs to be said, because Best Practices have laid out the final statement … and that doesn’t really feel right.

Best practices weren’t always best practices. At some point a new technology came around and people started working with it to create their own practices. And the practices that worked stuck around. Over time, those practices might be written down as suggested practices for a particular technology stack. And, when coming from the authoritative source for a technology stack, they might be labeled as Best Practices.

But, usually, when I hear Best Practices used as an end all to a conversation, it’s not in reference to a particular technology stack. It’s used as a generalization, as guidance for how to approach an area. The guidance is supposed to help people who haven’t done something before start off in the right direction. It’s supposed to be a starting point. And I think you’re supposed to continue to study the usage of those practices, to determine what the right practices are for your environment and your technology stack. Maybe even set up criteria to evaluate if a practice is working successfully in your environment. And, then change a practice if it doesn’t meet your needs.

That isn’t a trivial thing to do. You have to first understand where the practices came from and what they were accomplishing. But, once you do, you should be able to see where their limitations are and where they can be expanded. Sometimes a technology stack wasn’t available when a practice was written, and that changes the possible ways a desired outcome can be achieved. To change a practice, you have to be knowledgeable of what outcomes are trying to be achieved, and the pitfalls that come with them; and then make a decision based on the trade-offs of going to a new practice.

The only way to create a new practice, is if Best Practices are the start of a conversation, not the end of one.

(Maybe we could also drop the word “Best”, and just make them practices?)

More Andon Cord

on Monday, July 20, 2020

I was listening to Gene Kim’s new podcast, Idealcast, and his interview with Dr. Steven Spear (Decoding the DNA of the Toyota Production System), when the subject of Andon Cords came up. Since I had recently written a post on Andon Cords, I was very curious whether their conversation would line up with my thoughts or show a different angle or new depths. The text of the conversation was (from Ep 5):


Gene Kim

The notion of the Andon Cord is that anyone on the front line would be thanked for exposing an ignorance/deficiency for trying to solve a problem that the dominant architecture or current processes didn't foresee.

Dr. Steven Spear

That's right. Basically, the way the Andon Cord works is you, Gene, have asked me, Steve, to do something and I can't. And, I'm calling it to your attention. Such that the deficiencies in my doing become a reflection of the deficiencies in your thinking / your planning / your designing. And we're going to use the deficiencies in my doing as a trigger to get together and improve on 'your thinking and my doing' or 'our thinking and our doing'.

Gene Kim

And the way that's institutionalized amplifies signals in the system to self correct.


To me, that definitely feels like a new angle or viewpoint on Andon Cords. It still feels like it aligns with the “popular definition”, but it’s clearer in its description. It seems to follow a line of thinking that “We’ve got a problem occurring right now; let’s use this as an opportunity to look at the problem together. And, then let’s think about ways to improve the process in the future.” Which feels like a more directed statement than “the capability to pause/halt any manufacturing line in order to ensure quality control and understanding.”

But, the definition Gene and Dr. Spear give does imply something I want to point out: Gene’s scenario is one where the person using the Andon Cord is directly involved in the processing line, seeing a problem as it’s happening. The cord isn’t being pulled by someone who wasn’t asked to be involved in the process.

I wonder if there are any texts on recognizing when an Andon Cord is being used inappropriately? Is that even a thing?

