Convenient Rebinds for ASP.NET Core DI

on Monday, February 24, 2020

ASP.NET Core’s Dependency Injection system is a solid implementation and can provide solutions for most DI needs. But, like most DI systems, it is an opinionated API (and rightly so). I moved to the ASP.NET Core implementation from Ninject and, in doing so, found there were a couple of Ninject methods that I really missed, especially the .Rebind functions.

These are the functions that take an existing interface-to-implementation binding in the system, remove the old binding, and set up a new one with new lifetime scoping and new configuration/implementation details. With ASP.NET Core’s system, they really want the developer of the application to set up exactly the bindings that they desire at the very beginning of the program. And, the first binding that’s put in place for an interface should also be the last binding made for it.

Their approach is well reasoned and it has its merits. It should lower the overall confusion, and the knowledge needed, when trying to figure out which bindings are being used. If you start reading Startup.cs’s ConfigureServices and you find a binding declaration, that should be the binding which will be resolved at runtime.

However, because of Ninject’s .Rebind functions, I am stuck in the mindset that bindings should be flexible as new subsystems are added. If you make a library, MyLib, that has a default caching implementation which uses InMemory caching, then your library will most likely set up a binding of IMyLibCache to MyLibInMemoryCache. If I then create an add-on library that implements caching using Redis, MyLib.Redis, then I want to be able to swap out the binding of IMyLibCache with a new binding to MyLibRedisCache.

With the prescribed API of ASP.NET Core’s DI system, the way you would do this in code would look something like this:

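A rough sketch of that registration order (AddMyLib() here is a hypothetical registration helper that MyLib would expose; the cache types come from the example above):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // The application (or the MyLib.Redis add-on) has to register its
    // replacement implementation first...
    services.AddTransient<IMyLibCache, MyLibRedisCache>();

    // ...and MyLib's own registration helper has to use TryAddTransient()
    // internally, so its InMemory default backs off whenever a binding
    // for IMyLibCache already exists.
    services.AddMyLib();
    // internally: services.TryAddTransient<IMyLibCache, MyLibInMemoryCache>();
}
```
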
But, that just feels backwards. When you were writing your original code, you would have to know up front that someone in the future would need to use a different caching system. So, you would have to have the forethought to create the binding using .TryAddTransient() instead of .AddTransient().

It would feel much more natural if it was written like this:

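Something along these lines, again with hypothetical AddMyLib()/AddMyLibRedis() helpers, where the library registers its default normally and the add-on simply rebinds it afterwards:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // MyLib registers its default binding the normal way...
    services.AddMyLib();
    // internally: services.AddTransient<IMyLibCache, MyLibInMemoryCache>();

    // ...and the Redis add-on swaps that binding out afterwards, using the
    // RebindTransient() convenience overload shown at the end of this post.
    services.AddMyLibRedis();
    // internally: services.RebindTransient<IMyLibCache, MyLibRedisCache>();
}
```
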
So, that’s the Ninject thinking that is stuck in my head. And, because of it, here are a few convenience overloads which can make working with IServiceCollection a little bit easier:
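
Here is a sketch of what those overloads could look like (the Rebind* names are my own convention, not part of the framework; each one just drops any existing registrations for the service type and adds the new binding):

```csharp
using Microsoft.Extensions.DependencyInjection;

public static class ServiceCollectionRebindExtensions
{
    // Removes any existing registrations of TService, then adds a transient binding.
    public static IServiceCollection RebindTransient<TService, TImplementation>(
        this IServiceCollection services)
        where TService : class
        where TImplementation : class, TService
    {
        RemoveBindings<TService>(services);
        return services.AddTransient<TService, TImplementation>();
    }

    // Removes any existing registrations of TService, then adds a scoped binding.
    public static IServiceCollection RebindScoped<TService, TImplementation>(
        this IServiceCollection services)
        where TService : class
        where TImplementation : class, TService
    {
        RemoveBindings<TService>(services);
        return services.AddScoped<TService, TImplementation>();
    }

    // Removes any existing registrations of TService, then adds a singleton binding.
    public static IServiceCollection RebindSingleton<TService, TImplementation>(
        this IServiceCollection services)
        where TService : class
        where TImplementation : class, TService
    {
        RemoveBindings<TService>(services);
        return services.AddSingleton<TService, TImplementation>();
    }

    // IServiceCollection is an IList<ServiceDescriptor>, so we can walk it
    // backwards and remove every descriptor registered for the service type.
    private static void RemoveBindings<TService>(IServiceCollection services)
    {
        for (var i = services.Count - 1; i >= 0; i--)
        {
            if (services[i].ServiceType == typeof(TService))
            {
                services.RemoveAt(i);
            }
        }
    }
}
```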

Implementation Efficiency Frustration Tiers

on Monday, February 17, 2020

For me, a lot of stress and anxiety about working efficiently comes from the momentary feelings of being ineffective. If I need to accomplish a work item, how long will it take to complete that work item? How many sub-tasks do I need to complete before I can complete the work item? How many of those do I understand and can do with a minimal amount of effort, and how many do I need to research before I can even begin implementing them? The more time, energy, and knowledge it takes to complete a work item, the more stressful completing it becomes.

So, I wanted to take a moment and start to break down those feelings into categories. I read Scott Hanselman’s Yak Shaving post years ago, and it has become a part of the shared language among the development teams I work with. Before reading that post, I had described the act of Yak Shaving as “speed bumps”; but I would have to explain it every time I used it. Hopefully, getting this written down can help me define a language so I can communicate this feeling more easily.

At the moment, the feeling of implementation efficiency can be broken down as:

Tier 3

This is when you need to implement something, but in order to do it you are going to need to learn a new technology stack or a new paradigm. The task you’re trying to complete could be something as trivial as adding Exception Handling to an application, but to do it, you’re going to research APM solutions, determine which best fits your needs, and then implement the infrastructure and plumbing that will allow you to use the new tool.

An example of this might be your first usage of Azure Application Insights in an ASP.NET Core application. Microsoft has put in a tremendous amount of work to make it very easy to use, but you’ll still need to learn how to create an Application Insights resource, add Application Insights into an ASP.NET Core application, re-evaluate whether you created your Application Insights resource correctly to handle multiple environments (and then most likely reimplement it with Dev, Test, and Prod in mind), determine which parameters unique to your company should always be recorded, and then work with external teams to set up firewall rules, develop risk profiles, and work through all the other details necessary to get a working solution.

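For what it’s worth, the in-code portion really is small; a minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore package is installed:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Pulls the instrumentation key / connection string from configuration,
    // which is exactly where the per-environment (Dev/Test/Prod)
    // re-evaluation described above comes into play.
    services.AddApplicationInsightsTelemetry();
}
```

Everything else in that list of steps happens outside the code.
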
Tier 3 is the most frustrating because you have to learn so much yourself just to get to your end value. So, for me, it’s also the one that I feel the most nervous about taking on, because it can feel like I’m being incredibly inefficient doing so much work to produce something that feels so small.

Tier 2

This is when you already have all the knowledge of how to do something and you understand what configuration needs to take place, but you are going to have to do the configuration yourself. When you know at the beginning exactly how much work it will take to complete, there is a lot less frustration because you can rationalize the amount of time spent for the end value that’s achieved. The moment this becomes frustrating is when the extra work that you’re putting in is a form of Yak Shaving. For example, when you are dealing with a production issue, and you realize that you’re going to need to implement component X in order to get the necessary information in order to solve the problem, that’s the moment you heavily sigh because you realize the amount of hand work you’re going to have to put in place just to get component X working.

This level of efficiency usually happens when you’re working on the second or third project where you’ve used a particular technology stack. Let’s use Application Insights as the example again. You’ve probably already developed some scripts which can automatically create the Application Insights instances, and you’re comfortable installing the NuGet packages that you need, but you still need to run those scripts by hand, and set up permissions by hand, and maybe even request firewall rules to be put in place. None of these tasks will really take up too much time, but it feels like wasted time because you’re not producing the real end value that you had in mind in the first place.

Tier 1

This is when the solution is not only well known to yourself, but your organization has developed the tooling and infrastructure to rigorously minimize the amount of time spent on implementing the solution. This doesn’t come cheap, but the peace of mind that comes with having an instantaneous solution to a problem is what makes work enjoyable. The ability to stumble upon a problem, think, “Oh, I can fix that”, and within moments be back to working on whatever you were originally doing creates a sense that any problem can be overcome. It removes the feeling that you’re slogging through mud with no end in sight, and replaces it with confidence that you can handle whatever is thrown at you.

It’s rare that you can get enough tooling and knowledge built up in an organization that Tier 1 can be achieved on a regular and ongoing basis. It requires constant improvement of work practices and investment in people’s knowledge, skillsets, and processes to align the tooling and capabilities of their environment with their needs.

When creating working environments, everyone starts out with a goal of creating a Tier 1 scenario. But, it seems pretty difficult to get there and maintain it.

This is one of the pieces I find very frustrating about security. There is a lot of information available about what could go wrong, and about different risk scenarios, but there just isn’t a lot of premade tooling which can get you to a Tier 1 level of Implementation Efficiency. People are trying, though: OWASP has the Glue Docker image, GitHub’s automated security update scanner is fantastic, and NWebSec for ASP.NET Core is a step in the right direction. But, overall, there needs to be a better way to get security into that Tier 1 Implementation Efficiency zone.

3rd Party Event Tracing Calls in Apigee

on Monday, February 10, 2020

Apigee has information on their website which makes event tracing of calls to a 3rd party system relatively easy. But, the information is spread out over a couple of pages. To provide this functionality effectively, you’ll want to use two different features together:

  • Use a PostClientFlow to ensure the event logging is performed after the response is sent to the client.
  • Use a ServiceCallout Policy, with the <Response /> element removed. This will ensure the call to the 3rd party system is done as a Fire-and-Forget call, rather than one that waits for a response before continuing processing.

    There is a MessageLogging Policy, which is specifically designed for this logging scenario. However, the MessageLogging policy doesn’t allow for header information to be added into the call; and there are a number of 3rd party logging systems (like Splunk) which use the Authorization header to verify the incoming caller.

The end result of making these changes looks a little like this:

The 5 steps within the workflow that are grouped by a red box show a group of 2 service calls which are each logging to separate 3rd party systems (we wanted to compare the two products to see which would fit our needs better). The red box in the top left shows the complete processing time within Apigee, 78 ms. And the small red box at the bottom right (in Postman) shows the amount of time from the client’s perspective, just 46 ms.

To do this, you’ll want to set up a shared flow that will make the ServiceCallouts. Remember that each ServiceCallout should have its <Response> element removed:

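A sketch of one of those callouts (the policy name, payload variable, token variable, and URL are placeholders; the important part is that the <Response> element is simply not present):

```xml
<ServiceCallout continueOnError="true" enabled="true" name="SC-Log-To-Splunk">
  <DisplayName>SC-Log-To-Splunk</DisplayName>
  <Request variable="splunkLogRequest">
    <Set>
      <Verb>POST</Verb>
      <Headers>
        <!-- The MessageLogging policy can't set this header, which is why
             a ServiceCallout is used instead. -->
        <Header name="Authorization">Splunk {private.splunk.hec.token}</Header>
      </Headers>
      <Payload contentType="application/json">{logging.payload}</Payload>
    </Set>
  </Request>
  <!-- No <Response> element, so the callout is fire-and-forget. -->
  <HTTPTargetConnection>
    <URL>https://splunk.example.com/services/collector/event</URL>
  </HTTPTargetConnection>
</ServiceCallout>
```
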
Once that’s in place, you’ll just need to use the shared flow as part of a <PostClientFlow> within your APIs. I wish this was an element I could use within the Post-proxy Flow Hook; that way I could add it to all APIs in one place.
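
The hook-up in each API proxy might look roughly like this (FC-Event-Logging stands in for a FlowCallout policy that invokes the shared flow above):

```xml
<!-- In the ProxyEndpoint, alongside the PreFlow / Flows / PostFlow elements -->
<PostClientFlow name="PostClientFlow">
  <Response>
    <Step>
      <Name>FC-Event-Logging</Name>
    </Step>
  </Response>
</PostClientFlow>
```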

A communication benefit of microservices

on Monday, February 3, 2020

Recently, a blog post called Monoliths are the Future caught a coworker’s attention, and he had some interesting questions that stemmed from it:

You know what's interesting; I feel like we all get this sense that we _have_ to be doing microservices

Like any existing architecture is garbage and, regardless of your business or technical constraints, you are failing if you don't immediately make a wholesale switch over to microservices

And so now I feel like I see more and more articles saying, "Hey, whoever told you that you needed to stop everything and do microservices was wrong. You should take both designs into consideration and implement something that meets the needs of your business/technical constraints."

I'm just interested in where the all-or-nothing perspective leaked in; was it our perception as self-conscious technologists? Or was it click-baity writing?

For me, I think a fair share of the attention that microservices get stems from the email that Jeff Bezos wrote around 2002, made famous by a Google+ rant from Steve Yegge. In that post he outlined Bezos’s “Big Mandate”:

  • All teams will henceforth expose their data and functionality through service interfaces.
  • Teams must communicate with each other through these interfaces.
  • There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  • It doesn’t matter what technology they use.
  • All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

This list wasn’t shared on the internet until 2011, and SOA and microservices had already become popular at that point. But, what Bezos’s list did was create the building blocks for AWS, and AWS’s incredible success is what I consider to be one of the largest factors in people looking at microservices and believing that they are the way to be successful. Many people don’t understand the reasons behind AWS’s microservice-based success; they simply believe the false equivalency that they will achieve AWS’s successes by using microservices.

But, for me, that list from Bezos captures an underlying management strategy that is far more impactful than the microservices themselves. What that list succinctly describes is that all interactions between the systems at Amazon/AWS would now be based around contracts and interfaces that are developed and managed by service providers. This means that the service provider is responsible for working with their customers to develop a usable and meaningful service contract that guarantees that if the client can provide inputs X, then their system will provide service Y. They guarantee that the service will be available 24 hours a day. They guarantee that it will be usable without requiring any communication between the two separate teams, without the overhead of one-off reviews, and without any specialized approval processes.

This contract-first always-available platform approach effectively breaks one of the most difficult constraints on all projects: Communication Overhead.

In The Mythical Man Month, Fred Brooks describes that when you add another person onto a project, you are also adding a rapidly growing number of communication paths to that project team; with n people there are n(n-1)/2 of them. If you had 10 people on a project (45 paths) and you add an 11th (55 paths), then you just added 10 more avenues of communication and slow-down.

Within teams that are highly effective, a significant fraction of their productivity comes from their shared knowledge about what they’re building and the goals they’re trying to achieve together. Agile seemingly builds on the back of the lessons learned from the Mythical Man Month by trying to reduce that communication overhead. Agile uses the daily stand-up to build shared knowledge and shared vision on a daily basis. The daily stand-up is where a team member can quickly ask “I want to do X because I think it will give us benefit Y. Is everyone on board with that?” Because everyone at the daily team meeting has been sharing their knowledge, that statement wouldn’t require a great deal of time explaining the context and history of how the thought came to be or why it would be beneficial. The reduction in explanation time between those team members is one of the aspects that makes that team effective.

Conversely, when you have to communicate outside of your team, that’s when you introduce a communication constraint. Bringing another team into the conversation and bringing them up to speed takes time. You have to explain to them where your team is at and get them into your team’s mindset. The hardest part of that conversation is that while you’re explaining the background and reasoning for how your project got to its current point, the new team is going to view every decision that was made along the way through their own current understanding and guiding principles, not through your team’s current understanding and guiding principles. This leads to multiple levels of slow-down and overhead, not to mention the hardest thing of all: disagreement.

What Jeff Bezos’s email did was lower the overall cross-team communication overhead company-wide at Amazon. If your project could satisfy the requirements of interface/contract X, then the service would provide the results Y at any time of day, with no waiting for two teams to find time to meet, no evaluation of requirements and agreement on the purpose of the product, and no time-consuming process of bringing individuals with differing viewpoints into alignment.

But, many people don’t think about that side of it. I feel like the original designers of microservices have tried to stress the importance of Team Autonomy in microservices as a key component to ensuring that each team/each service provider can create new functionality without needing to reach agreement and get sign-off from another team. In a monolithic database project that I have worked on, I have seen this problem occur around the very few but very important tables that multiple teams share. They require a great deal of communication overhead to ensure all teams are aware, have analyzed and reviewed, and have implemented plans to handle any consequences of an update; all before even the first action can be taken on the update.

But, as Fred Brooks said, there is No Silver Bullet, and reducing cross team communication overhead is just one piece of a much larger and more complicated puzzle of making an effective working environment.

So, I think “the all-or-nothing perspective leaked in” because of a combination of things: AWS’s great success, Martin Fowler (et al.) evangelizing microservices, and the audience that was absorbing this information not having the full perspective of what was truly driving the benefits.

As if I haven’t already given my two cents … here were my actual thoughts on the original blog post, Monoliths are the Future:

  • There were a number of statements within the post (and I have not listened to the audio) which made me believe that an underlying problem the speakers were grappling with was low code quality standards and coding practices at their companies. For example, statements like “we lost all of our discipline in the monolith”, “they’re initiating things and throwing it over a network and hoping that it comes back”, and “Now you went from writing bad code to building bad infrastructure”.

    I believe that ensuring quality within the products and services you provide is a critical necessity for anything to be successful. In Lean Six Sigma practices, defects are one of the critical wastes. You must ensure high quality and resilient code in order to reduce the amount of time spent on rework.

    You don’t have to use microservices, or Gang of Four, or XYZ to ensure high quality standards; but the company you work for has to define high quality services and products as one of their highest priorities. From there, the people at the company will develop the standards, processes, and tooling to ensure that they are creating high quality products and also monitoring that those standards are upheld every day.
     
  • The line “There are reasons that you do a microservice. So, to me a microservice makes sense in the context of…” was a silver lining to me.

    The writer was outlining that when he can see value with using a particular approach, then he is on board with making that approach successful. This is probably the most important aspect of choosing any architectural approach. If the people that are implementing the approach can see the value within it, then they are not only on board with making it happen, they will find ways to make it better than it was originally designed.
     
  • There is an aspect of microservices where monolithic datastores are separated into small autonomous datastores. This separation is to improve team autonomy and lower communication overhead. But, there is a flip-side to creating those small autonomous datastores. Besides the obvious duplication of data, it also creates a new need of bringing the data back together in order to analyze it from a system wide perspective. Whenever data stores are broken apart, there is a need to create a new centralized datastore for reporting and business insights. These have most recently been coming up as Big Data, Data Lakes, and other data collectors for Business Intelligence and Data Analytics platforms.

    So, eventually, you always get back to a monolithic data store; but maybe not a monolithic application.
     
  • My last thought is pretty negative. Again, I haven’t listened to the audio, so this might be completely off base. I just don’t get a strong sense that the author of the article or the speakers being quoted are really thinking about things from an overall workload productivity perspective. How are all the people of the company working together to make the company’s product? What are the processes that the company has in order to make those products? And, what are the most critical foundational principles that the company needs to do in order to make those processes as effective as possible? If you have the answers to those questions, then the question of microservices vs monoliths will have a clear answer.

