Book Review: Change Management (by ProSci)

on Monday, April 26, 2021

In many ways, the book Change Management: The People Side of Change (Jeffrey Hiatt, Timothy Creasey) is an introductory book that encourages you to visit ProSci's website and purchase more of their products. It is written in a way that demonstrates they understand the problems you are facing, and it informs you that their ADKAR model will be an instrumental part of the solutions to your problems. It then tells you that you can learn more about their model by visiting their website and purchasing products X, Y, and Z. (sidenote: I did go and purchase one of those products.)

For the most part, the book's guidance isn't too groundbreaking compared to other leadership and change-enablement books (Company-Wide Agility with Beyond Budgeting, Open Space & Sociocracy; Decide & Deliver; The Lean Startup), but it did have a couple of points that I thought it highlighted better than others:

The Importance of Executive Sponsorship

They did a really good job of highlighting Executive Sponsorship and continued to bring it up throughout the text in order to reinforce its importance. They outlined that there is a difference between Executive Support, which they consider to be approving a project to move forward and providing funding, and Executive Sponsorship, which is active involvement in the project by the sponsor. The sponsor must be continually involved in the project's activities and repeat the Awareness goals to the team(s) so that everyone is heading in the same direction and has the same expectations of outcomes.

The Acknowledgement of Resistance

The book brings up resistance to change over and over again, which is great. But I was disappointed with the majority of the book's guidance on how to deal with resistance, especially when it comes to people who will not change. In Chapter 2 (pg 40), the authors put forward this bullet point:

  • Employee resistance is the norm, not the exception. Expect some employees to never support the change.

I applaud the authors for writing that employee resistance is the norm, not the exception, as many books don't spend time on that subject. But I became frustrated with the book's handling of the details behind "Expect some employees to never support the change."

What is the book's advice for handling employees who will never support the change?

The main body of the book never gets into any details on this subject, and there are instances where its generalized advice would not work when an employee will never support the change (when they will always be combative and resistant). Because of this, the main body of the book never sets up how to handle conflict resolution and decision making.

Instead, the book continually states that you should use the ADKAR framework to pinpoint where the resistance is coming from and how it's being expressed by each individual, and that it's the responsibility of the organizational change management team to have strategies for group change resistance, while individual managers should have strategies for dealing with individual change resistance. Great, but … what are those strategies?

By the end of the main body of the book (which ends on pg 95 of a 145 pg book), you are given no strategies. Then, in Appendix A, they tell you that you can find the detailed strategies by purchasing their Best Practices book for $395. And, that you can have much more effective conversations with your "front-line employees" if you prepare them by supplying them with copies of Employee's Survival Guide to Change for $15 each (<--- this is the product I bought).

The reason I bought Employee's Survival Guide to Change is that Appendix D (pg 138-145) is a list of some frequently asked questions and answers from that supplemental book. And, in that appendix, they wrote something that surprised me. Something that I have seen so many project management books try to avoid:

What are the consequences of not changing?

The consequences to you of not changing depend on how critical the change is to the business and your role. For changes that are less critical to business success or that do not directly impact you, the consequences may be minimal. However, if you elect not to support the change, and the change is critical to the success of the organization, the possible consequences are:

  • Loss of employment
  • Reassignment or transfer with the potential for lower pay
  • Lost opportunities for promotion or advancement in the organization
  • Reduced job satisfaction as you fight the organization and the organization fights you

They actually wrote that an employee might need to be fired. For a book trying to tell you that there is a way to help your employees move in another direction, saying that you may need to fire some of them is awkward. Because of that awkward dichotomy, I see many books either give the topic of employee separation a single-line nod (The Machine that Changed the World, Gemba Walks) or a paragraph (Decide & Deliver), and then walk away from the subject. But my hope is that the Employee's Survival Guide to Change contains that really difficult-to-write text which explains that if an organization is going in one direction and an employee is going in a different direction, then it's best if they part ways. Without separating, it just creates conflict every single day, which causes emotional drain, blockages in work, and a dysfunctional environment.

(I think the book also did a good job of counter-balancing the quote above with some other guidance which appeared directly after it, so I've added that guidance to the bottom of this post.)

It's been a bit puzzling to me that some people who put up combative resistance to change do it believing that if things change they will lose their jobs, even when the company says they're not going to lose their jobs. What puzzles me is that when an employee becomes unwilling to change to the point of disruptive behavior, don't they realize that by not allowing the company to pursue its goals, they are forcing the company into a position where it will need to separate from them in order to move forward? It's like a self-fulfilling prophecy: "I'm gonna lose my job if the company changes, so I'll repeatedly push back against every change every day until they see the error of their ways. Dang, why did I get fired?"

Lack of Conflict Management and Timelines

I know the book points to further reading to find out how to handle resistance. But nothing inside this book deals with the concept of Conflict Management. The book puts forward that there will be resistance to change, not just from individual contributors but also from middle managers. Let's imagine a situation where a "front-line employee" is resistant to a change, and the most effective person to convince them that the change is worthwhile is their direct supervisor. But what if the direct supervisor also doesn't believe in the change? Now the direct supervisor (the book calls this person a middle manager) is supporting the front-line employee's resistance, and is expressing their own resistance to the change to the other middle managers. This could continue up the chain. The book's advice is that eventually the Executive Sponsor will have to be the person responsible for working through the change resistance with their direct subordinate. And then, once that direct report is convinced, the direct report should work with their subordinates, and so on. But none of these levels of convincing has a timeline attached to it.

This is one of the most troubling parts of the authors' message for me. They set up the idea that a Change Management framework will be used in conjunction with a Project Management framework. And, I have to assume, they are implying that timelines need to be an aspect of the Project Management framework ("earmarks adequate time"). But, in contradiction, they also say that each individual's change management process will not be on a schedule that you can control; people will change on their own timelines.

This creates a problem when a middle manager, who is critical to the approval of a decision needed to move a project forward, is resistant to the change. That resistant manager can now block everything. It seems unreasonable to make everyone else wait for a single individual to go through the non-timetabled individual change management process. So, when this occurs, how do you resolve that conflict? For that, I think you need a Decision Making & Conflict Resolution framework.

I think the ADKAR framework gives concrete terminology to emotional aspects of projects which aren't highlighted in many other frameworks, but I don't think it can work without a Decision Making & Conflict Resolution framework. You need a methodology within the business to handle conflicts which can block and derail progress. I have to assume that one of the reasons this book highlights the importance of Executive Sponsorship is so that the Executive Sponsor (the person with the highest authority) can push through these moments, ensuring that decisions are made in a timeframe that will not derail a project. If that's the case, the problem I have with this book's guidance is that the person at the top becomes their own bottleneck. They can't be everywhere at once to instantly provide decisions; and if individuals realize they can bottleneck a decision by demanding that the Executive Sponsor is the only one who can make it, they will use that as a weapon to delay and prevent change. It needs to be the responsibility of the Executive Sponsor to put forward a Decision Making & Conflict Management framework that can work when they are unavailable. It needs to be a framework that can be learned and used by the lower levels. Ideally, at the implementation team level.

The timeliness of decision making is a key factor in the overall success of a project's outcomes (Decide & Deliver). ProSci's book describes a world where Change Management is a critical factor in the success of a change. In the world that it creates, it doesn't describe what success is; nor does it give any weights or measures on how it evaluates success (you'll need to buy another book for that information). But, from what I've seen in the world that I live in, success has many components. One of the most important is speed of delivery, which is directly related to speed of decision making. When making a decision, you want to make a good decision, but what you need is to make a decision. If you wait to have all the information in order to make a high-quality decision, or wait to have everyone's approval, you aren't going to make a decision in a timely manner. For many decisions, you don't need perfect confidence that there is no risk; you just need to be able to reverse the decision quickly if you choose the wrong path. If you can recognize that a decision is a two-way door, one whose direction you can reverse, then just choose a direction and go. You can always reverse it if you choose wrong. (High Velocity Decisions)

It's very difficult to get teams to change their thinking when they don't believe that a decision can be reversed easily. It's a practice that you have to show is possible, and show frequently, in order to build up confidence in the ability to change directions. You need to build an environment where change occurs daily in order to change people's mindsets about what's possible. To do this, encourage your teams to allow change on a daily basis and to become comfortable with experimentation.

Is Using Rewards-Based Reinforcement the Best Way Possible?

ADKAR stands for Awareness, Desire, Knowledge, Ability, Reinforcement. These are the 5 steps they prescribe for viewing and understanding The People Side of Change.

In Chapter 5 (pg 87-88), the book gives this definition for reinforcement:

Reinforcement - the organization encourages and rewards successful change through its culture, values and initiatives; support of change competency is reinforced and resistance to change is identified and managed; change is part of “business as usual”.

I immediately had difficulty accepting this definition because I believe that long-term reinforcement is not created through rewards. The science on whether behavioral change sticks around after a reward-based reinforcement is removed is not entirely clear. Behavior Modification is sometimes referenced to support the argument that the behavioral change will stick, but there are studies showing that once the reward disappears, the behavior disappears with it (Todd, 2013 or Schedules of Reinforcement).

For me, a behavior sticks around if the person believes in the value that the behavior provides them. In the book they call this "What's In It For Me?" (WIIFM). I believe that, as a manager, you have to demonstrate the value that the change will provide to your "front-line employees", and once they recognize the value themselves, they will self-perpetuate the change in behavior. Rewards can be used between the Awareness and Desire stages in order to encourage people to try out a change; but the change has to create enough value for the employee on its own for that person to keep doing it.

My second thought is something I’ve already beaten to death in this post: what are the methodologies to implement “resistance to change is … managed”? Especially in the context of “Expect some employees to never support the change”, what then?

Change needs to become an expected part of everyday work. As managers, you should strive to implement continuous improvement that comes in small, incremental changes occurring every day. Managers need to coach their teams to embrace the Lean principles of using the scientific method: elicit concerns about a change, develop ways to transform those concerns into measurable tests, allow the change to occur within a trial context, and use the results and feedback to either refine the change, accept it, or reject it (The Lean Startup).

Instead, what I continuously see around me is the mere mention of a concern being enough to stop any change from occurring at all, and unsubstantiated concerns being given legitimacy as an acceptable reason to prevent any experimentation. This is why I believe that having a Decision Making and Conflict Resolution framework is so important. It's also a critical component of making fast decisions, which is the most impactful aspect of decision making for achieving business outcomes.

---------

(Change Management's Appendix D follows the section entitled "What are the consequences of not changing?" (quoted above) with these sections, which give a good balance to the overall points that the authors are trying to communicate.)

What are the benefits of supporting the change?

The benefits of supporting the change, especially a change that is critical to the success of the organization, include:

  • Enhanced respect and reputation within the organization
  • Improved growth opportunities (especially for active supporters of the change)
  • Increased job satisfaction (knowing you are helping your organization respond effectively to a rapidly changing marketplace)
  • Improved job security

What if I disagree with the change or feel they are fixing the wrong problem?

Be patient. Keep an open mind. Make sure you understand the business reasons for the change. However, don’t be afraid to voice your specific objections or concerns. If your objections are valid, chances are good they will come to light and be resolved. If you feel strongly against a specific element of the change, let the right people know and do it in an appropriate manner.

I find the wording above to be understanding, supportive, and helpful; but it doesn’t come with a guide on how to handle disagreement and conflict when “[valid objections] come to light and [must] be resolved”.

XProj/SDK … or use `<OutDir>`

on Monday, August 31, 2020

In the previous post, Use XProj/SDK project files with ASP.NET Web Apps, I described creating a custom AfterBuild Target which would copy the files from the standard `/bin/{configuration}/{architecture}` dll output location to the standard `/bin` webapp output location.

A coworker pointed out that the output location can be controlled with the `<OutDir>` build property, which is way easier to use. So, here's an updated .csproj file:
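A minimal sketch of what that file can collapse down to, assuming a plain .NET 4.8 web app (the `<OutDir>` property is the interesting part; the rest is placeholder):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
    <OutputType>Library</OutputType>
    <!-- Redirect the build output from bin/{Configuration}/net48 straight to /bin,
         which is where IIS / IIS Express expects a web app's assemblies. -->
    <OutDir>bin\</OutDir>
  </PropertyGroup>

  <ItemGroup>
    <Reference Include="System.Web" />
  </ItemGroup>

</Project>
```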

Use XProj/SDK project files with ASP.NET Web Apps

on Monday, August 24, 2020

The new SDK-style project files (sometimes called VS2017 project files, and at one point referred to as xproj files) were not designed to work with ASP.NET Web Application projects. If you're looking to use the newer SDK project files, then Microsoft is hoping you'll use them with ASP.NET Core web apps. However, the SDK project format is so much easier to work with than the older style that it's painful to go back to the old files and their associated packages.config files once you've moved to the new style.

So, if you were to convert a .NET 4.8 Web App's .csproj file to an SDK-style project file, what problems would occur?

  • You can't target a web app as an output type with the SDK-style project file. The closest you have is the ability to target framework net48 with a library/dll output type (the default type).
  • I think that might be it?

How do you overcome that challenge?

  • If your output type is a library/dll and you set your target framework to net48, then the build will create an output directory at /bin/{Debug|Release|Xxxx}/net48 which contains all the dlls and other references that would normally have gone into the web app's /bin folder. So, you are producing the files that you need.
  • You just need to copy those files into the root /bin folder for IIS/IIS Express to run the website normally. To do that, you can add a "CopyToBin" Target to your .csproj file. This target will run after the build completes.
  • After that, you will want to directly modify the .csproj file to associate files which are commonly grouped together, such as Web.*.config files.

Here is an example:
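(This is a trimmed-down sketch rather than a full project file; the CopyToBin target and the Web.*.config grouping are the parts that matter.)

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
    <OutputType>Library</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <Reference Include="System.Web" />
  </ItemGroup>

  <ItemGroup>
    <!-- Group the transform files under Web.config, like the old project format did. -->
    <None Update="Web.Debug.config">
      <DependentUpon>Web.config</DependentUpon>
    </None>
    <None Update="Web.Release.config">
      <DependentUpon>Web.config</DependentUpon>
    </None>
  </ItemGroup>

  <!-- Copy everything from bin/{Configuration}/net48 up into /bin after each build. -->
  <Target Name="CopyToBin" AfterTargets="Build">
    <ItemGroup>
      <BuildOutput Include="$(OutputPath)**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(BuildOutput)"
          DestinationFolder="$(MSBuildProjectDirectory)\bin\%(RecursiveDir)"
          SkipUnchangedFiles="true" />
  </Target>

</Project>
```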

Unfortunately, doing this only makes things work on your local machine; it won't really help your build process. If you use a third-party tool to do builds for you, you'll need to create a custom script which runs after your build completes, but before the results are packaged for deployment. This would need to be a custom solution for your environment, but the basic outline would look something like this:

  • Have your build system check that the .csproj file is (a) building a "web app" (however you define that), (b) a net4X application, and (c) using an SDK-style csproj file.

    With that many checks needed before performing an action, you know this isn't a great idea.
  • Once verified, you’ll want to copy all the normal content files from the source code to a designated output location (css, js, imgs, views?) and then recreate the /bin directory using the output from the build.
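To make that second bullet concrete, here's a rough PowerShell sketch of the copy step; every path, folder name, and configuration in it is a placeholder for whatever your environment actually uses:

```powershell
# Sketch only - paths, folder names, and configuration are placeholders.
$project   = 'src\MyWebApp'
$artifacts = 'artifacts\MyWebApp'

New-Item -ItemType Directory -Path "$artifacts\bin" -Force | Out-Null

# Copy the static content the web server serves directly.
foreach ($dir in 'Content', 'Scripts', 'Views') {
    Copy-Item "$project\$dir" "$artifacts\$dir" -Recurse -Force
}
Copy-Item "$project\*.config", "$project\Global.asax" $artifacts -Force

# Recreate /bin from the SDK-style output folder.
Copy-Item "$project\bin\Release\net48\*" "$artifacts\bin" -Recurse -Force
```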

Microsoft.Extensions.DependencyInjection - ASP.NET

on Monday, August 17, 2020

There is a fantastic Stack Overflow answer on how to use Microsoft.Extensions.DependencyInjection inside of a WebAPI 2 project (ASP.NET Full Framework). While it's not cutting edge, it is a good middle-ground solution when rewriting an entire ASP.NET application to ASP.NET Core seems out of the realm of possibility.

I took the code snippet and broke it apart a little bit to create a reusable project to house it. It's not great, so I don't really think it's worth creating a GitHub repo or a NuGet package, but if you want to drop it into a project in your code base, it could help out.

Here’s an example usage in a .NET 4.8 ASP.NET MVC / WebApi 2 based project:
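Roughly, the wire-up happens in Global.asax.cs. The namespace and the commented-out registration below are placeholders, and DefaultDependencyResolver is the adapter from the Stack Overflow answer (shown at the bottom of this post):

```csharp
using System.Linq;
using System.Web.Http;
using DependencyInjection.AspNet.WebApi;           // the small library described below
using Microsoft.Extensions.DependencyInjection;

namespace MyWebApp
{
    public class WebApiApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            var services = new ServiceCollection();

            // Your own registrations go here, e.g.:
            // services.AddTransient<IWidgetService, WidgetService>();

            // Register the WebApi controllers themselves so the resolver can construct them.
            var controllerTypes = typeof(WebApiApplication).Assembly.GetExportedTypes()
                .Where(t => typeof(ApiController).IsAssignableFrom(t) && !t.IsAbstract);
            foreach (var type in controllerTypes)
            {
                services.AddTransient(type);
            }

            // Hand the container to WebApi through the resolver adapter.
            GlobalConfiguration.Configuration.DependencyResolver =
                new DefaultDependencyResolver(services.BuildServiceProvider());
        }
    }
}
```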

And it relies on a DependencyInjection.AspNet.WebApi library, which targets framework net48 (here's the .csproj):
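It's a tiny project file, something along these lines (the package versions are just indicative; use whatever matches your solution):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- The container itself. -->
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="3.1.8" />
    <!-- IDependencyResolver and friends for WebApi 2. -->
    <PackageReference Include="Microsoft.AspNet.WebApi.Core" Version="5.2.7" />
  </ItemGroup>

</Project>
```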

And here's the original Stack Overflow post's code, just slightly modified:
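At its heart it's an adapter that implements WebApi's IDependencyResolver on top of IServiceProvider. What follows is a from-memory sketch of that shape rather than a verbatim copy; see the linked answer for the real thing:

```csharp
using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;
using Microsoft.Extensions.DependencyInjection;

namespace DependencyInjection.AspNet.WebApi
{
    // Adapts Microsoft.Extensions.DependencyInjection's IServiceProvider
    // to WebApi 2's IDependencyResolver / IDependencyScope.
    public class DefaultDependencyResolver : IDependencyResolver
    {
        private readonly IServiceProvider _provider;
        private readonly IServiceScope _scope;

        public DefaultDependencyResolver(IServiceProvider provider)
        {
            _provider = provider;
        }

        private DefaultDependencyResolver(IServiceScope scope)
        {
            _scope = scope;
            _provider = scope.ServiceProvider;
        }

        public object GetService(Type serviceType) =>
            _provider.GetService(serviceType);

        public IEnumerable<object> GetServices(Type serviceType) =>
            _provider.GetServices(serviceType);

        // WebApi asks for a new scope per request; back it with a container scope.
        public IDependencyScope BeginScope() =>
            new DefaultDependencyResolver(_provider.CreateScope());

        public void Dispose() => _scope?.Dispose();
    }
}
```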

Powershell Range Operator Performance

on Monday, August 10, 2020

This is a truly silly experiment, but it caught my interest. I was discussing Iron Scripter Challenges with thedavecarroll, and he was using switch statements with range operators (PSGibberish.psm1):
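His module is worth a read on its own. As a simplified stand-in for the pattern (not his actual code), it looks something like this:

```powershell
function Get-WordClass {
    param([int]$Value)

    # Each branch builds its range with the .. operator and tests membership.
    switch ($Value) {
        { $_ -in 1..25 }  { 'noun';      break }
        { $_ -in 26..50 } { 'verb';      break }
        { $_ -in 51..75 } { 'adjective'; break }
        default           { 'adverb' }
    }
}
```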

What struck me as odd was the idea that the range operators might be calculating each of their ranges at runtime, on each execution of the function.

So, I ran a couple of experiments, and the range operators are pretty neat. Here's what I think (with no real definitive proof to support it) is happening with them:

  • Range operators used within switch statements that are contained within functions are cached.
    • It seems like when the function is JIT'd, the range operator's values are calculated and cached.
    • So, there's no reason to pre-calculate the values and reference them within the function.
    • And, if you do reference variables from outside the function, looking up variables that require a scope lookup can also be time-consuming. (Although, performance isn't why people turn to PowerShell in the first place.)
  • Range operators used within a switch statement outside of a function (in a plain code block) are not cached.

To determine this, I ran a series of tests against a function which focused on executing a switch statement that used range operators:
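Something along these lines; this is a simplified stand-in for the function under test, with arbitrary ranges and labels:

```powershell
# Simplified stand-in for the function under test: the switch's branches
# are driven entirely by range operators.
function Test-SwitchWithRanges {
    param([int]$Value)

    switch ($Value) {
        { $_ -in 1..10000 }     { 'low' }
        { $_ -in 10001..20000 } { 'mid' }
        { $_ -in 20001..30000 } { 'high' }
    }
}
```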

To determine how much time was spent making the function call and setting the $a variable, this function was used. This is noted as “Calling a Function Overhead”.
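Again, the code below is a simplified stand-in: the same signature, but with the switch removed, so only the cost of calling a function and binding its parameter remains.

```powershell
# Same shape as the function above, minus the switch. Measuring
# $a = Test-CallOverhead ... isolates the function-call overhead.
function Test-CallOverhead {
    param([int]$Value)

    $Value
}
```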

Switch Avg Execution Time = Total Avg Execution Time – Calling a Function Overhead

The results were:

The results indicate that both the range operator run inside of a function and the explicitly scoped cached values have about the same running time, which might indicate that when the function is JIT'd, it calculates the range operator values and caches them.

The large increase in running time between "Range Operator" and "Cached Values not in Func" might indicate that searching for variables outside of the function scope carries a relatively costly penalty by comparison.

And, finally, the range operator that was run outside of a function was most likely calculated on each execution. While relatively expensive, it's surprisingly fast. .NET uses 10,000 ticks per millisecond, so that's ~0.19 milliseconds for compilation and execution.

Full Test Script:
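What follows is a cut-down reconstruction of the harness rather than the exact original script; it just averages Measure-Command ticks over a pile of runs, using the stand-in functions from above:

```powershell
# Cut-down harness: average Measure-Command ticks over many runs.
$iterations = 10000

$switchTicks = 0
foreach ($i in 1..$iterations) {
    $switchTicks += (Measure-Command { $a = Test-SwitchWithRanges -Value 15000 }).Ticks
}

$overheadTicks = 0
foreach ($i in 1..$iterations) {
    $overheadTicks += (Measure-Command { $a = Test-CallOverhead -Value 15000 }).Ticks
}

'Total Avg Execution Time    : {0:N2} ticks' -f ($switchTicks / $iterations)
'Calling a Function Overhead : {0:N2} ticks' -f ($overheadTicks / $iterations)
'Switch Avg Execution Time   : {0:N2} ticks' -f (($switchTicks - $overheadTicks) / $iterations)
```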

Diagnosing Slow node builds on Win2016

on Monday, August 3, 2020

In a move from a Windows 2012 R2 build server to a Windows 2016 build server, the nodejs build step nearly doubled in its execution time. This seemed odd, since everything else was pretty much the same on the new server. So, what could the difference be?

Fortunately, a coworker pointed me towards the Windows Performance Recorder from Microsoft's Assessment and Deployment Kit (Windows ADK). This worked really well in troubleshooting the issue, and I just wanted to drop in some screen grabs to show its visualizations.

The build was on-premise, so I did have access to install the kit** and control the execution of the Windows Performance Recorder to coincide with the execution of the problematic step. This would have been much more difficult on a hosted build server.

Getting the visualization comes by way of a two-step process.

  • First, Windows Performance Recorder is used to track analysis information from all over your system while the issue is occurring. You can track different profiles, or record more detailed information in particular areas through manual configuration.
  • Once the problem has been recorded, the analysis information can then be pulled up in Windows Performance Analyzer, which has a pretty nice interface.

First, here's a screenshot of Windows Performance Analyzer from the "dotnet publish" (i.e. npm init/build) step on the older Windows 2012 R2 server. In the screenshot, the step started by running node.exe and performing the init command, which would copy over the npm packages from the local npm-cache. This would take about 60 seconds to complete.

However, when performing the same build/same step on the new Windows Server 2016 instance, node.exe wasn’t the dominant process during npm’s init phase. Instead another process was dominant (greyed out in the screenshot), which ran for nearly the same length of time as node.exe and seemed to mirror the process. Because the other process was competing for CPU time with node.exe, the node process took nearly 200 seconds to complete (up from 60 seconds).

So, what was the other process?

MsMpEng.exe, aka Windows Defender, the classic anti-virus software. On the Windows Server 2016 image I was using, Windows Defender was pre-installed and doing its job.

I didn't take a screenshot of it, but using the Disk IO dashboard I was able to drill into what files MsMpEng.exe was reading, and something struck me as odd. It almost looked as if Windows Defender was virus-checking each file as it was read for the copy, and then again at the destination after the copy. I'm not sure if that's the case, but it did seem odd.

For the resolution, I added some path exclusion rules to the real-time file scanning capability of Windows Defender. These were specific paths used by the build system, where we know the files are coming from trusted sources. I still left real-time process scanning on, and also ensured the scheduled scans were set up, which would look through all the files.
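For reference, the exclusions can be added from PowerShell with the Defender module; the paths below are placeholders for whatever working and cache folders your build agent actually uses:

```powershell
# Placeholder paths - point these at your build agent's working and cache folders.
Add-MpPreference -ExclusionPath 'D:\BuildAgent\work', 'D:\BuildAgent\npm-cache'

# Confirm the exclusions took effect.
(Get-MpPreference).ExclusionPath
```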

The final result of adding the excluded paths reduced the overall time for the npm init section down to 80s (down from 200s, but also up from 60s on the old server; not bad), and MsMpEng.exe was still reporting that it was performing real-time virus scans on the process itself.

** A quick sidenote: The offline installer is kind of odd. To do the offline install, you run the online installer and direct it to download its files/resources to the same directory where the online installer's adksetup.exe is located. The next time you run adksetup.exe, it will detect that the files have already been downloaded and present a different set of options when it runs.

Best Practices should not be the end of a conversation

on Monday, July 27, 2020

Sometimes, Best Practices can be used as an end-all to a conversation. No more needs to be said, because Best Practices have laid out the final statement … and that doesn't really feel right.

Best practices weren’t always best practices. At some point a new technology came around and people started working with it to create their own practices. And the practices that worked stuck around. Over time, those practices might be written down as suggested practices for a particular technology stack. And, when coming from the authoritative source for a technology stack, they might be labeled as Best Practices.

But, usually, when I hear Best Practices used as an end-all to a conversation, it's not in reference to a particular technology stack. It's used as a generalization, as broad guidance for approaching an area. The guidance is supposed to help people who haven't done something before start off in the right direction. It's supposed to be a starting point. And I think you're supposed to continue to study the usage of those practices, to determine what the right practices are for your environment and your technology stack. Maybe even set up criteria to evaluate whether a practice is working successfully in your environment. And then change a practice if it doesn't meet your needs.

That isn't a trivial thing to do. You have to first understand where the practices came from and what they were accomplishing. But, once you do, you should be able to see where their limitations are and where they can be expanded. Sometimes a technology stack wasn't available when a practice was written, and that changes the possible ways a desired outcome can be achieved. To change a practice, you have to be knowledgeable about the outcomes you're trying to achieve and the pitfalls that come with them, and then make a decision based on the trade-offs of moving to a new practice.

The only way to create a new practice is if Best Practices are the start of a conversation, not the end of one.

(Maybe we could also drop the word “Best”, and just make them practices?)

