Copying Build Task Groups with VSTeam

on Monday, December 17, 2018

So, we finally got around to updating our VSTS / Azure DevOps XAML-based builds to Pipeline builds (the slick web-based builds). This effort is just in time to get the functionality switched over before XAML builds get disabled on Azure DevOps. From Brian Harry’s Blog:

By the end of 2018, we will remove all support for XAML builds in all Team Services accounts.  By that time, all customers will need to have migrated to the newer build system version because their XAML builds can no longer be run.  We will continue to support the web based experience for viewing previously completed XAML based builds so that you have access to all your historical data.

The process of converting over these builds has been tremendously helped by the excellent open source VSTeam PowerShell module (GitHub). The creator of this module, DarqueWarrior (Donovan Brown), is amazingly talented and particular in his development practices. And, I love him for it.

Using his / his team’s module as the underlying framework, it was pretty quick and easy to build out a little additional functionality to copy Task Groups from one project to another. I would love to contribute the addition back to the project, but I just don’t have the time to put together the meticulous unit tests, follow the excellent coding standards, and integrate it into the underlying provider. I’m still in a race to get these builds converted before they get turned off.

So, here’s a quick gist of building some copying functionality on top of VSTeam:
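
If you can’t get to the gist, the core idea is roughly this sketch: pull every Task Group out of the source project and POST it into the target project through the distributedtask/taskgroups REST API. This is not the gist itself; the organization, PAT, and project names below are placeholders, and the api-version is an assumption.

# Rough sketch: copy all Task Groups from one Azure DevOps project to another.
# The organization, PAT, project names, and api-version are placeholders/assumptions.
$org     = 'your-organization'
$pat     = 'your-personal-access-token'
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$sourceProject = 'OldProject'
$targetProject = 'NewProject'
$apiVersion    = '4.1-preview.1'
$baseUrl       = "https://dev.azure.com/$org"

# Pull every task group definition from the source project
$getUri     = "$baseUrl/$sourceProject/_apis/distributedtask/taskgroups?api-version=$apiVersion"
$taskGroups = (Invoke-RestMethod -Uri $getUri -Headers $headers -Method Get).value

# Post each definition into the target project
$postUri = "$baseUrl/$targetProject/_apis/distributedtask/taskgroups?api-version=$apiVersion"
foreach ($group in $taskGroups) {
    $body = $group | ConvertTo-Json -Depth 100
    Invoke-RestMethod -Uri $postUri -Headers $headers -Method Post -ContentType 'application/json' -Body $body | Out-Null
    Write-Host "Copied task group '$($group.name)'"
}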

How much overhead does Apigee add to a request?

on Monday, December 10, 2018

So, I got asked this question earlier today and was surprised that I never memorized an answer.

It’s definitely dependent on your configuration, but for our purposes it looks like it’s about 60 ms. And, it’s probably less than that (see below).

[image: Apigee Trace view of the overall request timing]

The total time shows 267 ms, but the response is actually sent back to the client around the 220 ms mark. Of those 220 ms, about 162 ms is spent making the round trip to the application server to process the request. Below is a more detailed breakdown. But, you should be aware that many of the 1 ms values listed below are actually < 1 ms, so the totals are probably lower than the values quoted.

[image: detailed breakdown of per-policy timings in the Apigee Trace tool]
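
If you want to sanity check a number like that without the Trace tool, one crude approach is to time the same call through the proxy and against the backend directly and compare the averages. The URLs below are placeholders, and this measures your network path as much as Apigee itself, so treat it as a rough estimate.

# Rough comparison of proxied vs. direct response times (URLs are placeholders).
$proxyUrl   = 'https://your-org-prod.apigee.net/demo/v1/ping'
$backendUrl = 'https://backend.example.com/demo/v1/ping'

function Get-AverageMs ($url, $count = 20) {
    $times = 1..$count | ForEach-Object {
        (Measure-Command { Invoke-WebRequest -Uri $url -UseBasicParsing | Out-Null }).TotalMilliseconds
    }
    [math]::Round(($times | Measure-Object -Average).Average, 1)
}

$proxyAvg   = Get-AverageMs $proxyUrl
$backendAvg = Get-AverageMs $backendUrl
"Proxy: $proxyAvg ms   Direct: $backendAvg ms   Approximate overhead: $($proxyAvg - $backendAvg) ms"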

MyGetUcsb – Unlisting and Soft Deletes

on Monday, December 3, 2018

In a previous post about moving packages between feeds in MyGet, I introduced a Remove-MyGetPackage command. The command’s default behavior was to perform a “hard delete,” which completely removes the package from the feed rather than unlisting it. That made sense in the context of a Move-MyGetPackage command, because you want the package to move from one feed to the other.

But, it introduced some other problems when actually using it to Delete packages.

Sometimes, the day after you publish a package, you find a bug, do a quick fix, and publish a new version of the package. In those scenarios, I thought it was okay to delete the old package because anyone who had downloaded the old version would just update to the new version when they found it was no longer available in the feed. This turned out to be way more difficult than I thought.

Here’s what I didn’t understand:

The way NuGet updates are designed, they need to have the original package available to them before doing an update.

So, if you download someone else’s source code and try to build it, you can run into a “Package Not Found” error during the process. This might happen because I deleted that version of the package. My assumption was that the person who downloaded the code would check MyGet, see that a newer version of the package is available, and run Update-Package to get the new version. However, this is where the problem lies.

Update-Package requires the previous version of the package to be available before it will perform the update. And since the package doesn’t exist anymore, it can’t do that.

And this is by design. As such, the NuGet API makes the delete operation a soft delete (unlist). I was overriding the original designers’ intentions by defaulting to a hard delete. So, I did two things to get back in line with the original designers’ game plan:

  • Added an -Unlist parameter to Remove-MyGetPackage
  • Added a wrapper function called Hide-MyGetPackage (that’s the best verb I could find for “Unlist”); a rough sketch of the wrapper is below.
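
Neither function is shown here, but the wrapper amounts to something like this hypothetical sketch; the real MyGetUcsb parameter names may differ.

# Hypothetical sketch of the Hide-MyGetPackage wrapper; the real MyGetUcsb
# implementation and parameter names may differ.
function Hide-MyGetPackage {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string] $Feed,
        [Parameter(Mandatory)] [string] $PackageId,
        [Parameter(Mandatory)] [string] $Version
    )

    # "Hide" is just an unlist: delegate to Remove-MyGetPackage with -Unlist so the
    # package drops out of search/listing but stays available to Update-Package.
    Remove-MyGetPackage -Feed $Feed -PackageId $PackageId -Version $Version -Unlist
}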

VSTS moves to Azure DevOps

on Monday, November 26, 2018

A few weeks ago Microsoft announced the rebranding of Visual Studio Team Services (VSTS) as Azure DevOps. One of the things that will be coming down the pike is a change to the DNS host name for VSTS.

Org URL setting

The change will take an organization URL from https://{org}.visualstudio.com/ to https://dev.azure.com/{org}.

It’s not a big switch, but one that you need to plan for. We’re looking to do these things:

  • Update Firewall Rules
  • Update Build Agents
  • Update Your Visual Studio Source Control Settings
    • This is really simple and straightforward
  • Update Your Team Projects to use the new source control address (TFSVC)
    • We’re not yet on git, so this is one for the slow pokes

Here is a script to help find all your .sln files and update them to the new address.
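
If you can’t get to the gist, the idea amounts to a find-and-replace across the solution files, something like this sketch (the organization name is a placeholder; adjust the replacement if your bindings also carry a DefaultCollection segment):

# Find every .sln under the current directory and point its TFVC bindings at dev.azure.com.
# The organization name is a placeholder.
$org    = 'your-organization'
$oldUrl = "https://$org.visualstudio.com"
$newUrl = "https://dev.azure.com/$org"

Get-ChildItem -Path . -Filter *.sln -Recurse | ForEach-Object {
    $content = Get-Content -Path $_.FullName -Raw
    if ($content -match [regex]::Escape($oldUrl)) {
        $content -replace [regex]::Escape($oldUrl), $newUrl | Set-Content -Path $_.FullName -Encoding UTF8
        Write-Host "Updated $($_.FullName)"
    }
}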

Adding a /deploycheck to all Apigee API Proxies

on Monday, November 19, 2018

Apigee has a way to perform healthchecks against Target Servers in order to ensure that requests are routed to a healthy application service. But, what about this rare scenario: an API Proxy is being replaced/updated, the new API Proxy never gets deployed to ‘prod’, and the prod endpoint no longer has an API Proxy handling requests for it.

In the scenario where the API Proxy is accidentally not deployed to ‘prod’, the only way to test for the mistake is using an outside tester. And, there are a lot of services out there that can provide a ping or healthcheck to do that.

In all of those scenarios, you will need the API Proxy to respond back that it’s up and running. In this particular scenario (“Is the API Proxy running in prod?”), we don’t need a full healthcheck. All we need is a ping response. So …

Here’s a quick way to add an /upcheck (ping response) endpoint onto every API Proxy using the Pre-Proxy Flow Hook. To do this …

  • Create an /upcheck response shared flow (standard-upcheck-response)
  • Create a standard Pre-Proxy shared flow that you can add other shared flows to and remove them from.
  • Set up the standard Pre-Proxy shared flow as the Pre-Proxy Flow Hook.

Create the standard-upcheck-response shared flow

Create the standard-preproxy shared flow to plan for future additions to the flow hook

And finally, set up the flow hook

[image: the Pre-proxy Flow Hook configured with the standard-preproxy shared flow]
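
If you would rather script that last step than click through the UI, the Edge management API can attach a shared flow to a flow hook. Here’s a rough sketch; the org name, environment, and credentials are placeholders, and it assumes the standard-preproxy shared flow created above.

# Attach the standard-preproxy shared flow to the Pre-proxy Flow Hook through the
# Apigee Edge management API. Org, environment, and credentials are placeholders.
$apigeeOrg   = 'your-org'
$environment = 'prod'
$cred        = Get-Credential   # an Apigee Edge account with access to the org
$pair        = '{0}:{1}' -f $cred.UserName, $cred.GetNetworkCredential().Password
$headers     = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair)) }

$uri  = "https://api.enterprise.apigee.com/v1/organizations/$apigeeOrg/environments/$environment/flowhooks/PreProxyFlowHook"
$body = @{ sharedFlow = 'standard-preproxy' } | ConvertTo-Json

Invoke-RestMethod -Uri $uri -Method Put -Headers $headers -ContentType 'application/json' -Body $body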

Apigee CORS Headers During API Key Failure

on Monday, November 12, 2018

In a previous post, I mentioned sending OPTIONS responses so Swagger UIs can call a webservice without getting an error.

Unfortunately, there’s a second scenario where Swagger UI can conceal an error from being displayed because the error flow doesn’t include CORS headers.

The Problem Scenario

If your API Key doesn’t validate, the Verify API Key policy generates an error and returns an error message to the Swagger UI in the browser without any CORS headers attached. This is what it looks like …

You’re in the browser, you ask the Swagger UI to send a request with a bad API Key, and you get back a “TypeError: Failed to fetch” message. And, when you look at the console, you see that no ‘Access-Control-Allow-Origin’ header is present.

[image: Swagger UI showing the “TypeError: Failed to fetch” message and the console CORS error]

When you switch over to the network view, you can see that the initial OPTIONS response came back successfully. But, you actually got a 401 Unauthorized response on your second request.

[image: network view showing the successful OPTIONS request followed by a 401 Unauthorized response]

If you look further into the second request, you will find the error response’s headers don’t contain the Access-Control-Allow-Origin header.

[image: the 401 response’s headers, with no Access-Control-Allow-Origin header present]

If you then pull up the Trace tool in Apigee, you can see that the Verify API Key policy threw the error and the request returned before having any CORS headers applied to it.

[image: Apigee Trace showing the Verify API Key policy raising the fault before any CORS headers are applied]

How to Fix This

So, what we need to do is add CORS headers onto the response before it’s sent back. And, to do that, we can use the Post-proxy Flow Hook. Flow hooks like this are generally reserved for logging tasks, but we are going to use this one to add headers.

[image: the Post-proxy Flow Hook configured with the CORS shared flow]

This Post flow will now add all of the headers on every response. So, the Apigee Trace tool’s output now looks like this:

[image: Apigee Trace showing the CORS headers being added in the post-proxy flow]

Which will now send the CORS response headers to the browser:

[image: the response headers now including Access-Control-Allow-Origin]

And that will result in the real error message appearing in the Swagger UI Tester:

[image: the real error message appearing in the Swagger UI tester]
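
You can also verify the fix outside of the browser. A quick check with a deliberately bad key should now show the Access-Control-Allow-Origin header on the 401 response; the proxy URL and API key header below are placeholders for your own setup.

# Confirm the 401 error response now carries CORS headers.
# The proxy URL and API key header are placeholders for your own setup.
try {
    Invoke-WebRequest -UseBasicParsing -Uri 'https://your-org-prod.apigee.net/demo/v1/widgets' -Headers @{ apikey = 'deliberately-bad-key'; Origin = 'https://example.com' }
}
catch {
    $response = $_.Exception.Response   # Windows PowerShell 5.1: a System.Net.HttpWebResponse
    "Status: $([int]$response.StatusCode)"
    "Access-Control-Allow-Origin: $($response.Headers['Access-Control-Allow-Origin'])"
}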

The Shared Flow used in the pictures above is somewhat overdone. Here is a much simpler Flow Task modeled after the previous post on the topic. This would be quick and easy to set up:

MyGetUcsb – Move a package between feeds

on Monday, November 5, 2018

MyGet is pretty dang cool, but the delete functionality was a little surprising. Specifically, this is the delete functionality through the NuGet API. The delete functionality through the website’s UI is fantastic and really easy to follow.

The NuGet team put together great documentation on why a delete operation is considered to be an “unlist” operation. They even have policy statements about it. The weird part is that even though the standard DELETE operation should unlist the package in MyGet, my experimentation didn’t show that happening. Instead, the package stayed listed.

But, I have diligent co-workers who were able not only to get the package unlisted, but also to find out how to do a hard delete. I’m not sure how they found out about ‘hardDelete=true’, but if they found it by reading deeply into the sample code provided by MyGet, then I am truly impressed.

The code sample demonstrates functionality that is also available as the Move-MyGetPackage function in the MyGetUcsb PowerShell module.
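
The sample isn’t reproduced here, but the hard delete itself boils down to a single HTTP DELETE against the feed’s NuGet v2 endpoint with MyGet’s hardDelete flag, roughly like this sketch (the feed name, package id, version, and API key are placeholders):

# Rough sketch of a hard delete against a MyGet feed's NuGet v2 endpoint.
# The feed name, package id, version, and API key are placeholders.
$feed      = 'your-feed'
$packageId = 'My.Package'
$version   = '1.0.0'
$apiKey    = 'your-myget-api-key'

$uri = "https://www.myget.org/F/$feed/api/v2/package/$packageId/${version}?hardDelete=true"
Invoke-RestMethod -Method Delete -Uri $uri -Headers @{ 'X-NuGet-ApiKey' = $apiKey }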

