Some ideas on feature flags

In the last weeks I have had several interesting conversations about feature flags, so I just wanted to do a little brain dump of my ideas/views on them. These are my particular views at this moment, and they can change :). Also, I have not heavily used them in production yet; I have tried them in production, but only in small scenarios, so this is what I have learned and discussed with people who have been using them a lot.

First things first: for me, feature flags are all about experimentation and testing hypotheses. I only use feature flags which are short-lived, so the code related to the flag itself does not stay around for long; usually they are just “if”s or lambdas. Once the hypothesis is tested I clean them up: if the hypothesis for a particular feature works, the rest of the code remains, and if not, the rest of the code is cleaned up too. So when using a feature flag, it is important to track its life cycle so no “dead code” is left behind.
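
To make it concrete, this is the shape of flag code I mean; a minimal PowerShell sketch, where Get-FeatureFlag and the Invoke-* helpers are hypothetical names just for this example:

# Short-lived flag guarding an experiment; Get-FeatureFlag is a hypothetical helper reading the flag state
if (Get-FeatureFlag -Name "new-search") {
    Invoke-NewSearch    # the hypothesis being tested
}
else {
    Invoke-OldSearch    # current behavior, cleaned up if the hypothesis works
}

Once the decision is made, the whole “if” goes away and only the surviving branch remains.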

The next thing is almost obvious: if we are testing an experiment or a hypothesis, how and when do we know whether it worked? We must work with business on these decisions and ask the stakeholders and the team that question. This leads us to something crucial: which metrics do we need to track for the feature we are experimenting with? So be sure to track the usage and provide good metrics, and a way to check them, so you can make a decision about the feature. Deciding on the metrics, and on when to go ahead with the feature, is probably harder than the implementation.

How will we handle activation/deactivation? Some features can even be enabled/disabled by the users themselves, but not always, so we have to decide how to do it: maybe we will enable them based on percentages, geolocation, or even automatically based on metrics. So when we adopt a feature flags system, be sure to know how you are going to activate flags during their life. For me this goes hand in hand with the deployment of our application, so we must support enabling/disabling flags not only via a UI or configuration files, but also from outside systems which can interact with them and with the chosen roll-out mechanism, for example via APIs or similar mechanisms.

Last but not least, visibility. I started by saying I prefer short-lived flags, so: how many flags do we have in our code? What is their state (enabled/disabled) for our user base? Keep them tracked with whatever mechanism you can provide so no “dead code” is left behind.

Just for you to check out a couple of feature flags systems: if you are on .NET Core you have this library, created by the Xabaril team, which is interesting, and I know they are working on improving it. Also, if you are looking for commercial products, LaunchDarkly is probably one of the feature flags systems growing fastest in use and capabilities.

Feel free to comment on this via comments or just reach out to me on Twitter.

Opinion: the what and why of YAML Azure Pipelines

In the last days I have had several conversations around YAML Azure Pipelines, with some people asking me why they should go from the old “moving boxes” mode to YAML Azure Pipelines. So here goes my opinion on this. Disclaimer: this is just an opinion post; don’t expect instructions here, just my one (not even two) cent.

What are YAML pipelines? This is the new model for build pipelines: instead of using the visual designer, we go directly with configuration as code, using the YAML language to define our build process. YAML has been around for some time; it is widely used by Docker, Kubernetes, and other build systems like AppVeyor, so let’s just say it is a standardized way to define “things” in code. So it seems like the natural evolution for our builds, as more and more people and companies are going this way, making it a “natural” language to express them. Is it difficult? For sure it has its problems, usually around spaces and tabbing issues, but more and more tools (there are plugins for VS Code and other tools, for example) come to our help.

Why YAML pipelines? Configuration as code. Our code evolves, and our build configuration usually evolves too, not at the same pace for sure, but it evolves, so we must keep configuration tied to code, and what better method than representing our configuration as code? So we end up storing the way we build right next to the code (another opinion: keep your YAML builds in the same repo as your code). When we make changes to our code structure, adding or removing projects, or new ways to build, the way to build travels with the code; if we need to build an older version, we will have the exact build configuration right with it, so we won’t have to do fancy things like recovering an old version of the build from the definition history as with the “boxes” approach.
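
To make this less abstract, a minimal azure-pipelines.yml could look something like this (just a sketch; the agent pool and the build task are placeholders for whatever your process needs):

pool:
  vmImage: 'vs2017-win2016'

steps:
- task: DotNetCoreCLI@2
  displayName: 'Build all projects'
  inputs:
    command: 'build'
    projects: '**/*.csproj'

This file lives in the repo, so it is versioned, branched, and reviewed exactly like the rest of the code.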

Is it ready to use? While I can agree the documentation is sometimes not up to date, we should really go with YAML builds; it is easy to start with. In any build definition we can get the YAML by clicking this button:

image


And also, for each of the tasks we have in an old model definition, we can export the YAML too, so we can start learning from there:

image

I’m not saying you should go right away and modify all your existing builds to move them to YAML, but for the next build definition you create, go with YAML and start using configuration as code.

Environment variables and Azure Pipelines

Some days ago, talking with Sergio Navarro (if you don’t follow him, you should), he raised a question about setting environment variables in an Azure Pipeline so a container can use them in the build. The answer was easy, and the great news is: all variables declared in an Azure Pipeline are exposed as environment variables in the agent, and, not only that, all environment variables in an agent can be retrieved as simple variables in the pipeline.

This is easier to understand with a very simple example. I’m using a Windows agent and just plain PowerShell, but this can be adapted to Linux/Mac agents using bash; just one important thing to take care of: Linux/Mac environment variables are case sensitive, while Windows ones are not.

We have this YAML build (hmmm the coloring on the second line is not correct):

image
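
Something along these lines (a sketch reconstructing the screenshot; remember the Demo variable is defined in the pipeline properties, not in the YAML itself):

pool:
  vmImage: 'vs2017-win2016'

steps:
- powershell: |
    Write-Host "$(PATH)"     # an agent environment variable, read with pipeline variable syntax
    Write-Host "$env:DEMO"   # the Demo pipeline variable, read as an environment variable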

It is just a simple build with one PowerShell task, printing out the usual PATH environment variable using the common syntax of pipeline variables, $(variable-name), and then printing out a pipeline variable as an environment variable. We also configured the variables in the properties of the Azure Pipeline with a Demo variable:

image

When we run the build, we obtain the following output:

image

Check it out:

  • The first Write-Host printed out the PATH environment variable without any problem, using it as if it were a simple pipeline variable.
  • The second Write-Host printed out the content of the Demo pipeline variable, but using it as a computer environment variable on the agent.

With this simple demo I hope it is clearer, and you are ready to use this feature in your Azure Pipelines.

Universal packages with Azure DevOps Artifacts

If you are already familiar with Azure Artifacts in Azure DevOps, you probably know about NuGet, Maven, or npm. But there is another, less known kind of artifact which is really useful: Universal Packages.

Surely you have needed to share a set of files with different projects; we are not talking about assemblies or Node.js packages, just plain files, like for example HTML files, images, or whatever-the-file. Some people will go with git submodules, for example, but we know this is usually painful. So here come Universal Packages (disclaimer: at the time of publishing this post, this feature is in public preview).

A Universal Package is just a type of package which stores any set of files and, when downloaded, just unpacks these files locally. Let’s imagine we have a couple of files in our repo which we want to share with other projects and repositories:

image

Of course these files will change over time (probably …), so we keep them in a repository, but we will also keep the Universal Package versioned, so we just create a new build definition in Azure Pipelines to publish the package, using the Universal Packages task which comes with the Azure Artifacts extension, with this definition (for this demo scenario):

image

In this task definition, apart from the obvious display name, we have several other values:

  • Command: Which command we want to execute against the Universal Package; it can be Publish, when we are publishing a new version of the package, or Download, to download it locally during the build to use it in another project.
  • Path to file(s) to publish: Points to the folder or particular file in the repository from which to generate the package. (If we are downloading, this changes to the folder in which to download the package files.)
  • Feed location: Whether to use this or another Azure DevOps collection.
  • Destination feed: Which feed we will use to publish the package.
  • Package name: Name of the package to publish. If it does not exist, a new one with this name will be created.
  • Version: How we want to generate the new version to publish; I highly recommend using SemVer package versioning.
  • Description: Just a description for the package listing.
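
By the way, if you prefer YAML definitions, the same task can be sketched like this (I’m writing the input names from memory of the extension, so double-check them; the feed and package names are placeholders):

- task: UniversalPackages@0
  displayName: 'Publish shared files'
  inputs:
    command: 'publish'
    publishDirectory: 'SharedFiles'          # folder containing the files to pack
    vstsFeedPublish: 'my-feed'               # destination feed
    vstsFeedPackagePublish: 'shared-files'   # package name
    versionOption: 'patch'                   # SemVer increment to apply
    packagePublishDescription: 'Plain shared files'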

Once the build pipeline has executed, if everything is correct we will see the Universal Package in the feed:

image

So, now, how do we consume it locally? For use in pipelines we have already seen that we can use the same task as for Publish, but with the Download command; so let’s see how to consume it locally. For this we will need:

  1. The latest version of the Azure CLI
  2. The Azure DevOps extension for the Azure CLI (install it with this command line: az extension add --name azure-devops)

Now we will log in to Azure DevOps from the Azure CLI; we have to run two commands (NOTICE: depending on the URL of your Azure DevOps subscription it can change to https://[your-org].visualstudio.com, more info):

  1. az login
  2. az devops configure --defaults organization=https://dev.azure.com/[your-organization] project=ContosoWebApp

Next step: if everything has gone correctly, we can download the package locally to a folder with this command line:
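
A sketch of that command (organization, feed, package name, and version are placeholders you will need to adapt):

az artifacts universal download --organization https://dev.azure.com/[your-organization] --feed [feed-name] --name [package-name] --version [version] --path .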

And the package will be downloaded locally, with the version you just set, into the folder, showing the two files we published.

Try it out and let me know how it goes; I find it particularly useful to avoid sharing files via submodules or whatever other tricks, and to keep them versioned.

More info here.

Azure DevOps: from Hosted XML process to inherited process

When migrating a Team Foundation Server / Azure DevOps Server from on-premises to the cloud, it is common to have made customizations to the process templates which can’t be migrated directly to any of the existing templates in Azure DevOps Services.

When this happens, for every project with customizations a new Hosted XML process is created in the Azure DevOps Services destination. If you navigate to https://dev.azure.com/organization/_settings/process?_a=all you will see something like this, with the list of all the projects:

image

As you can see in the figure, we have the three basic process templates of the inherited process model and then a line for every project with its own process; these are known as Hosted XML processes. To take full advantage of the new customization UI we can migrate from Hosted XML to the inherited process. But before proceeding, review this article first, as some of the process customizations that can be done in the old Hosted XML process are not preserved when migrating to an inherited process: https://docs.microsoft.com/en-us/azure/devops/organizations/settings/work/upgrade-support-hosted-to-inherited?view=vsts

The migration to an inherited process is done in two steps: clone the process as an inherited process, then move the existing projects to the new process.

Clone process

To clone the process, click on the ellipsis icon in the selected project and click on Clone process to inherited:

image

This will show a summary of what it is going to do, what will be migrated and what will be lost; one of the most important things which will be lost is any workflow restriction you set up in the old process, as the new process model allows Work Items to go from any state to any state:

image

When you click on Continue, it will ask for a name for the new inherited process template and which of the main templates should be set as its parent:

image

Just click on Ok and after a while we will have the new inherited process template:

image

Migrate projects

To finish the migration, we now have to apply the new inherited process to each of the projects we want. Click on the name of the Hosted XML process you want to migrate to see its details, and go to the Projects tab, where you should see the projects belonging to this Hosted XML process template:

image

If you click on the ellipsis, in the dropdown menu you will see an option to change the process (it is the only option available); if you click it, a screen will be shown asking you to select the destination template, where you will be able to select the previously cloned inherited process template. Just click Ok and you are done.

Now the project will be using the new inherited process template, and you can start making customizations with the new model, using the UI directly in Azure DevOps Services rather than the old XML customization process.

Hope this helps you.

Variable library groups and Azure Key Vault

In my previous post I explained how to use Azure Key Vault values in a particular Build/Release, but as Vinicius Moura posted on Twitter, there is another option using Libraries in your Team Project, and I wanted to give my opinion on when I use one option or the other.

Just to introduce it, Libraries are a way to create task variables which can be used across different Builds/Releases in a particular Team Project. To create a Library, go to the Build and Release hub in VSTS, click on Library, and then click on + Variable Group in the upper right corner.

image

Here we can define a new Library with its variables and values: just give the Library a name, to then use it in Builds/Releases, and create the variables.

image

We also have the option to link secrets from Azure Key Vault, as Vinicius pointed out. Just one thing: when you link a library to Azure Key Vault secrets, you can’t add new manual variables to that library. In the process of linking to an Azure Key Vault, you will need to select an Azure subscription and authorize VSTS both to access the subscription and the desired Azure Key Vault, so be sure you are using an account with the proper rights.

image

Then you can just click on + Add to add new variables from the Azure Key Vault since, differently from the Azure Key Vault task, it does not add the contents of the Vault automatically.

image

Once you have created the Library, with your custom variables or the Azure Key Vault connection, you can just use it in a Build or Release: in the Variables section when editing the definition, click on Link variable group and select the variable group.

image

When you link the Library group, you can start using its variables with the usual syntax in your Build/Release: $(myvariable)
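
If you are on YAML builds, the link can also be expressed in the definition itself; a sketch, assuming a version of the YAML syntax that supports variable groups, where MyLibrary and myvariable are hypothetical names:

variables:
- group: 'MyLibrary'

steps:
- powershell: Write-Host "$(myvariable)"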

So, now that I have explained Libraries, why and when to choose this or the task option? Well, for me it is just a question of how you are going to use it, and also of the related security. By default, if I need values which must be shared across different Builds/Releases, I would go for Libraries for sure. But when it comes to Azure Key Vault, it is used to keep things protected and secret; if you create a Library with its default permissions, you are sharing the possibility of using these secrets (and potentially leaking them) with any VSTS user with contributor permissions in that particular Team Project. So if you do not intend that, you must take care of setting the appropriate permissions on the Library. This is not a problem or anything; it is something you are supposed to do and take care of.

When you use the task, only people allowed to edit that particular Build will be able to use these secrets; of course, you have to take care of those permissions too.

So, in my opinion, if you don’t need to share the variables across several Builds/Releases, or you just want to keep them as restricted as possible, go with the task. If you want to protect a set of values that you want to re-use across Builds and Releases, go with Libraries.

Just remember, this is not a rule, this is just my opinion, and it is always subject to change a lot :)

Using Azure Key Vault Secrets for your VSTS Releases

DISCLAIMER: I’m assuming you have already created your Azure Key Vault and stored secrets/certificates in it; if you are not familiar with this (it is super easy), you can check this article: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-get-started

This is a short but, at least for me, very useful topic for keeping your deployment secrets more secure. Most of the time, keeping secrets as secret variables in your VSTS definitions is enough; at least they can’t be “human read” once the value is set.

But sometimes the person who keeps the secret won’t be the one editing the VSTS Release, and of course they are not going to tell anyone the secret either. If you have an Azure account (well, we are talking about VSTS, so probably you have one), you have Azure Key Vault, where you can store secrets and certificates securely in Azure; only users with the proper rights will be able to access them. But we also have a task in VSTS Releases (and Builds) which can retrieve these secrets (if your VSTS SPN connection has the proper rights) and use them as variables in your Releases/Builds:

image

How does this work? When using this task, we have to configure the Azure subscription connection, the Key Vault name, and a filter (comma separated values) for the names of the secrets/certificates we want to retrieve, or * to retrieve them all.

image

When it retrieves the secrets, it stores them in VSTS variables, which can be accessed in subsequent tasks with the usual format $(name). To be clearer, let’s say you have a secret in your Azure Key Vault called “sqlPassword”; when retrieved by this task, it will create a variable which we can use in subsequent tasks as $(sqlPassword). Of course, this task must be called before trying to use the variable.
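
For YAML definitions, the same task can be sketched like this (the connection and vault names are placeholders):

- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'my-azure-connection'   # the Azure endpoint configured in Services
    KeyVaultName: 'my-key-vault'
    SecretsFilter: 'sqlPassword'               # comma separated names, or * for all
# after this task runs, $(sqlPassword) is available to subsequent tasks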

But first of all, we have to explicitly give the SPN used by VSTS to connect to the Azure subscription permissions on the desired Azure Key Vault. This can be done through the portal or via this PowerShell:

# Log in to the subscription that contains the Key Vault
Login-AzureRmAccount -SubscriptionId <Subscription ID>;
# Get the service principal used by the VSTS endpoint and take its object id
$spn=(Get-AzureRmADServicePrincipal -SPN <Service Principal Id>);
$spnObjectId=$spn.Id;
# Grant that SPN permissions to get and list the vault secrets
Set-AzureRmKeyVaultAccessPolicy -VaultName <Vault Name> -ObjectId $spnObjectId -PermissionsToSecrets get,list;

In this PowerShell we have to change <Subscription ID> to the ID of the subscription in which our Key Vault exists; it must also be the same subscription we selected when configuring the task in the previous step.

We also need to change <Service Principal Id> to the ID of the SPN we are using to connect to Azure in the task configuration. To get it, you have to go to Services in VSTS:

image

Select the endpoint you are using to connect to Azure and select Update Service Configuration, so you will see the value:

image

image

As you can see we can also get the Subscription ID here if in doubt.

Possible problem: if you get this error when executing this PowerShell, you need to update to the latest version of Azure PowerShell (https://github.com/Azure/azure-powershell/issues/4953):

WARNING: Could not load file or assembly ‘Microsoft.Data.Services.Client, Version=5.6.4.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’

And that’s all. If you configured everything correctly, when you run the Release you will be able to access your secrets as VSTS Release/Build variables and use them across the Release/Build, so go and store your secrets securely in Key Vault.

Debugging deployed Azure Web Apps with VSTS Symbol Server

Obviously, debugging an already deployed application, no matter whether in Azure or any other environment, is something we must use as a last resort; I would always prefer to be able to reproduce the situation with a local test and correct it there. But sometimes we need to debug an already deployed application, and it is never as easy as hitting F5 in our Visual Studio. We must connect to remote processes, make sure we have the right version of the code and, more important, be able to match the binaries with the code we have, because surely you always build your applications as Release to deploy them (and if not … run, run and do it).

Having the right version of the application is easy: we have branches, tags, and other tools which allow us to locate the right code version. Attaching to remote processes is maybe a little more complex, but fortunately we have the Remote Debugging tools, and even more, with Azure Web Apps we can enable remote debugging directly from Visual Studio or the portal and connect automatically; we will see this later.

To make sure we match the binaries with the code, we have symbols, but we need the right symbols for the binaries, something we can get via a Symbol Server. This is something you can set up by yourself, but luckily we now have a Symbol Server included in VSTS; it is still in preview mode, but it is rather interesting to set up and worth it. Let’s start with how to set it up.

Disclaimer: I’m not digging into how to create a build definition or how to deploy an application to an Azure Web App using Release Management, so if you are not familiar with these kinds of tasks, get familiar with them first, or just leave me a comment if you find them interesting for future blog posts.

Disclaimer II: You will need Visual Studio 2017 updated to the latest version to do this.

Enabling the feature

As it is a preview feature, we must first turn it on for our account, or ask someone with the needed privileges to turn it on: we just click on our account icon at the top right, select Preview features, and enable it for the account.

image

But in this case this is not a stand-alone feature: it needs the Package Management extension from the Visual Studio Marketplace, which, remember, is not free, but the Symbol Server added to the Package Management extension is worth the price.

Publishing symbols

Once the account is set up, we can start publishing our symbols. Usually (if not … again … run) we have one build and n deployments, so this is something we do during the build. So let’s edit the build definition used to generate the artifacts for deployment, and add a new step after the build step: the Publish Symbols task.

image

By default the task is added as version 1.*, but we will select the 2.* (preview) version of the task, and in the Symbol server type parameter we select Symbol server in this account … I will remind you of the need for the Package Management extension.

image
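
If your build is defined in YAML, the equivalent step would be something like this (a sketch; the search pattern is a placeholder):

- task: PublishSymbols@2
  inputs:
    SearchPattern: '**/bin/**/*.pdb'   # which .pdb files to index and publish
    SymbolServerType: 'TeamServices'   # the “Symbol server in this account” option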

This is the only change we need to make in our builds or releases. So let’s go to the next step.

Configuring Visual Studio to consume VSTS Symbol Server

We must configure Visual Studio for a couple of things: instruct it to debug using symbol servers, and tell it which symbol servers to use. We will do both from the Tools / Options screen.

For the first one, we go to the Debugging section of the options and disable Enable Just My Code. Yes, disable it, so Visual Studio is instructed to debug external code.

image

Now, under Debugging / Symbols, click on the icon I show in the next picture, which will bring up the option to add a new Symbol Server from VSTS. You can leave the rest of the parameters with their default values.

image

On the next screen, just select the VSTS account in which you set up the Symbol Server, and after that close the Options screen.

image

Debugging the Azure Web App

If we have come this far, we have everything ready to start debugging. So, with the version of the code for which we published the symbols and deployed open in Visual Studio, we will attach to the Web App process for debugging. Be sure to enable the breakpoints you need for the debugging, and also notice you will impact any user of the application, so better do it in a slot or in an environment with no real users.

In the Visual Studio 2017 Server Explorer, make sure you are connected to the Azure subscription, and in the App Service list locate the resource group containing your app, locate the app (and the slot if you have one), right click it, and select Attach debugger.

image

This can take a while, but after the attach finishes it will automatically connect to the Symbol Server and get the debug symbols, and you are ready to go and start debugging your web app with the breakpoints and the debugging features of Visual Studio and Azure.

If you receive an alert like this one:

image

Remember to disable the Enable Just My Code option mentioned earlier in this same post.

Keep it clean

After debugging, there is something I like to do: go to the Azure Web App Application Settings and disable the remote debugging check. Next time you need to debug, following the previous steps, Visual Studio will re-enable it for you, but I just like to keep it off … just in case …

image

Hope you enjoyed this feature as much as I do.

Phased deployments with Release Management gates

When we are enabling continuous deployment in development teams, there are a lot of things we must take care of. First of all, enabling continuous deployment is not about throwing new code or features at the users regardless of their quality or the value they give; it is about enabling a continuous flow of value from development to the users.

For this we must ensure the quality and the impact of any new code we are going to deploy. Apart from the usual automated tests during builds and deployments, there is something a lot of companies do, which is phased deployment: deploying new changes to a particular subset of users until the new code has been “real life” tested enough to impact 100% of the users of your applications. This is something you have probably already experienced with, for example, Windows Insiders; Twitter deploys features to a particular subset of users, Facebook does the same, and even in VSTS you can opt in to new features before they become generally available to all users.

But one of the most important questions here is: how do you decide when to deploy to a broader set of users? And which mechanism are you going to implement to automate this?

These two questions can be resolved with the new (in preview at the moment of writing this) Release Management Gates feature. Gates are automated approvals we can set for any particular environment of a Release Management definition, which are evaluated automatically before or after any environment deployment; they are evaluated every specific period of time, until they pass, or until they time out if they fail every check.

Gates can be set, out of the box, on a variety of things, but new ones can be created, like this example based on Twitter sentiment:

  • Azure Function: The gate will call a particular Azure Function, send it a pre-defined message (defined in the gate configuration), and wait for the response, even being able to parse the response to check everything went OK.
  • Invoke REST API: Similar to the previous one, but calling any particular REST API.
  • Work Items Query: Checks whether a particular Work Item query has grown its count of items. Think of this, for example, with a bugs query, in which you decide the gate has failed if the bug count grows over a particular threshold.
  • Azure Monitor: The one I want to explain in this article. It checks one (or n) Azure defined alerts to see if the alert has been thrown. Think, for example, of an alert on performance degradation or number of errors in a particular environment, checked prior to deploying to a broader set of users.

Show me the code boxes

First of all, we must enable Gates in our preview features, so on your Visual Studio Team Services account, click on your profile and select Preview features.

image

And in the preview features panel (for the account or just for you), enable Gates.

image

We start with a normal Release Management definition with two environments, one dependent on the other in sequence; let’s say the first one is for early adopters and the next one is general availability. I agree this is a great simplification of any real environment, but it is enough for this example.

image

Now, let’s inspect the GeneralAvailability environment pre-deployment approvals by clicking on image, and enable Gates as a pre-deployment approval; clicking on Add, we can see the different choices we have. For this example we will go with Azure Monitor.

image

There we also have the delay before evaluation: the time between the previous environment deployment and the moment the first check of the gate is done. Once we add the new gate, we have to fill in all its information:

  • Display name for the gate.
  • Azure subscription connection; if we don’t have one already, we need to set up an Azure connection via Services.
  • The name of the resource group in which the resource exists in the Azure subscription.
  • The type of the Azure resource; we can choose between Application Insights, App Service, Storage Account, and Virtual Machines. In this demo we go with an Application Insights resource.
  • The name of the resource.
  • The alert or alerts we want to monitor. The alerts must already exist in the chosen Azure resource, but we will see this later in this post.

image

When we have this information filled in, if we continue down on that same screen, we can fill in several options which apply to all the gates.

image

First we have the timeout, which is the time after which we mark the deployment for this environment as failed if any of the gates has not passed, so we can’t go on with the environment. The sampling interval is the period of time between each check of the gates. These times are, by default, 15 and 5 minutes respectively, but they can be longer, even days, so you have enough time to go on with the early adopters (for example) environment before going ahead.

Also, in case there are manual approvals before the deployment, you can select between three different options (as seen in the image): manual approvals must be completed before the gates start being checked, manual approval is requested only after all gates have passed, or manual approval after each gate.

With this we would have the gate, but in case you are not familiar with Azure alerts, just one thing more. For this example we chose Azure Monitor against an Application Insights alert, so before going on with all of this I had my Application Insights resource created and configured for my selected app or Azure environment, in this case the EarlyAdopters environment (in case you are not familiar with Application Insights, check it here).

image

And if we click on Alerts, we can see the configured alerts and go to the specific alert configuration, the one we selected in the Azure Monitor gate.

image

In this case the alert will be raised if more than 2 errors have occurred in the application in the last 5 minutes.

When we finish this configuration, we can start deploying the application to the early adopters environment; hopefully users will start using the new features or version of the application. After the configured time for the gate, VSTS will check the alert: if the alert has not been raised, it will continue with the next environment deployment; if it has been raised, it will keep waiting for the next check of the gate, until the configured timeout occurs or the gate passes.

I hope you like gates as much as I do. As a conclusion: when working with phased continuous deployment it is important to establish the gates which define how to move from one phase to another, and afterwards configure them as you need and set up your phased deployment with VSTS Release Management.

Work Items bulk edit with templates on VSTS

There are some occasions in which you need to apply the same changes to multiple Work Items, not only once, but several times during a project. For sure most of you already know the edit selected items feature: with several Work Items selected, just right click and select edit (sorry, I had to protect the innocent in the captures):

image  image

This allows you to edit all the selected Work Items and make changes to the fields you selected. The only issue with this is when you need to do it several times and always apply the same values to the same fields, as it gets a little tedious.

  1. So we have Work Item templates. We start from the same point: select several Work Items, in this case all of them must be of the same type (I will explain why later), right click, and select Templates/Manage:
    image
  2. This brings us to the template editor screen. On this screen you will see we can define as many templates as we want for each type of Work Item; that’s the reason I said in the previous point that all selected Work Items must be of the same type, as you will apply a template for a particular type:
    image
  3. When clicking on the New template button, it will open a screen for stating the values of the different fields for this template:
    image
  4. When you save it and go back to the list of Work Items (you will need to refresh the browser window), select the Work Items you want, right click, and now, under the Templates option, you will have the newly created template; once applied to the selected Work Items, it will set the values of the fields defined in the template:
    image

As you can see, templates can simplify our editing a lot when moving work across teams, organizing backlogs, bugs, etc., so go and check which templates you need and create them. Just remember, they must be defined per Work Item type; maybe that is a small “con”, as with plain bulk editing we can select different Work Item types, but for repetitive editing, templates are far more powerful.