Product Management

Continuously right-sizing your Jira Epics using Power Automate

A guide on how you can automate the continuous right-sizing of your Jira Epics using its REST API and Power Automate…

Context

Right-sizing is a flow management practice that ensures work items remain within a manageable size. Most teams apply this at the User Story level, using historical Cycle Time data to determine the 85th percentile as their right-size, meaning 85% of items are completed within a set number of days. This is often referred to as a Service Level Expectation (SLE).

“85% of the time we complete items in 7 days or less”

In combination with this, teams use Work Item Age, the amount of elapsed time in calendar days since a work item started, to proactively manage the flow of work that is “in-progress”. Previously, I have shared how you can automate adding Work Item Age to your Jira issues.

Right-sizing isn’t limited to Stories — it also applies to Epics, which group multiple Stories. For Epics, right-sizing means keeping the child item count below a manageable limit.

To understand what this right-size is, we choose a selected date range, plotting our completed Epics against the date they were completed and the number of child items they had. We can then use percentiles to derive what our ‘right-size’ is (again typically taking the 85th percentile):

Good teams will then use this data to proactively check their current ‘open’ Epics (those in progress/yet to start) and see if those Epics are right-sized:

Right-sizing brings many benefits for teams as it means faster feedback, reduced risk and improved predictability. The challenge is that this data/information will almost always live in a different location to the teams’ work. In order for practices such as right-sizing to be adopted by more teams, the information needs to be simple and visible every day so that teams are informed about their slicing efforts and the growth of Epic size.

Thankfully, we can leverage tools like Power Automate, combined with Jira’s REST API to make this information readily accessible to all teams…

Prerequisites

This guide assumes the following prerequisites:

With all those in place — let’s get started!

Adding a custom field for whether an Epic is right-sized or not

We first need to add a new field to our Epics called Rightsized. As we are focusing on the right-sizing of Epics, for the purpose of simplicity in this blog we will stick to Epic as the issue type we set this up for.

Please note, if you are wanting to do this for multiple issue types you will have to repeat the process of adding this field for each work item type.

  • Click on ‘Project settings’ then choose Epic

  • Choose ‘Text’ and give the field the name of Rightsized

  • Add any more information if you want to do so (optional)

  • Once done, click ‘Save Changes’

We also then need to find out what this new custom field is called, as we will be querying this in the API. To do so, follow this guide that Caidyrn Roder pointed me to previously.

Understanding our Epic right-size

As mentioned earlier in the blog, we plot our completed Epics over a given time period (in this case 12 weeks) against the date they were completed on and the number of child items those Epics had. We can then draw percentiles against our data to understand our ‘right-size’:

If you’re wondering where the tools are to do this, I have a free template for Power BI you can download and connect to/visualise your Jira data.

For the purpose of simplicity, in this blog we’re going to choose our 85th percentile as our right-size value so, for this team, they have a right-size of 14 child items or less.

Automating our right-size check

Start by going to Power Automate and creating a Scheduled cloud flow. Call this whatever you want, but we want it to run every day at a time that makes sense (probably before people start work). Once you’re happy with the time, click create:

Next we need to click ‘+ new step’ and add an Initialize variable action — this is essentially where we will ‘store’ what our Rightsize is. To start with, this will be an Integer with a value of 0:

We’re going to repeat this step a few more times. Next, we Initialize variable again for ranking Epics by their child item count (as a ‘float’ type):

Then we will Initialize variable again to flatten our array value, which we’ll need towards the end of the flow to get our data into the format required for the necessary calculations:

Our final Initialize Variable will be for our Interpolated Value, which is a ‘float’ value we’re going to need when it comes to calculating the percentile for our right-sizing:

Then we’re going to choose a HTTP step to get back all our Epics completed in the last 12 weeks. You’ll need to set the method as ‘GET’ and add in the URL. The URL (replace JIRAINSTANCE and PROJECT with your details) should be:

https://JIRAINSTANCE.atlassian.net/rest/api/3/search?jql=project%20%3D%20PROJECT%20AND%20statuscategory%20%3D%20Complete%20AND%20statuscategorychangeddate%20%3E%3D%20-12w%20AND%20hierarchyLevel%20%3D%201&fields=id&maxResults=100
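The encoded JQL above is hard to read, so here is a minimal Python sketch that builds the same request from readable JQL and can be used to sanity-check the query outside Power Automate (the email and token are placeholders, and it assumes the requests library):

from urllib.parse import quote
import requests

# The JQL, decoded: completed Epics (hierarchyLevel = 1) whose status
# category changed within the last 12 weeks
jql = ("project = PROJECT AND statuscategory = Complete "
       "AND statuscategorychangeddate >= -12w AND hierarchyLevel = 1")

url = ("https://JIRAINSTANCE.atlassian.net/rest/api/3/search"
       f"?jql={quote(jql)}&fields=id&maxResults=100")

# Jira Cloud accepts basic auth with your account email and an API token
response = requests.get(url, auth=("you@example.com", "YOUR_API_TOKEN"))
epic_ids = [issue["id"] for issue in response.json()["issues"]]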

Click ‘Show advanced options’ to add in your access token details:

Then we need to add in a Parse JSON step. This is where we are essentially going to extract the Issue Key from our completed Epics. Choose ‘body’ as the content and add a schema like so:

{
    "type": "object",
    "properties": {
        "expand": {
            "type": "string"
        },
        "startAt": {
            "type": "integer"
        },
        "maxResults": {
            "type": "integer"
        },
        "total": {
            "type": "integer"
        },
        "issues": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "expand": {
                        "type": "string"
                    },
                    "id": {
                        "type": "string"
                    },
                    "self": {
                        "type": "string"
                    },
                    "key": {
                        "type": "string"
                    }
                },
                "required": [
                    "expand",
                    "id",
                    "self",
                    "key"
                ]
            }
        }
    }
}

Then we’re going to add an Apply to each step, using the ‘issues’ value from our previous step. Inside it, add a HTTP action where we’re going to get the child count for each Epic. The first part of the URL (replace JIRAINSTANCE with your details) should be:

https://JIRAINSTANCE.atlassian.net/rest/api/3/search?jql=%27Parent%27=

Then the id, and then:

%20AND%20hierarchyLevel=0&maxResults=0

Which should then look like so:
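The maxResults=0 on the end is the useful trick here: Jira still returns a total field in the response even though no issue bodies come back, which gives us the child item count in a single cheap call. A minimal Python sketch of the same request (the Epic id and credentials are placeholders):

from urllib.parse import quote
import requests

epic_id = "10042"  # placeholder; in the flow this comes from the previous step
jql = f"'Parent'={epic_id} AND hierarchyLevel=0"
url = ("https://JIRAINSTANCE.atlassian.net/rest/api/3/search"
       f"?jql={quote(jql)}&maxResults=0")

# With maxResults=0 no issues are returned, but 'total' still holds the count
response = requests.get(url, auth=("you@example.com", "YOUR_API_TOKEN"))
child_count = response.json()["total"]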

Don’t forget to click ‘Show advanced options’ and add your access token details again. Then we’re going to add a Parse JSON action using Body as the content and the following schema:

{
    "type": "object",
    "properties": {
        "startAt": {
            "type": "integer"
        },
        "maxResults": {
            "type": "integer"
        },
        "total": {
            "type": "integer"
        },
        "issues": {
            "type": "array"
        }
    }
}

Which should look like so:

Next add a Compose action with the total from the previous step:

Next we’re going to Append to array variable the output of this to our ‘FlattenedArray’ variable:

Then we’re going to go outside our Apply to each loop and add a Compose step to sort our child item counts:

sort(variables('FlattenedArray'))

Then we’re going to add a Set Variable step where we’re going to set our Rank variable using the following expression:

float(add(mul(0.85, sub(length(outputs('SortedCounts')), 1)), 1))

Next we’re going to do the part where we work out our 85th percentile. To start with, we first need to figure out the integer part. Add a compose action with the following expression:

int(substring(string(variables('rank')), 0, indexOf(string(variables('rank')), '.')))

Then add another compose part for the fractional part, using the expression of:

sub(float(variables('rank')), int(substring(string(variables('rank')), 0, indexOf(string(variables('rank')), '.'))))

Then we’re going to add a Compose step to format this to one decimal place, which we do using:

formatNumber(outputs('Compose_FractionalPart'), 'N1')

Then we’re going to initialize another variable, which we do simply to “re-sort” our array (I found in testing this was needed). This will have a value of:

sort(variables('FlattenedArray'))

Then we’re going to set our FlattenedArray variable to be the output of this step:

Then we need to calculate the value at our Integer position:

variables('FlattenedArray')[sub(int(outputs('Compose_IntegerPart')), 1)]

Then do the same again for the value at the next integer position:

variables('FlattenedArray')[outputs('Compose_IntegerPart')]

Then add a compose for our interpolated value:

add(
    outputs('Compose_ValueAtIntegerPosition'),
    mul(
        outputs('Compose_FractionalPart'),
        sub(
            outputs('Compose_ValueAtNextIntegerPosition'),
            outputs('Compose_ValueAtIntegerPosition')
        )
    )
)

Remember the variable we created at the beginning for this? This is where we need it again, using the outputs of the previous step to set this as our InterpolatedValue variable:

Then we need to add a Compose step:

if(
    greaterOrEquals(mod(variables('InterpolatedValue'), 1), 0.5),
    formatNumber(variables('InterpolatedValue'), '0'),
    if(
        less(mod(variables('InterpolatedValue'), 1), 0.5),
        if(
            equals(mod(variables('InterpolatedValue'), 1), 0),
            formatNumber(variables('InterpolatedValue'), '0'),
            add(int(first(split(string(variables('InterpolatedValue')), '.'))), 1)
        ),
        first(split(string(variables('InterpolatedValue')), '.'))
    )
)

Then we just need to reformat this to be an integer (in effect, the expression above rounds the interpolated value up to the next whole number unless it is already whole):

int(outputs('Compose'))

Then we use the output of this to set our rightsize variable:
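If that chain of Compose steps is hard to follow, here is the whole calculation expressed as a short Python sketch (an illustrative equivalent, not part of the flow): sort the counts, find the fractional rank, interpolate between the two neighbouring values, then round up to a whole number of child items:

import math

def right_size(child_counts, percentile=0.85):
    counts = sorted(child_counts)                 # the SortedCounts step
    rank = percentile * (len(counts) - 1) + 1     # the Rank expression (1-based)
    integer_part = int(rank)
    fractional_part = rank - integer_part
    lower = counts[integer_part - 1]              # value at the integer position
    upper = counts[min(integer_part, len(counts) - 1)]  # value at the next position
    interpolated = lower + fractional_part * (upper - lower)
    return math.ceil(interpolated)                # the final rounding steps, as I read them

print(right_size([3, 5, 8, 9, 12, 14, 20]))  # -> 15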

The next step is to use HTTP again, this time getting all our open Epics in Jira. It should be a GET with the URL (replace JIRAINSTANCE and PROJECT with your details) of:

https://JIRAINSTANCE.atlassian.net/rest/api/3/search?jql=project%20%3D%20PROJECT%20AND%20statuscategory%20%21%3D%20Done%20AND%20hierarchyLevel%20%3D%201%0AORDER%20BY%20created%20DESC&fields=id&maxResults=100

Again, don’t forget to click ‘Show advanced options’ and add in your access token details.

Next we’re going to add a Parse JSON step with the ‘body’ of the previous step and the following schema:

{
    "type": "object",
    "properties": {
        "expand": {
            "type": "string"
        },
        "startAt": {
            "type": "integer"
        },
        "maxResults": {
            "type": "integer"
        },
        "total": {
            "type": "integer"
        },
        "issues": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "expand": {
                        "type": "string"
                    },
                    "id": {
                        "type": "string"
                    },
                    "self": {
                        "type": "string"
                    },
                    "key": {
                        "type": "string"
                    }
                },
                "required": [
                    "expand",
                    "id",
                    "self",
                    "key"
                ]
            }
        }
    }
}

Then you’re going to add in an Apply to each step, using issues from the previous step. Add in a HTTP step; the first part of the URL (replace JIRAINSTANCE with your details) should be:

https://JIRAINSTANCE.atlassian.net/rest/api/3/search?jql=%27Parent%27=

Add in the id field from our Parse JSON step then follow it with:

%20AND%20hierarchyLevel=0&maxResults=0

Which looks like so:

You should know by now what to do with your access token details ;)

Add a Parse JSON with body of the previous step and the following schema:

{
    "type": "object",
    "properties": {
        "startAt": {
            "type": "integer"
        },
        "maxResults": {
            "type": "integer"
        },
        "total": {
            "type": "integer"
        },
        "issues": {
            "type": "array"
        }
    }
}

Then add a Compose step where we’re just going to take the total of the previous step:

Finally, we’re going to add a condition. Here we’ll look at each open Epic and see if the child count is less than or equal to our Rightsize variable:

If yes, then we add an Edit Issue (V2) step where we add in our Jira instance, the Issue ID (which we get from a previous step) and, crucially, the custom field for our ‘right-sized’ value (remember at the beginning when we worked out what this was? If not, go back and re-read!). We update this field with “Yes” if the child count is within the right-size, or “No” if it is greater:
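For the curious, the Edit Issue (V2) step boils down to a PUT against the issue’s REST endpoint. A hedged Python sketch of the equivalent call (customfield_10123 is a placeholder; use the custom field id you found at the start, plus your own credentials):

import requests

def update_rightsized(issue_key, child_count, right_size):
    # "Yes" when the Epic's child count is within the right-size, else "No"
    value = "Yes" if child_count <= right_size else "No"
    requests.put(
        f"https://JIRAINSTANCE.atlassian.net/rest/api/3/issue/{issue_key}",
        json={"fields": {"customfield_10123": value}},  # placeholder field id
        auth=("you@example.com", "YOUR_API_TOKEN"),
    )

update_rightsized("PROJ-42", child_count=9, right_size=14)  # sets "Yes"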

And that’s it — you’re (finally) done!

If you run the automation, then it should successfully update your Epics if they are/are not right-sized:

It’s worth noting that any Epics with 0 child items aren’t updated with yes/no, purely due to this likely being too early on in the process. Saying an Epic with 0 child items is ‘right-sized’ feels wrong to me but you are welcome to amend the flow if you disagree!

By implementing continuous right-sizing in Jira using Power Automate, teams can drive faster feedback loops, reduce delivery risks, and improve predictability. Automating the right-sizing check ensures the data remains visible and actionable, empowering teams to stay focused on maintaining manageable work sizes. With this flow in place, you’re not just optimising Epics — you’re fostering a culture of continuous improvement and efficiency.

Continuously right-sizing your Azure DevOps Features using Power Automate

A guide to continuously right-sizing Azure DevOps (ADO) Features using OData queries and Power Automate…

Context

Right-sizing is a flow management practice that ensures work items remain within a manageable size. Most teams apply this at the User Story or Product Backlog Item (PBI) level, using historical Cycle Time data to determine the 85th percentile as their right-size, meaning 85% of items are completed within a set number of days. This is often referred to as a Service Level Expectation (SLE).

“85% of the time we complete items in 7 days or less”

In combination with this, teams use Work Item Age, the amount of elapsed time in calendar days since a work item started, to proactively manage the flow of work that is “in-progress”. Previously I have shared how you can automate the Cycle Time, SLE and Work Item Age to your ADO boards.

Right-sizing isn’t limited to Stories or PBIs — it also applies to Features (and Epics!), which group multiple Stories or PBIs. For Features, right-sizing means keeping the child item count below a manageable limit.

To understand what this right-size is, we choose a selected date range, plotting our completed Features against the date they were completed and the number of child items they had. We can then use percentiles to derive what our ‘right-size’ is (again typically taking the 85th percentile):

Good teams will then use this data to proactively check their current ‘open’ Features (those in progress/yet to start) and see if those Features are right-sized:

Right-sizing brings many benefits for teams as it means faster feedback, reduced risk and improved predictability. The challenge is that this data/information will almost always live in a different location to the teams’ work. In order for practices such as right-sizing to be adopted by more teams, the information needs to be simple and visible every day so that teams are informed about their slicing efforts and the growth of feature size.

Thankfully, we can leverage tools like Power Automate, combined with ADO’s OData queries to make this information readily accessible to all teams…

Prerequisites

This guide assumes the following prerequisites:

With all those in place — let’s get started!

Adding a custom field for whether a Feature is right-sized or not

We first need to add a new field to our process template in ADO called Rightsized. As we are focusing on the right-sizing of Features, for the purpose of simplicity in this blog we will stick to Feature as the work item type we set this up for, using an inherited version of the Scrum process template.

Please note, if you are wanting to do this for multiple work item types you will have to repeat the process of adding this field for each work item type.

  • Find the Feature type in your inherited process template work items list

  • Click into it and click ‘new field’

  • Add the Rightsized field — ensuring you specify it as a picklist-type field with two options, “Yes” or “No”

Understanding our Feature right-size

As mentioned earlier in the blog, we plot our completed Features over a given time period (in this case 12 weeks) against the date they were completed on and the number of child items those Features had. We can then draw percentiles against our data to understand our ‘right-size’:

If you’re wondering where the tools are to do this, I have a free template for Power BI you can download and connect to/visualise your ADO data.

For the purpose of simplicity, in this blog we’re going to choose our 85th percentile as our right-size value so, for this team, they have a right-size of 14 child items or less.

Automating our right-size check

Start by creating two ADO queries for automation. The first will retrieve completed Features within a specified time range. A 12-week period (roughly a quarter) works well as a baseline but can be adjusted based on your planning cadence. In this example, we’re querying any Features that were ‘Completed’ in the last 12 weeks, that are owned by a particular team (in this example they are under a given Area Path):

Save that query with a memorable name (I went with ‘Rightsizing Part 1’) as we’ll need it later.

Then we’re going to create a second query for all our ‘open’ Features. Here you’re going to do a ‘Not In’ for your completed/removed states and those that are owned by this same team (again here I’ll be using Area Path):

Make sure ‘rightsized’ is added as a column option and save that query with a memorable name (I went with ‘Rightsizing Part 2’) as we’re going to need it later.

Next we go to Power Automate and create a Scheduled cloud flow. Call this whatever you want, but we want it to run every day at a time that makes sense (probably before people start work). Once you’re happy with the time, click create:

Next we need to click ‘+ new step’ and add an action to Get query results from the first query we just set up, ensuring that we input the relevant ‘Organization Name’ and ‘Project Name’ where we created the query:

Following this we are going to add a step to Initialize variable — this is essentially where we will ‘store’ what our Rightsize is. To start with, this will be an Integer with a value of 0:

We’re going to repeat this step a few more times. Next, we Initialize variable again for ranking Features by their child item count (as a ‘float’ type):

Then we will Initialize variable again to flatten our array value, which we’ll need towards the end of the flow to get our data into the format required for the necessary calculations:

Our final Initialize Variable will be for our Interpolated Value, which is a ‘float’ value we’re going to need when it comes to calculating the percentile for our right-sizing:

Then we are going to add an Apply to each step where we’ll select the ‘value’ from our ‘Get query results’ step as the starting point:

Then we’re going to choose a HTTP step. You’ll need to set the method as ‘GET’ and add in the URL. The first part of the URL (replace ORG and PROJECT with your details) should be:

https://analytics.dev.azure.com/ORG/PROJECT/_odata/v3.0-preview/WorkItems?%20$filter=WorkItemId%20eq%20

Add in the dynamic content of ‘Id’ from our Get work item details step, then add in:

%20&$select=WorkItemID&$expand=Descendants(%20$apply=filter(WorkItemType%20ne%20%27Test%20Case%27%20and%20StateCategory%20eq%20%27Completed%27%20and%20WorkItemType%20ne%20%27Task%27%20and%20WorkItemType%20ne%20%27Test%20Plan%27%20and%20WorkItemType%20ne%20%27Shared%20Parameter%27%20and%20WorkItemType%20ne%20%27Shared%20Steps%27%20and%20WorkItemType%20ne%20%27Test%20Suite%27%20and%20WorkItemType%20ne%20%27Impediment%27%20)%20/groupby((Count),%20aggregate($count%20as%20DescendantCount))%20)

Which should look like:
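Decoded, that query reads roughly as follows (the work item id 123 is a placeholder): it counts the completed descendants of the Feature, excluding test, task and impediment work item types:

https://analytics.dev.azure.com/ORG/PROJECT/_odata/v3.0-preview/WorkItems?
  $filter=WorkItemId eq 123
  &$select=WorkItemID
  &$expand=Descendants(
      $apply=filter(
          WorkItemType ne 'Test Case' and StateCategory eq 'Completed'
          and WorkItemType ne 'Task' and WorkItemType ne 'Test Plan'
          and WorkItemType ne 'Shared Parameter' and WorkItemType ne 'Shared Steps'
          and WorkItemType ne 'Test Suite' and WorkItemType ne 'Impediment'
      ) / groupby((Count), aggregate($count as DescendantCount))
  )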

Click ‘Show advanced options’ and add your PAT details. Set authentication to ‘Basic’, enter ‘dummy’ as the username, and paste your PAT as the password:

PAT blurred for obvious reasons!

Then we need to add in a Parse JSON step. This is where we are essentially going to extract our count of child items from our completed Features. Choose ‘body’ as the content and add a schema like so:

{
    "type": "object",
    "properties": {
        "@@odata.context": {
            "type": "string"
        },
        "vsts.warnings@odata.type": {
            "type": "string"
        },
        "@@vsts.warnings": {
            "type": "array",
            "items": {
                "type": "string"
            }
        },
        "value": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "WorkItemId": {
                        "type": "integer"
                    },
                    "Descendants": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "@@odata.id": {},
                                "Count": {
                                    "type": "integer"
                                },
                                "DescendantCount": {
                                    "type": "integer"
                                }
                            },
                            "required": [
                                "@@odata.id",
                                "Count",
                                "DescendantCount"
                            ]
                        }
                    }
                },
                "required": [
                    "WorkItemId",
                    "Descendants"
                ]
            }
        }
    }
}

This is where it gets a bit more complicated: we’re going to add an Apply to each step, using the value from our previous step, then ANOTHER Apply to each inside it, where we take the descendant count for each Feature and append it to our FlattenedArray variable:

Then we’re going to go outside our Apply to each loop and add a Compose step to sort our child item counts:

Then we’re going to add a Set Variable step where we’re going to set our Rank variable using the following expression:

float(add(mul(0.85, sub(length(outputs('SortedCounts')), 1)), 1))

Next we’re going to do the part where we work out our 85th percentile. To start with, we first need to figure out the integer part. Add a compose action with the following expression:

int(substring(string(variables('rank')), 0, indexOf(string(variables('rank')), '.')))

Then add another compose part for the fractional part, using the expression of:

sub(float(variables('rank')), int(substring(string(variables('rank')), 0, indexOf(string(variables('rank')), '.'))))

Then we’re going to add a Compose step to format this to one decimal place, which we do using:

formatNumber(outputs('Compose_FractionalPart'), 'N1')

Then we’re going to initialize another variable, which we do simply to “re-sort” our array (I found in testing this was needed). This will have a value of:

sort(variables('FlattenedArray'))

Then we’re going to set our FlattenedArray variable to be the output of this step:

Then we need to calculate the value at our Integer position:

variables('FlattenedArray')[sub(int(outputs('Compose_IntegerPart')), 1)]

Then do the same again for the value at the next integer position:

variables('FlattenedArray')[outputs('Compose_IntegerPart')]

Then add a compose for our interpolated value:

add(
    outputs('Compose_ValueAtIntegerPosition'),
    mul(
        outputs('Compose_FractionalPart'),
        sub(
            outputs('Compose_ValueAtNextIntegerPosition'),
            outputs('Compose_ValueAtIntegerPosition')
        )
    )
)

Remember the variable we created at the beginning for this? This is where we need it again, using the outputs of the previous step to set this as our InterpolatedValue variable:

Then we need to add a Compose step:

if(
    greaterOrEquals(mod(variables('InterpolatedValue'), 1), 0.5),
    formatNumber(variables('InterpolatedValue'), '0'),
    if(
        less(mod(variables('InterpolatedValue'), 1), 0.5),
        if(
            equals(mod(variables('InterpolatedValue'), 1), 0),
            formatNumber(variables('InterpolatedValue'), '0'),
            add(int(first(split(string(variables('InterpolatedValue')), '.'))), 1)
        ),
        first(split(string(variables('InterpolatedValue')), '.'))
    )
)

Then we just need to reformat this to be an integer:

int(outputs('Compose'))

Then we use the output of this to set our rightsize variable:

The next step is to use ADO again, getting the query results of the second query we created for all our ‘open’ Features:

Then we’re going to add an Apply to each step using the value from the previous step, followed by a HTTP step with the method set to ‘GET’ and the URL added in. This URL is different from the one above: the filter now counts all descendants that are not in a ‘Removed’ state, rather than only completed items. The first part of the URL (replace ORG and PROJECT with your details) should be:

https://analytics.dev.azure.com/ORG/PROJECT/_odata/v3.0-preview/WorkItems?%20$filter=WorkItemId%20eq%20

Add in the dynamic content of ‘Id’ from our Get work item details step, then after the Id, add in:

%20&$select=WorkItemID&$expand=Descendants(%20$apply=filter(WorkItemType%20ne%20%27Test%20Case%27%20and%20State%20ne%20%27Removed%27%20and%20WorkItemType%20ne%20%27Task%27%20and%20WorkItemType%20ne%20%27Test%20Plan%27%20and%20WorkItemType%20ne%20%27Shared%20Parameter%27%20and%20WorkItemType%20ne%20%27Shared%20Steps%27%20and%20WorkItemType%20ne%20%27Test%20Suite%27%20and%20WorkItemType%20ne%20%27Impediment%27%20)%20/groupby((Count),%20aggregate($count%20as%20DescendantCount))%20)

Which should look like:

Make sure again to click ‘Show advanced options’ to add in your PAT details. Then add a Parse JSON step; this is where we are essentially going to extract our count of child items from our open Features. Choose ‘body’ as the content and add a schema like so:

{
    "type": "object",
    "properties": {
        "@@odata.context": {
            "type": "string"
        },
        "vsts.warnings@odata.type": {
            "type": "string"
        },
        "@@vsts.warnings": {
            "type": "array",
            "items": {
                "type": "string"
            }
        },
        "value": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "WorkItemId": {
                        "type": "integer"
                    },
                    "Descendants": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "@@odata.id": {},
                                "Count": {
                                    "type": "integer"
                                },
                                "DescendantCount": {
                                    "type": "integer"
                                }
                            },
                            "required": [
                                "@@odata.id",
                                "Count",
                                "DescendantCount"
                            ]
                        }
                    }
                },
                "required": [
                    "WorkItemId",
                    "Descendants"
                ]
            }
        }
    }
}

This is where it gets a bit more complicated: we’re going to add an Apply to each step, using the value from our previous step, then ANOTHER Apply to each where we take the descendant count for each ‘open’ Feature. Then we’re going to add a condition step, where we’ll look at each open Feature and see if the descendant count is less than or equal to our Rightsize variable:

If yes, then we add an update work item step, ensuring that we are choosing the ID and Work Item Type of the item from our earlier query, and setting the Rightsized value to “Yes”. If no, then do the same as above but ensure you’re setting the Rightsized value to “No”:

And that’s it — you’re (finally) done!

If you run the automation, then it should successfully update your Features if they are/are not right-sized:

It’s worth noting that any Features with 0 child items aren’t updated with yes/no, purely due to this likely being too early on in the process. Saying a Feature with 0 child items is ‘right-sized’ feels wrong to me but you are welcome to amend the flow if you disagree!

There may be tweaks to the above you could make to improve flow performance, currently it runs at 1–2 minutes in duration:

By implementing continuous right-sizing in Azure DevOps using Power Automate, teams can drive faster feedback loops, reduce delivery risks, and improve predictability. Automating the right-sizing check ensures the data remains visible and actionable, empowering teams to stay focused on maintaining manageable work sizes. With this flow in place, you’re not just optimising features — you’re fostering a culture of continuous improvement and efficiency.

Stay tuned for my next post, where I’ll explore how to apply the same principles to Epics in Jira!

Signal vs Noise: How Process Behaviour Charts can enable more effective Product Operations

In today’s product world, being data-driven (or data-led) is a common goal, but misinterpreting data can lead to wasted resources and missed opportunities. For Product Operations teams, distinguishing meaningful trends (signal) from random fluctuations (noise) is critical. Process Behaviour Charts (PBCs) provide a powerful tool to focus on what truly matters, enabling smarter decisions and avoiding costly mistakes…

What Product Operations is (and is not)

Effective enablement is the cornerstone of Product Operations. Unfortunately, many teams risk becoming what Marty Cagan calls “process people” or even the reincarnated PMO. Thankfully, Melissa Perri and Denise Tilles provide clear guidance in their book, Product Operations: How successful companies build better products at scale, which outlines how to establish a value-adding Product Operations function.

In the book there are three core pillars to focus on to make Product Operations successful. I won’t spoil the other two, but the one to focus on for this post is the data and insights pillar. This is all about enabling product teams to make informed, evidence-based decisions by ensuring they have access to reliable, actionable data. Typically this means centralising and democratising product metrics and trying to foster a culture of continuous learning through insights. In order to do this we need to visualise data, but how can we make sure we’re doing this in the most effective way?

Visualising data and separating signal from noise

When it comes to visualising data, another must-read book is Understanding Variation: The Key To Managing Chaos by Donald Wheeler. The book highlights the fallacies of how organisations use data to monitor performance improvements. It explains how to effectively interpret data in the context of improvement and decision-making, whilst emphasising the need to understand variation as a critical factor in managing and improving performance. The book does this through the introduction of a Process Behaviour Chart (PBC). A PBC is a type of graph that visualises the variation in a process over time. It consists of a running record of data points, a central line that represents the average value, and upper and lower limits (referred to as Upper Natural Process Limit — UNPL and Lower Natural Process Limit — LNPL) that define the boundaries of routine variation. A PBC can help to distinguish between common causes and exceptional causes of variation, and to assess the predictability and stability of a process.
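To make the mechanics concrete, here is a minimal Python sketch of how the limits on an XmR-style PBC are typically calculated: the centre line is the plain average, and the limits sit 2.66 average moving ranges either side of it (the data is made up for illustration):

def pbc_limits(values):
    # Centre line: the average of the data points
    average = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    average_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR scaling constant for individual values
    unpl = average + 2.66 * average_mr  # Upper Natural Process Limit
    lnpl = average - 2.66 * average_mr  # Lower Natural Process Limit
    return average, unpl, lnpl

baseline = [102, 98, 103, 97, 104, 99, 101, 100]  # e.g. daily orders
average, unpl, lnpl = pbc_limits(baseline)

# A new point outside the limits is exceptional variation, i.e. a signal
print(160 > unpl)  # True: worth investigating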

An example of a PBC is the chart below, where the daily takings on the fourth Saturday of the month could be ‘exceptional variation’ compared to normal days:

Deming Alliance — Process Behaviour Charts — An Introduction

If we bring these ideas together, an effective Product Operations department that is focusing on insights and data should be all about distinguishing signal from noise. If you aren’t familiar with the terms, signal is the meaningful information you want to focus on (the clue is in the name!), while noise is all the random variation that interferes with it. If you want to learn more, the book The Signal and The Noise is another great resource to aid your learning around this topic. Unfortunately, too often in organisations people wrongly interpret what is in fact noise as signal. For Product Operations to be adding value, we need to be pointing our Product teams to signals and cutting out the noise in the typical metrics we track.

But what good is theory without practical application?

An example

Let’s take a look at four user/customer related metrics for an eCommerce site from the beginning of December up until Christmas last year:

The use of colour in the table draws the viewer to this information as it is highlighted. What then ends up happening is a supporting narrative, typically from those monitoring the numbers, that goes something like this:

The problem here is that noise (expected variation in data) is being mistaken for signal (exceptional variation we need to investigate), particularly as it is influenced through the use of colour (specifically the RAG scale). The metrics of Unique Visitors and Orders contain no use of colour so there’s no way to determine what, if anything, we should be looking at. Finally, our line charts don’t really tell us anything other than if values are above/below average and potentially trending.

A Product Operations team shows value-add in enabling the organisation to be more effective in spotting opportunities and/or significant events that others may not see. If you’re a PM working on a new initiative/feature/experiment, you want to know if there are any shifts in the key metrics you’re looking at. Visualising it in this ‘generic’ way doesn’t allow us to see that or could, in fact, be creating a narrative that isn’t true. This is where PBCs can help us. They can highlight where we’re seeing exceptional variation in our data.

Coming back to our original example, let’s redesign our line chart to be a PBC and make better usage of colour to highlight large changes in our metrics:

We can see that we weren’t ‘completely’ wrong, although we have missed out a lot more useful information. We can see that Conversion Rate for the 13th and 20th December was in fact exceptional variation from the norm, so the colour highlighting of this did make sense. However, the narrative around Conversion Rate performing badly at the start of the month (with the red background in the cells in our original table) as well as up to and including Christmas is not true, as this was just routine variation that was within values we expected.

For ABV we can also see that there was no significant event up to and including Christmas, so it neither performed ‘well’ nor ‘not so well’, as the values every day were within our expected variation. What is interesting is that we can see dates with exceptional variation in both our Orders and Unique Visitors, which should prompt further investigation. I say further investigation as these charts, like nearly all data visualisations, don’t give you answers; they just get you asking better questions. It’s worth noting that certain events (e.g. Black Friday) may appear as ‘signal’ in your data when in fact the cause is pretty obvious.

Identifying those significant events via exceptional variation isn’t the only use for PBCs. We can use them to spot other, more subtle changes in data. Instead of large changes, we can also look at moderate changes. These draw attention to patterns inside ‘noisy’ data that you might want to investigate (of course, after you’ve checked out those exceptional variation values). Put simply, this happens when two out of three points in a row are noticeably higher than usual (above a certain threshold not shown in these charts). This can provide new insight that wasn’t seen previously, such as for our metrics of Unique Visitors and Orders, which previously had no ‘signal’ to consider:

Now we can see points where there has been a moderate change. We can then start to ask questions such as could this be down to a new feature, a marketing campaign or promotional event? Have we improved our SEO? Were we running an A/B test? Or is it simply just random fluctuation?

Another use of PBCs centres on sustained shifts, which, when you’re working in the world of product management, is a valuable data point to have at your disposal. To be effective at building products, we have to focus on outcomes. Outcomes are a measurable change in behaviour. A measurable change in behaviour usually means a sustained (rather than one-off) shift. In PBCs, moderate, sustained shifts indicate a consistent change which, when analysing user behaviour data, means a sustained change in the behaviour of people using/buying our product. This happens when four out of five points in a row are consistently higher than usual, based on a specific threshold (not shown in these charts). We can now see where we’ve had moderate, sustained shifts in our metrics:

Again we don’t know what the cause of this is, but it focuses our attention on what we have been doing around those dates. Particularly for our ABV metric, we might want to reconsider our approach given the sustained change that appears to be on the wrong side of the average.

The final signal type focuses on smaller, sustained changes. This is a run of at least 8 successive data points within the process limits on the same side of the average line (which could be above or below):

For our example here, we’re seeing this for Unique Visitors, which is good as we’re seeing a small, sustained change in the website’s traffic above the average. Even clearer is for ABV, with multiple points above the average indicating a positive (but small) shift in customer purchasing behaviour.
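For anyone wanting to automate spotting these patterns, the three run-based signals described above can be sketched in a few lines of Python. The thresholds used here (one-third and two-thirds of the way from the average to the limit) are a common convention rather than something prescribed by the charts in this post, and only the upper side is checked for brevity:

def run_signals(values, average, unpl):
    one_third = average + (unpl - average) / 3
    two_thirds = average + 2 * (unpl - average) / 3
    signals = []
    for i in range(len(values)):
        last3 = values[max(0, i - 2): i + 1]
        last5 = values[max(0, i - 4): i + 1]
        last8 = values[max(0, i - 7): i + 1]
        # Moderate change: two out of three successive points beyond two-thirds
        if len(last3) == 3 and sum(v > two_thirds for v in last3) >= 2:
            signals.append((i, "moderate change"))
        # Moderate sustained shift: four out of five points beyond one-third
        elif len(last5) == 5 and sum(v > one_third for v in last5) >= 4:
            signals.append((i, "moderate sustained shift"))
        # Smaller sustained change: eight successive points on one side of the average
        elif len(last8) == 8 and all(v > average for v in last8):
            signals.append((i, "smaller sustained change"))
    return signals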

Key Takeaways

Hopefully, this blog provides some insight into how PBCs enable Product Operations to support data-driven decisions while avoiding common data pitfalls. By separating signal from noise, organisations can prevent costly errors like unnecessary resource allocation, misaligned strategies, or failing to act on genuine opportunities. In a data-rich world, PBCs are not just a tool for insights — they’re a safeguard against the financial and operational risks of misinterpreting data.

In terms of getting started, consider any of the metrics you look at now (or provide the organisation) as a Product Operations team. Think about how you differentiate signal from noise. What’s the story behind your data? Where should people be drawn to? How do we know when there are exceptional events or subtle shifts in our user behaviour? If you can’t easily tell or have different interpretations, then give PBCs a shot. As you grow more confident, you’ll find PBCs an invaluable tool in making sense of your data and driving product success.

If you’re interested in learning more about them, check out Wheeler’s book (I picked up mine for ~£10 on eBay) or if you’re after a shorter (and free!) way to learn as well as how to set them up with the underlying maths, check out the Deming Alliance as well as this blog from Tom Geraghty on the history of PBCs.

Outcome focused roadmaps and Feature Monte Carlo unite!

Shifting to focusing on outcomes is key for any product operating model to be a success, but how do you manage the traditional view on wanting to see dates for features, all whilst balancing uncertainty? I’ll share how you can get the best of both worlds with a Now/Next/Later X Feature Monte Carlo roadmap…

What is a roadmap?

A roadmap could be defined as one (or many) of the following:

Where do we run into challenges with roadmaps?

Unfortunately, many still view roadmaps as merely a delivery plan to execute. They simply want a list of Features and when they are going to be done by. Now, sometimes this is a perfectly valid ask, for example if efforts around marketing or sales campaigns are dependent on Features in our product and when they will ship. More often than not though, it is a sign of low psychological safety. Teams are forced to give date estimates when they know the least and are then “held to account” for meeting that date that is only formulated once, rather than being reviewed continuously based on new data and learning. Delivery is not a collaborative conversation between stakeholders and product teams, it’s a one-way conversation.

What does ‘good’ look like?

Good roadmaps are continually updated based on new information, helping you solicit feedback and test your thinking, surface potential dependencies and ultimately achieve the best outcomes with the least amount of risk and work.

In my experience, the most effective roadmaps out there find the ability to tie the vision/mission for your product to the goals, outcomes and planned features/solutions for the product. A great publicly accessible example is the AsyncAPI roadmap:

A screenshot of the AsyncAPI roadmap

Vision & Roadmap | AsyncAPI Initiative for event-driven APIs

Here we have the whole story of the vision, goals, outcomes and the solutions (features) that will enable this all to be a success.

To be clear, I’m not saying this is the only way to roadmap, as there are tonnes of different ways you can design yours. In my experience, the Now / Next / Later roadmap, created by Janna Bastow, provides a great balance in giving insight into future trajectory whilst not being beholden to dates. There are also great templates from other well-known product folk, such as Melissa Perri’s one here or Roman Pichler's Go Product Roadmap, to name a few. What these all have in common is that they tie vision, outcomes (and even measures) as well as the features/solutions planned for delivery into one clear, coherent narrative.

Delivery is often the hardest part though, and crucially how do we account for when things go sideways?

The uncertainty around delivery

Software development is inherently complex, requiring probabilistic rather than deterministic thinking about delivery. This means acknowledging that there are a range of outcomes that can occur, not a single one. To make informed decisions around delivery we need to be aware of the probability of that outcome occurring so we can truly quantify the associated “risk”.

I’ve covered in a previous blog about using a Feature Monte Carlo when working on multiple features at once. This is a technique teams adopt in understanding the consequences around working on multiple Features (note: by Feature I mean a logical grouping of User Stories/Product Backlog Items), particularly if you have a date/deadline you are working towards:

An animation of a feature monte carlo chart

Please note: all Feature names are fictional for the purpose of this blog
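If you haven’t come across the technique before, a Feature Monte Carlo is conceptually simple: repeatedly sample from your historical throughput to simulate how much could be finished by a date, then see how often each Feature’s remaining scope fits. A minimal Python sketch, with made-up throughput and Feature scopes purely for illustration:

import random

weekly_throughput = [3, 5, 2, 6, 4, 3, 5, 4]         # completed PBIs per week (historical)
remaining_items = {"Feature A": 8, "Feature B": 14}  # PBIs left per Feature (illustrative)
weeks_until_date = 4
trials = 10_000

for feature, remaining in remaining_items.items():
    hits = 0
    for _ in range(trials):
        # Simulate the weeks until the target date by sampling past throughput
        done = sum(random.choice(weekly_throughput) for _ in range(weeks_until_date))
        if done >= remaining:
            hits += 1
    print(f"{feature}: {hits / trials:.0%} chance of completion by the date")

A real Feature Monte Carlo also accounts for Features sharing the same team’s throughput and their order in the backlog; this sketch treats each Feature independently to keep the idea clear.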

Yet this information isn’t always readily accessible to stakeholders and means navigating to multiple sources, making it difficult to tie these Features back to the outcomes we are trying to achieve.

So how can we bring this view on uncertainty to our roadmaps?

The Now/Next/Later X Feature Monte Carlo Roadmap

The problem we’re trying to solve is how can we quickly and (ideally) cheaply create an outcome oriented view of the direction of our product, whilst still giving that insight into delivery stakeholders need, AND balance the uncertainty around the complex domain of software development?

This is where our Now/Next/Later X Feature Monte Carlo Roadmap comes into the picture.

We’ll use Azure DevOps (ADO) as our tool of choice, which has a work item hierarchy of Epic -> Feature -> Product Backlog Item/User Story. With some supporting guidance, we can make it clear what each level should entail:

An example work item hierarchy in Azure DevOps

You can of course rename these levels if you wish (e.g. OKR -> Feature -> Story) however we’re aiming to do this with no customisation so will stick with the “out-the-box” configuration. Understanding and using this setup is important as this will be the data that feeds into our roadmap.

Now let’s take a real scenario and show how this plays out via our roadmap. Let’s say we were working on launching a brand new loyalty system for our online eCommerce site, how might we go about it?

Starting with the outcomes, let’s define these using the Epic work item type in our backlog, and where it sits in our Now/Next/Later roadmap (using ‘tags’). We can also add in how we’ll measure if those outcomes are being achieved:

An example outcome focused Epic in ADO

Note: you don’t have to use the description field, I just did it for simplicity purposes!

Now we can formulate the first part of our roadmap:

A Now, Next, Later roadmap generated from ADO data

For those Epics tagged in the “Now”, we’re going to decompose those (ideally doing this as a team!) into multiple Features and relevant Product Backlog Items (PBIs). This of course should be done ‘just in time’, rather than doing it all up front. Techniques like user story mapping from Jeff Patton are great for this. In order to get some throughput (completed PBIs) data, the team are then going to start working through these and moving items to done. Once we have sufficient data (generally as little as 4 weeks’ worth is enough), we can then start to view our Feature Monte Carlo, playing around with the parameters involved:

A Feature Monte Carlo generated from ADO data

The real value emerges when we combine these two visuals. We can have the outcome oriented lens in the Now / Next / Later and, if people want to drill down to see where delivery of those Features within that Epic (Outcome) is, they can:

A now, next, later roadmap being filtered to show the Feature Monte Carlo

They can even play around with the parameters to understand just what would need to happen in order to make that Feature that’s at risk (Red/Amber) a reality (Green) for the date they have in mind:

A now, next, later roadmap being filtered to show the Feature Monte Carlo

It’s worth noting this only works when items in the “Now” have been broken down into Features. For our “Next” and “Later” views, we deliberately stop the dynamic updates as items at these horizons should never be focused on specific dates.

Similarly, we can also see where we have Features with 0 child items that aren’t included in the Monte Carlo forecast. This could be because they’re yet to be broken down, or because all the child items are complete but the Feature hasn’t yet moved to “done” — for example if it is awaiting feedback. The chart also highlights those Features that may not be linked to a parent Epic (Outcome):

A Feature monte carlo highlighted with Features without parents and/or children.

Using these tools allows our roadmap to become an automated, “living” document generated from our backlog that shows outcomes and the expected dates of the Features that can enable those outcomes to be achieved. Similarly, we can have a collaborative conversation around risk and what factors (date, confidence, scope change, WIP) are at play. In particular, leveraging the power of adjusting WIP means we can finally add proof to that agile soundbite of “stop starting, start finishing”.

Interested in giving this a try? Check out the GitHub repo containing the Power BI template then plug in your ADO data to get started…

Measuring value through portfolio alignment

Understanding and prioritising based on (potential) value is key to the success of any agile team. However, not all teams have value measures in place and often are just a component part of a delivery mechanism in an organisation. Here’s how we’re enabling our teams at ASOS Tech to better understand the value in the work they do…

Context

We’ve previously shared how we want people in our tech teams to understand the purpose in their work, rather than just blindly building feature after feature. In terms of our work breakdown structure, we use a four-level hierarchy of work items, with some flexibility around how that may look in terms of ‘standards’:

To bring this to life with an example, take our launch of ASOS Drops where the portfolio epic would represent the whole proposition/idea, with the child epic(s) representing the different domains/platforms involved:

Please note: more platforms were involved in this, this is just simplified for understanding purposes :)

We also want our teams to have a healthy balance of Feature work vs. that which is Hygiene/BAU/Tech Debt/Experimentation, with our current guidance around capacity being:

Team feature capacity will, in most instances, be work related to portfolio epics, as these are the highest priority for our organisation that tech is delivering. If someone can trace the work they are doing on a daily basis (at User Story/Product Backlog Item level) all the way to the portfolio, they should be able to see the outcomes we are striving for, the value they are delivering and ultimately how they are contributing towards ASOS’ strategy (which was consistent feedback in Vibe surveys as something our people in technology want). It is therefore a good proxy measure for (potential) value in helping teams understand just how much of their backlog complements the priorities for ASOS. This is where portfolio alignment comes into play.

Understanding portfolio alignment

Portfolio alignment, simply put, is how much of a team backlog traces all the way up to the priorities for delivery the organisation desires from its technology department.

To calculate it, we start with a team backlog at user story/product backlog item (PBI) level. Here we look at every item at this level to see if it has a parent Feature. We then look at those Features to see if they have a parent Epic. Finally, we look at those Epics to see if they have a parent Portfolio Epic.

To show a simplified example, imagine a backlog of 10 PBIs that had the following linkage:

This would have an alignment score of 10%, as 1/10 PBIs in the team backlog link all the way to the portfolio.

Even if a team backlog has good linkage at Feature and/or Epic level, it would still only receive a ‘count’ if it linked all the way. For example if this team improved their linkage like so:

This would still only result in an alignment of 10%, as only 1/10 PBIs link all the way to the top.

As we’re looking at this on a consistent basis across teams, platforms and domains, we look purely at count, as it would simply be impossible to do any sort of single calculation around complexity.
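As a rough sketch of the calculation (the nested-dictionary backlog is purely illustrative; in practice this comes from the parent links on work items):

def alignment_score(pbis):
    # Percentage of PBIs whose parent chain reaches a Portfolio Epic
    aligned = sum(
        1 for pbi in pbis
        if pbi.get("feature", {}).get("epic", {}).get("portfolio_epic") is not None
    )
    return 100 * aligned / len(pbis)

# One PBI links all the way up; nine do not
backlog = [{"feature": {"epic": {"portfolio_epic": "Drops"}}}] + [{}] * 9
print(alignment_score(backlog))  # 10.0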

Visualising portfolio alignment

The alignment starts at a team backlog at User Story/PBI level. Teams get two views. The top view is a line chart which shows, on a daily basis, what their backlog (any items that are yet to start and those that have started) alignment was on a particular date. The value on the far right shows their current alignment, with a summary number showing the average alignment over the selected period, as well as a trend line to see if it is improving:

Teams also have the ability to filter between seeing this view for the whole backlog or just those “in flight” (items that are in progress):

Finally, there is the table underneath this chart which details the full backlog and all those relevant parent-child links, with every item clickable to open it up in Azure DevOps.

We also have aggregated views for both a domain (a logical grouping of teams) and ASOS tech wide view:

Rollout across tech

Rolling this out initially was a hard sell in some areas. This was mainly due to how people can immediately react to viewing ‘their’ data. Careful messaging is needed around it not being a stick/tool to beat teams with, but a method to understand (and improve) alignment. Similarly, we were very clear in that it should never be at 100% and that there isn’t a single number to hit, as context varies. This way we are accounting for any type of Goodhart's Law behaviour we may see.

Similarly, to help team leads and leaders of areas understand where they could improve, as coaches we advised on what people might want to consider to improve their alignment, which predominantly wasn’t improving their linking, but rather deleting old stuff they were never going to do!

At times this was met with resistance, which was surprising as I always find deleting backlog items to be quite cathartic! However, showing teams a large number of items that had not been touched in months or added many months (and sometimes years!) ago did prompt some real reflection as to whether those items were truly needed.

Impact and outcomes

As a team, we set quarterly Objectives and Key Results (OKRs) to measure the impact we’re having across different areas in the tech organisation, ideally demonstrating a behavioural change. This was one of our focuses, particularly around demonstrating where there has been significant improvements and behavioural change from teams:

With anything agility-related, it’s important to recognise the innovators and early adopters, so those that had seen a double-digit improvement were informed/celebrated, with positive feedback from leaders in different areas that this was the right thing to be doing:

Portfolio alignment also now helps our teams in self-assessing their agility, as one of our four themes (more to come on this in a future post!):

This way, even our teams that struggle to measure the value in their work at least have a starting point to inform them how they are contributing to organisational priorities and strategy.

Seeking purpose – intrinsic motivation at ASOS

Autonomy, mastery and purpose are the three core components to intrinsic motivation. How do you embed these into your technology function/department? Read on to explore these concepts further and how we go about it at ASOS…

The book Drive by Daniel Pink is an international bestseller and a commonly referenced book around modern management. If you haven’t read it, the book essentially looks at what motivators are for people when it comes to work.

Some may immediately assume this is financial, which, to a certain degree, is true. The research in the book explains that for simple, straightforward work, financial rewards are indeed effective motivators. It also explains that these are ‘external’ motivational factors. Motivation from these external factors is classed as extrinsic motivation. These factors only go so far and, in a complex domain such as software development, quickly lose effectiveness once pay is fair.

This is where we look at the second motivational aspect: intrinsic motivation. When pay is fair and work is more complex, this is when the behaviour of the person is motivated by an inner drive that propels them to pursue an activity. Pink explains how intrinsic motivation is made up of three main parts:

  • Autonomy — the desire to direct our own lives

  • Mastery — the desire to continually improve at something that matters

  • Purpose — the desire to do things in service of something larger than ourselves

What drives us: autonomy + mastery + purpose

Source

When people are intrinsically motivated, they do their best work. So how do we try to bring intrinsic motivation to our work in Tech @ ASOS?

Autonomy

Autonomy is core to all our teams here at ASOS. From a technical perspective, teams have aligned autonomy around technologies they can leverage. We do this through things such as our Patterns and Practices group, which looks to improve technical alignment across teams and agree on patterns for solving particular problems. We then communicate these patterns both internally and externally, which makes our software safer to operate and reduces re-learning effort.

As a team of Agile Coaches, we uphold this autonomy principle by not prescribing a single way of working for any of our teams. Instead, we give them the freedom to choose how they want to work, whilst guiding them to ensure that way of working aligns with agile values and principles.

Comic Agilé of a leader telling teams they are self-organising

Not like this!

From books such as Accelerate, we know that enforcing standardised working practices upon teams actually reduces learning and experimentation. When your target market is fashion-loving 20-somethings, teams simply must be able to innovate and change without what Marty Cagan would call ‘process people’ imposing constraints on how teams must work. You cannot inhibit yourselves by mandating one single way of working.

To bring this to life with a simple example, we don’t have any teams that use all elements of Scrum as per the guide. Do we have teams that take inspiration and practices from Scrum? Yes. Can they change/get rid of practices that don’t add value? Of course. Do they also blend practices from other frameworks too? Absolutely! For instance, we have plenty of teams who work in sprints (Scrum), love pairing (eXtreme Programming) and use flow metrics (Kanban) to continuously improve, all whilst retaining a core principle of “you build it, you run it” (DevOps). Autonomy is therefore an essential factor for all our technology teams.

Enough about autonomy… what about mastery?

Mastery

Mastery exists in a few forms for our teams. A core approach to mastery our teams use is our Fundamentals. These are measures we use to drive continuous improvement and operational excellence across our services. Our own Scott Frampton discusses the history and evolution of this in detail in this series. In short, it comprises four pillars:

  1. Monitoring & Observability

  2. Performance & Scalability

  3. Resiliency

  4. Deployability

Teams self-assess and use this as a compass (rather than a GPS) to guide them in their improvement efforts. This means we are aligned in “what good looks like” when engineering and operating complex systems.

Sample view of engineering excellence

The levels of the respective measures are continually assessed and evolve quarter to quarter, in line with industry trends, as well as patterns and practices, so teams never “sit still” or think they have achieved a level of mastery that they will never surpass.

Similarly, mastery is something that is encouraged and celebrated through our internal platforms and initiatives. ASOS Backstage is our take on Spotify Backstage, another tool in our toolbox to better equip our teams in understanding the software landscape at ASOS. We also have our Defenders of the Wheel group — a collection of engineers who work to support the development and growth of new ASOS Core libraries and internal tools.

Screenshot of ASOS Backstage

To encourage mastery, individuals across Tech are able to achieve certifications relevant to their role(s) and/or contributions to these internal platforms/groups:

Backstage badges

This means that there are frequent sources of motivation for individuals in our teams from a mastery perspective.

What about the final aspect of intrinsic motivation, purpose?

Purpose

This is probably the most challenging area for our teams, as it is often outside of their control. As an organisation, we’re very clear on what our vision and purpose are:

Our vision is to be the world’s number one fashion destination for fashion-loving 20-somethings

Source: ASOS PLC

Similarly, our CEO José reminded us all about what makes ASOS the organisation it is, covering our purpose, performance and passion at a recent internal event:

José talking purpose, performance and passion at Town Hall

Source: José’s LinkedIn

The challenge is that, in a tech organisation, this doesn’t always translate easily into the specific work an individual and/or team is doing. If a team is working on a User Story, for example, it’s not an unfair question for them to ask “Why am I doing this?” or “What impact will this have?” or even “Where is the value?”. One of our efforts around this has been introducing and improving what we call ‘Semester Planning’, which Paul Taylor will cover in a future post. The other main effort has been around portfolio transparency.

Portfolio transparency, as a concept, is essentially end-to-end linkage in the work anyone in a team is doing so that they, as an individual, can understand how this aligns with the goals and strategy of the organisation. Books such as Sooner, Safer, Happier by Jonathan Smart bring this concept to life in visuals like so:

Strategic objective diagram

Source: Sooner Safer Happier

The key to this idea is that an individual should be able to understand the value in the work they are doing. This value should be expressed as simply as possible – i.e. not via some Fibonacci voodoo or ambiguous mathematical formula (e.g. SAFe’s version of WSJF). The acid test is whether anyone in the tech organisation can understand how a given item (story, feature, epic) contributes to the goals of the organisation and the value this brings. My own self-imposed constraint is that they should achieve this in fewer than five clicks.

At its core, this really is just about better traceability of work end to end. We have high-performing teams who regularly showcase technical excellence, but how does that fit into the big picture?
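To make this concrete, below is a minimal sketch of end-to-end linkage as a data structure. The class, field names and example items are purely illustrative assumptions on my part, not our actual tooling or data model; the point is simply that walking from a Story up to its Portfolio Epic (and the value statement attached there) should take only a handful of hops.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkItem:
    title: str
    item_type: str  # e.g. "User Story", "Feature", "Epic", "Portfolio Epic"
    parent: Optional["WorkItem"] = None
    value_statement: Optional[str] = None  # populated at the strategic level

def trace_to_strategy(item: WorkItem) -> list[str]:
    """Walk up the hierarchy, recording each 'click' towards the strategic goal."""
    path = []
    current: Optional[WorkItem] = item
    while current is not None:
        path.append(f"{current.item_type}: {current.title}")
        if current.value_statement:
            path.append(f"  Value: {current.value_statement}")
        current = current.parent
    return path

# Hypothetical example: a Story should reach its Portfolio Epic (and the
# value it serves) within the 'five clicks' acid test.
portfolio_epic = WorkItem("Improve checkout conversion", "Portfolio Epic",
                          value_statement="Supports goal: grow active customers")
epic = WorkItem("One-click payments", "Epic", parent=portfolio_epic)
feature = WorkItem("Save card details", "Feature", parent=epic)
story = WorkItem("As a shopper, I can store my card", "User Story", parent=feature)

for step in trace_to_strategy(story):
    print(step)
```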

With the work we have been doing, a team can now take a User Story that they will be working on and, within five clicks, understand the value this brings and the strategic alignment to the goals of the organisation (note these numbers have been modified for the purpose of this blog):

Sample hierarchy of User Story to Feature to Epic to Portfolio Epic
Sample epic in Azure Devops

*Note — not the actual £ values*

Sample products demo from previous user story

And this is what it looks like to you!

Of course, this is dependent on quality data entry! Not everything (yet!) in our portfolio contains this information; however, this is the first positive step in making the purpose and value of our work visible.

How do you do this in your organisation? Can teams easily see the value in what they are doing? I’d love to hear your thoughts in the comments below…

Weeknotes #40

Product Management Training

This week we had a run-through from Rachel of our new Product Management training course that she has put together for our budding Product Managers. I really enjoyed going through it as a team (especially using our co-working space in More London) and seeing the actual content itself.

Credits: Jon Greatbatch for photo “This can be for your weeknotes”

What I really liked about the course is that attendees are going to be very ‘hands-on’ during the training, applying various techniques that PdMs use to a case study of Delete My Data (DMD) throughout. Having an ‘incremental’ case study that builds through the day is something I’ve struggled with when putting together material in the past, so I’m glad Rachel has managed it here. We’ve earmarked 28th Jan for the first session, with a combination of our own team and those moving into Product Management being the ‘guinea pigs’.

2019 Reflections

This week has been a particularly challenging one, with lots of roadblocks in the way of moving forward. A lack of alignment in new teams on future direction, and a lack of communication to the wider function around our move to new ways of working, mean it feels like we aren’t seeing the progress we should be, or creating a sense of urgency. Whilst achieving big through small certainly holds true, with change initiatives it can feel like you are moving too slowly, which is the current lull we’re in. After a few days feeling quite down, I took some time out to reflect on 2019 and what we have achieved, such as:

  • Delivering a combined 49 training courses on Agile, Lean and Azure DevOps

  • Training a total of 789 PwC staff across three continents

  • Becoming authorised trainers to offer an industry recognised course

  • Actually building our first proper CI/CD web apps as PoCs

  • Introducing automated security tools and (nearly) setting up ServiceNow change management integration to #TakeAwayTheExcuses for not adopting Agile

  • Hiring our first ever Product Manager (Shout out Rachel)

  • Getting our first ever Agile Delivery Manager seconded over from Consulting (Shout out Stefano)

  • Winning a UK IT Award for Making A Difference as a team

  • Gaining agreement from leadership on moving from Project to Product, as part of our adoption of new ways of working

All in all, it’s fair to say we’ve made big strides forward this year; I just hope the momentum continues into 2020. A big thank you from me goes to Jon, Marie, James, Dan, Andy, Rachel and Stefano for not just their hard work, but for being constant sources of inspiration throughout the year.

Xmas Break

Finally, I’ll be taking a break from writing these #Weeknotes till the new year. Even though I’ll be working over the Christmas period, I don’t think there’ll be too much activity to write about! For anyone still reading this far in(!), have a great Christmas and New Year.

Weeknotes #38

Authorized Instructors

This week, we had our formal course accreditation session with ICAgile, where we reviewed our 2-day ICAgile Fundamentals course, validating that it meets the desired learning objectives and that the general course structure sufficiently balances theory, practical application and attendee engagement. I was extremely pleased when we were given the rubber stamp of approval by ICAgile, as well as getting some really useful feedback to make the course even better, in particular to include more modules aligned to the Training from the BACK of the Room (TBR) technique.

It’s a bit of a major milestone for us as a team when you consider that this time last year most of the training we were doing was just starting, with most of the team running it for the first time. It’s testimony to the experience we’ve gained, and the incremental improvements we’ve made based on feedback, that four of us are now authorized to offer a certified course from a recognised industry body. A new challenge in course delivery is the organisational impediment of booking meeting rooms(!) — but with two sessions in the diary for January and February next year, I’m looking forward to some more in-depth learning and upskilling for our PwC staff.

Product Management

As I mentioned last week, Rach Fitton has recently joined us as a Product Manager, looking to build that capability across our teams. It’s amazing how quickly someone with the right experience and mindset can make an impact; I already feel like I (and others) are learning a great deal from her. Despite some conversations so far where I feel colleagues haven’t given her much to work with, she’s always given them at least one thing that can inspire them or move them further along on the journey.

A good example is the visual below, which she shared with me and others, covering all the activities and considerations that a Product Manager typically undertakes:

Things like this are a great source of information for people, and it really emphasises for me just how key this role is going to be in our organisation. It’s great to have someone far more experienced in the product space than myself to not only validate my thoughts, but also critique any of the work we do, as Rachel gives great, actionable feedback. I’m hoping soon we can start to get “in the work” with more of the teams and get some of our people more comfortable with the areas above.

Next Week

Next week we plan to start looking at structuring one of our new services and the respective product teams within it, aiming for a launch in the new year. I’m also looking forward to connecting with those in the PwC Sweden team, who are starting their own journey towards new ways of working, and to collaborating on another project to product journey.

Weeknotes #37

Ways of Working

This week we had our second sprint review as part of our Ways of Working (WoW) group. The review went well, with lots of discussion and feedback which, given we aren’t producing any “working software”, is for me a really good sign. We focused a lot on change engagement this sprint, working on the comms side (producing ‘potentially releasable comms’) as well as identifying/analysing the pilot areas where we really want teams to start moving towards this approach. A common theme appears to be a lack of a product lens on the services being offered, and a lack of portfolio management to ensure WIP is managed and work aligns with strategy. If we can start to tackle this then we should have some good social proof for those who may be finding adoption slightly more tricky.

We agreed to limit our pilot to four particular areas for now, rather than spreading ourselves too thinly across multiple teams. Fingers crossed we can start to have some impact this side of the new year.

New Joiners

I was very pleased this week to finally have Rachel, our new Product Manager, join us. It feels like an age since we interviewed her for the role, and we’ve been doing our best to hold people back to make sure we aren’t veering too far from the Product Management capability we want her to build. It’s great to have someone who is a very experienced practitioner, rather than someone who just relies on the theory. I often find that the war stories, and the times when stuff hasn’t quite worked out, are where the most learning occurs, so it’s great to have her here in the team to help us all.

Another positive note for me came after walking her through the WoW approach, as she fed back not only that it makes sense but that it has her excited :) It’s always nice to get validation from a fresh pair of eyes, particularly from someone as experienced as Rachel. I’m really looking forward to working with and learning from her.

With Rachel joining us as a Product Manager, and Dave, who joined us roughly a month ago as a DevOps Engineer, it does feel like we’re turning a corner in the way we’re recruiting, as well as in the day-to-day move towards new ways of working. I’m extremely appreciative of both of them for taking a risk in wanting to be part of something that will be very challenging but also (hopefully!) very rewarding.

Team Health Check

We’ve made some good progress this week with our Team Health Check App, which will help teams identify areas of their ways of working that may need improvement. With a SQL DB now populated with previous results, we can connect to a source where the data is automatically updated, as opposed to manually copying/pasting from Google Sheets -> Power BI. The next step is to get it fully working in prod with a nicer front end, release it to some users to actually use, and write a short guidance document on how to connect to it.
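For illustration, here is a rough sketch of the storage idea, using SQLite so it is self-contained; the real app’s database, table and column names (and the 1-3 traffic-light scoring scale) are assumptions on my part, not the actual schema.

```python
import sqlite3

conn = sqlite3.connect("health_check.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS responses (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        team      TEXT NOT NULL,
        question  TEXT NOT NULL,
        score     INTEGER NOT NULL CHECK (score BETWEEN 1 AND 3),
        submitted TEXT NOT NULL  -- ISO date of submission
    )
""")

# Each anonymous submission lands as one row per question, so a reporting
# tool can connect and aggregate without any copy/paste step.
conn.execute(
    "INSERT INTO responses (team, question, score, submitted) VALUES (?, ?, ?, ?)",
    ("Vulcan", "Delivering value", 3, "2019-11-28"),
)
conn.commit()
conn.close()
```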

Well done again to all our grads for taking this on as their first Agile delivery; they’re definitely learning as they go but thankfully taking each challenge/setback as a positive. Fingers crossed it’s something we can release at Thursday’s sprint review!

Next Week

Next week we have our ICAgile course accreditation session, hopefully giving us the rubber stamp as accredited trainers to start offering our 2-day ICAgile Fundamentals course. It also means another trip to Manchester for myself, running what I *think* will be my last training session of 2019. Looking forward to delivering the training with Andy from our team for our people in Assurance!

Weeknotes #34

Team Areas

A tell-tale sign for any Agile practitioner is normally a walk of the office floor. If an organisation claims to have Agile teams, a giveaway is usually whether there are team areas with plenty of visual radiators around their ways of working.

With my trip to Manchester this week, I was really pleased to see that one of our teams, Vulcan, had taken to claiming their own area and making the work they do, and the management of it, highly visible.

This is great to see as, even with the digital tooling we have, it’s important for teams (within a large organisation) to have a sense of purpose and identity, which I’d argue is impossible without something physical/a dedicated area for their work. These are the things that, when going through change, provide inspiration and encourage you to keep going, knowing that, certainly with some teams, the message is landing.

Product Manager Hat

With our new graduate intake in IT, one of the things various teams were asked to put together was a list of potential projects for them to work on. 

A niggling issue I’ve had is our Team Health Check tool which, taking inspiration from the Spotify Squad Health Check, uses anonymous Google Form responses that are then visualised in Power BI.

This process, though, is highly manual, with a Google Apps Script converting the form responses into a BI-tool-friendly format that is then copied/pasted into a Power BI table (a rough sketch of the reshaping is below). The project for the graduates is therefore to build a web version, with a database to store responses for automated reporting. I’ve been volunteered as the Product Manager :D which meant this week even writing some stories and BDD acceptance criteria! Looking forward to seeing how creative they can be, and it’s a chance for them to really apply some of the learnings from their recent training.
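For a flavour of the reshaping involved, here is a small pandas sketch of the wide-to-long conversion the Apps Script currently handles; the question names and scores are invented for illustration, not the real survey.

```python
import pandas as pd

# One row per form submission (wide) becomes one row per
# team/question/score (long), which is far friendlier for a BI tool.
raw = pd.DataFrame({
    "Timestamp": ["2019-11-04", "2019-11-04"],
    "Team": ["Vulcan", "Apollo"],
    "Delivering value": [3, 2],
    "Fun": [2, 3],
    "Health of codebase": [1, 2],
})

tidy = raw.melt(
    id_vars=["Timestamp", "Team"],
    var_name="Question",
    value_name="Score",
)
print(tidy)
```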

Digital Accelerator Feedback

We received feedback from both of the Digital Accelerator sessions we ran recently. With an average score of 4.43/5, we were one of the highest-rated sessions people attended. We actually received the first batch of feedback before the second session, which was great as it allowed us to make a couple of tweaks to exercises and delete slides that we felt weren’t needed. Some highlights in terms of feedback:

Good introduction into agile concept and MVP. Extremely engaging and persuasive games to demonstrate concept! Lots of fun!

All of it was brilliant and also further reading is great to have

This was a great module and something I want to take further. This was the first time I heard of agile and Dan broke down exactly what it was in bite size pieces which was really helpful.

So much fun and energy created through very simple activities. It all made sense — easily relatable slides. Thought Marie did a great job

Really practical and useful to focus on the mindset not the methodology, which I think is more applicable to this role

I’ve heard the term agile a lot in relation to my clients so was really useful to understand this broken down in a really basic and understandable way and with exercises. This has led me to really understand the principles more than through reading I’ve done.

Very interesting topic, great presentation slides, games, engaging presenter

Very engaging and interesting session. Particularly liked the games and the story boarding.

Very engaging and impactful session. The activities really helped drive home the concepts in an accessible way

Best.Session.Ever.

Thanks to Andy, Marie, Stefano, James and Dan for running sessions, as well as Mark M, Paul, Bev, Ashley, Tim, Anna, Mark P, Gurdeep and Brian for their assistance with running the exercises.

Next Week

Next week I’ll be heading out to Dubai to our Middle East office to run a couple of training sessions for teams out there. A welcome break from the cold British weather — looking forward to meeting new faces and starting their Agile journey, as well as catching up with those who I trained last time!

Weeknotes #31

OKRs

We started the week off getting together and formally reviewing our Objectives and Key Results (OKRs) for the last quarter, as well as setting them for this quarter.

Generally, this quarter has gone quite well when you check against our key results, with the only slight blips being around 1-click deployment and cycle time at portfolio level.

A hypothesis I have is that, due to a misunderstanding where people felt they had to hold a retrospective before moving something to “done”, we inadvertently caused cycle times to elongate. With us correcting this and re-emphasizing the need to focus on small batches, the goal for this quarter really will be to get cycle time as close as possible to our 90-day Service Level Expectation (SLE) at portfolio level. Alongside this, we will be putting some tangible measurements around spinning up new, dedicated product teams and building out our lean offering.

Prioritisation

Prioritisation is something that is essential to success. Whether at strategic, portfolio, program or team level, priorities need to be set so that people have a clear sense of purpose, a goal to work towards and focus, and so that ultimately we’re working on the right things. Prioritisation is also a very difficult job; too often we rely on HiPPO (Highest Paid Person's Opinion), First In, First Out (FIFO) or sheer gut feel. In previous years, I provided teams with this rough, Fibonacci-esque approach to formulating a ‘business value’ score, then dividing it by effort to get an ‘ROI’ number (a sketch of the arithmetic follows the list):

Business Value Score

10 — Make current users happier

20 — Delight existing users/customers

30 — Upsell opportunity to existing users/customers

50 — Attract new business (users, customers, etc.)

80 — Fulfill a promise to a key user/customer

130 — Aligns with PwC corporate/strategic initiative(s)

210 — Regulatory/Compliance (we will go to jail if we don’t do it)
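To show the arithmetic, here is a small sketch of how the scoring could be operationalised, assuming the applicable value categories are summed before dividing by effort; the tags, example item and effort figure are all made up for illustration.

```python
# Hypothetical tags mapping to the scores listed above.
BUSINESS_VALUE = {
    "happier_users": 10,
    "delight_customers": 20,
    "upsell": 30,
    "new_business": 50,
    "key_promise": 80,
    "strategic_initiative": 130,
    "regulatory": 210,
}

def roi(value_tags: list[str], effort: float) -> float:
    """Sum the matching business value scores and divide by effort."""
    return sum(BUSINESS_VALUE[tag] for tag in value_tags) / effort

# Example: an item that delights customers and aligns with a strategic
# initiative, at an effort estimate of 13 (Fibonacci-esque, of course).
print(round(roi(["delight_customers", "strategic_initiative"], effort=13), 1))  # 11.5
```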

It’s fairly “meh” I feel, but it was a proposed stopgap between them doing nothing and something that used numbers. Rather bizarrely, the Delight existing users/customers aspect was then changed by people to User has agreed deliverable date — which always irked me, mainly as I cannot see how this has anything to do with value. Sure, people may have a date in mind, but that is to do with urgency, not value. Unfortunately, a date-driven (not data-driven) culture is still very prevalent. Just this week, for example, we had someone explain how an option was ‘high priority’ because it was going to be delivered in the next three months(!).

Increasingly, a simple, lightweight approach to prioritisation I’m gravitating towards, and one that is likely to get easier buy-in, is Qualitative Cost of Delay.

Source: Black Swan Farming — Qualitative Cost of Delay

Cost of Delay allows us to combine value AND urgency, which is something we’re generally not very good at. Ideally, this would be quantified so we would all be talking a common language (i.e. not some weird dark voodoo such as T-shirt sizing, story points or Fibonacci); however, you tend to find people fear numbers. My hope is that this way we can get some of the benefits of Cost of Delay, whilst planting the seed of gradually moving to a more quantified approach.
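As a taste of what the quantified version could look like, here is a tiny sketch of CD3 (Cost of Delay Divided by Duration), the scheduling approach from the same Black Swan Farming material; every figure below is invented purely for illustration.

```python
options = [
    # (name, cost of delay in £ per week, estimated duration in weeks)
    ("Option A", 5_000, 2),
    ("Option B", 20_000, 10),
    ("Option C", 9_000, 3),
]

# Schedule highest CD3 first: the most value lost per week of delay,
# relative to the capacity the option consumes.
for name, cod, weeks in sorted(options, key=lambda o: o[1] / o[2], reverse=True):
    print(f"{name}: CD3 = {cod / weeks:,.0f} per week of duration")
```

Run as-is, Option C (3,000) jumps the queue ahead of Option A (2,500) and Option B (2,000), even though Option B has the largest raw Cost of Delay: that is the value-and-urgency trade-off made visible.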

Next Week

Next week is a big week for our team. We’re running the first of two Agile Introduction sessions as part of the firm’s Digital Accelerator program. With four sessions running in parallel and roughly 40 attendees in each, we’ll be training 160 people in a 90-minute session. Looking forward to it, but also nervous!

Weeknotes #20

Project to Product

This week we’ve been spending a number of sessions building out what our future IT delivery model looks like, with the focus on shifting from project thinking to aligning product teams around the services we offer to the organisation. Building on last week’s notes, the current working definition we’ve gone with on the Product front is the following:

Product — a product is a tangible item that closely meets the demands of a particular market and yields enough value to justify its continued existence. 

It is characterised by its benefits, features, functions and uses that satisfy needs of the consumer.

With this, we feel some taxonomy is also needed around what types of products we offer, with the below being what we currently define as offerings:

Now, we’re still learning through incrementally moving towards the model, but kudos must be given to the Government Digital Service (GDS) for the material they have available online. The role descriptions and skills needed for positions such as Delivery Manager, Product Manager, Business Analyst and Service Owner are all roles we envisage featuring prominently in our new teams and services, so it’s great to have material out there to help build appropriate roles and (more importantly) career paths for our people.

The biggest area of challenge I foresee at the moment is Product Management, this being a skill we’re not great at across the whole organisation. A blend of hiring experienced PMs and upskilling the majority is likely the route we’ll take to address this. Look out for future posts on the rest of the delivery model as we learn and iterate over the coming months/years…

Goal Setting

With it being summer holiday season, it’s the annual “goal setting” period for everyone in the organisation. I’ve found it hard to formulate my goals this year: not so much what I want our focus as a team to be on, but more the measurement aspect that comes with goals. My goals are currently a blend of launching new ways of working (through product teams), reducing batch size/managing portfolio WIP around ‘old ways of working’ (projects), moving towards certified trainers within our team and, finally, a personal goal around being a career coach for a couple of our newly promoted managers in IT.

The notion of setting annual goals is one I find problematic as a practitioner, drawing parallels with the big up-front planning of a waterfall world. Thankfully, I work with a career coach who is pragmatic about my goals and allows me to tweak them where appropriate.

It was great to see a member of our leadership team share their goals around committing to have N teams (exact number still TBC) implement continuous delivery within the performance year, especially as that really is the ‘hard stuff’.

Reflection

This week, myself and Andy had a call with a member of our Assurance team who wanted a demonstration of some of the things we had implemented as part of formulating the Lean PfMO within our IT organisation. The call went really well, with them being impressed by some of the things we’d done around prioritisation of work using our rough RoI calculator, the visualisation of work using a portfolio kanban board and the empirical nature of monitoring progress using flow metrics.

It made me reflect on when myself and Andy first started working together a couple of years ago, how our working relationship has developed, and the improvement in his own understanding and knowledge of Agile ways of working and portfolio management. With my primary focus being on where I feel we need to be and how far away we are from that, it’s easy to forget just how far we’ve come. Some of the things we have implemented so far have gone really well and are a far cry from the large majority of people in our organisation who still rely on techniques such as spreadsheets to manage their work. Much like these posts, reflecting on how far you’ve come can help motivate, in particular during times of frustration.

Next Week

Next week I’m looking forward to meeting one of our new Consulting directors, who is focused in particular on Agile ways of working using SAP and cloud in the FS sector. I’ve already helped him this week get a Kanban board set up for his current team (rare to get someone proactively wanting one!), so hopefully it can be a mutually beneficial chat :)

With a few gaps in the week, I’m hopeful I can finally make a start on our 2-day ICAgile Certified Agile Fundamentals course, which we plan to start offering from October this year.