Story Pointless (Part 2 of 3)

The second in a three-part series on moving away from Story Points and how to introduce empirical methods within your team(s). 

Part one refamiliarised us with what story points are, gave a brief history lesson and some facts about them, covered the pitfalls of using them and looked at alternative methods for single item estimation.

Part two looks at probabilistic vs. deterministic thinking, the use of burndowns/burnups, the flaw of averages and Monte Carlo simulation for multiple item estimation.

Forecasting

You’ll have noticed in part one I used the word forecast a number of times, particularly when it came to the use of Cycle Time. It’s useful to clarify some meaning before we proceed.

What do we mean by a forecast?

Forecast — predict or estimate (a future event or trend).

What does a forecast consist of?

A forecast is a calculation about the future that includes both a range and a probability of that range occurring.

Where do we see forecasts?

Everywhere!

Sources: FiveThirtyEight & National Hurricane Center

Forecasting in our context

In our context, we use forecasting to answer the key questions of:

  • When will it be done?

  • What will we get?

Which we typically do by taking the total remaining work and dividing it by our average rate of progress (velocity/throughput).

We then visualise this as a burnup/burndown chart, such as the example below. Feel free to play around with the inputs:

https://observablehq.com/embed/@nbrown/story-pointless?cells=viewof+work%2Cviewof+rate%2Cchart
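For the curious, the arithmetic behind this deterministic approach is trivially simple; here is a minimal sketch in Python, with made-up numbers rather than the embedded example's inputs:

```python
# Deterministic burnup/burndown forecast: single-value inputs, single-value answer.
# Numbers are hypothetical, for illustration only.
remaining_items = 60   # work left in the backlog
avg_throughput = 5     # average items finished per week

weeks_to_done = remaining_items / avg_throughput
print(f"Forecast: done in {weeks_to_done:.0f} weeks")  # -> 12 weeks
```

One number in, one number out. Which, as we'll see next, is exactly the problem.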

All good right? Well not really…

The problems with this approach

The big issue with this approach is that the two inputs into our forecast(s) are highly uncertain; both are influenced by:

  • Additional work/rework

  • Feedback

  • Delivery team changes (increase/decrease)

  • Production issues

Neither input can be known exactly upfront, nor can it simply be taken as a single value, due to their variability.

And don’t forget the flaw of averages!

Plans based on average, fail on average (Sam L. Savage — The Flaw of Averages)

The above approach means forecasting using average velocity/throughput which, at best, gives you the odds of a coin toss!

Source: Math with Bad Drawings — Why Not to Trust Statistics
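To see the coin toss for yourself, here's a toy simulation (all numbers invented): resample a team's weekly throughput history and check how often the average-based plan date is actually met.

```python
import random

random.seed(7)

# Hypothetical weekly throughput history (items finished per week).
history = [5, 4, 4, 3, 5, 4, 0, 1, 4, 3, 5, 2]
backlog = 40

avg = sum(history) / len(history)  # ~3.3 items/week
plan_weeks = backlog / avg         # the "average" plan: 12 weeks

trials, hits = 10_000, 0
for _ in range(trials):
    done = weeks = 0
    while done < backlog:
        done += random.choice(history)  # resample a historical week
        weeks += 1
    if weeks <= plan_weeks:
        hits += 1

print(f"Average-based plan: {plan_weeks:.1f} weeks")
print(f"Simulated chance of meeting it: {hits / trials:.0%}")  # hovers around 50%
```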

Using averages as inputs to any forecast is fraught with danger, particularly as it is not transparent to those consuming the information. If it were, it would most likely lead to a different type of conversation:

But this is Agile — we can’t know exactly when something will be done!?!…

Source: Jon Smart — Sooner, Safer, Happier

Estimating when something will be done is particularly tricky in the world of software development. Our work predominantly sits in the domain of ‘Complex’ (using Cynefin) where there are “unknown unknowns”. Therefore, when someone asks, “when will it be done?” or “what will we get?” — when we estimate, we cannot give them a single date/number, as there are many factors to consider. As a result, you need to approach the question as one which is probabilistic (a range of possibilities) rather than deterministic (a single possibility).

Forecasts are about predicting the future, but we all know the future is uncertain. Uncertainty manifests itself as a multitude of possible outcomes for a given future event, which is what science calls probability.

To think probabilistically means to acknowledge that there is more than one possible future outcome which, for our context, means using ranges, not absolutes.

Working with ranges

Communicating such a wide range to stakeholders is neither advisable nor helpful. To account for this, we need an approach that allows us to simulate lots of different scenarios.

The Monte Carlo method is a technique that uses statistical sampling to determine probabilities. Monte Carlo Simulation (MCS) is one implementation of the Monte Carlo method, in which a probabilistic model is used to describe a real-world system. The model translates the uncertainties (probabilities) of its inputs into uncertainties in its outputs (results).

This model is run a large number (hundreds/thousands) of times, resulting in many separate and independent outcomes, each representing a possible "future". These results are then visualised as a probability distribution of possible outcomes, typically in a histogram.

TL;DR: this is getting nerdy, so please simplify

We use ranges (not absolutes) as inputs in the amount of work and the rate we do work. We run lots of different simulations to account for different outcomes (as we are using ranges).

So instead of this:

https://observablehq.com/embed/@nbrown/story-pointless?cells=viewof+work%2Cviewof+rate%2Cchart

We do this:

https://observablehq.com/embed/@nbrown/story-pointless?cells=chart2%2Cviewof+numberOfResultsToShow%2Cviewof+paceRange%2Cviewof+workRange

However, this is not easy on the eye! 

So what we then do is visualise the results in a histogram, showing the distribution of the different outcomes.

We can then attribute percentiles (aka a probability of that outcome occurring) to the information. This allows us to present a range of outcomes and probability of those outcomes occurring, otherwise known as a forecast.
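To make that concrete, here is a minimal Monte Carlo Simulation sketch in Python. The throughput history and backlog range are hypothetical; in practice you would feed in your own team's data:

```python
import random

random.seed(42)

# Ranged inputs (hypothetical) - swap in your own team's data.
weekly_throughput = [2, 5, 3, 0, 4, 6, 3, 2, 5, 1, 4, 3]  # historical samples
backlog_range = (30, 45)  # remaining items: uncertain due to rework/feedback

def simulate_once() -> int:
    """One possible 'future': weeks until a sampled backlog is finished."""
    backlog = random.randint(*backlog_range)
    done = weeks = 0
    while done < backlog:
        done += random.choice(weekly_throughput)  # resample a historical week
        weeks += 1
    return weeks

outcomes = sorted(simulate_once() for _ in range(10_000))

# Attribute percentiles: "there is an 85% chance we're done within N weeks".
for pct in (50, 70, 85, 95):
    print(f"{pct}th percentile: {outcomes[int(len(outcomes) * pct / 100)]} weeks")
```

Plotting `outcomes` as a histogram gives exactly the distribution described above.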

Meaning we can then move to conversations like this:

The exact same approach can be applied if we had a deadline we were working towards and wanted to know "what will we get?" or "how far down the backlog will we get?". The input to the forecast becomes the number of weeks you have, with the distribution showing the percentage likelihood against the number of items completed.
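A sketch of that variant (same hypothetical data as above): fix the number of weeks, then count the items completed in each simulated future.

```python
import random

random.seed(42)

weekly_throughput = [2, 5, 3, 0, 4, 6, 3, 2, 5, 1, 4, 3]  # hypothetical history
weeks_available = 8  # the deadline we're working towards

outcomes = sorted(
    sum(random.choice(weekly_throughput) for _ in range(weeks_available))
    for _ in range(10_000)
)

# Read the *lower* tail: in 85% of simulated futures we finish at least this many.
for pct in (50, 70, 85, 95):
    items = outcomes[int(len(outcomes) * (100 - pct) / 100)]
    print(f"{pct}% chance of completing at least {items} items")
```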

Tools to use

Clearly these simulations need software to run them. Fortunately there are a number of tools out there to help:

  • Throughput Forecaster — a free and simple-to-use Excel/Google Sheets solution from Troy Magennis that will do 500 simulations based on manual entry of data into a few fields. Probably the easiest and quickest way to get started; just make sure you have your Throughput and Backlog Size data.

  • Actionable Agile — a paid tool for flow metrics and forecasting that works as a standalone SaaS solution or integrated within Jira or Azure DevOps. It can run up to 1 million simulations, plus it gives a nice visual calendar date for the forecasts and their percentage likelihood.

Source: Actionable Agile Demo

  • FlowViz — a free Power BI template that I created for teams using Azure DevOps and GitHub Issues that generates flow metrics as well as Monte Carlo simulations. The histogram visual provides a legend which can be matched against a percentage likelihood.

Summary — multiple item forecasting

  • A forecast is a calculation about the future that includes both a range and a probability of that range occurring

  • Typically, we forecast using single values/averages — which is highly risky (odds of a coin toss at best)

  • Forecasting in the complex domain (Cynefin) needs to account for uncertainty (which using ‘average’ does not)

  • Any forecasts therefore need to be probabilistic (a range of possibilities) not deterministic (a single possibility)

  • Probabilistic Forecasting means running Monte Carlo Simulations (MCS) — simulating the future lots of different times

  • To do Monte Carlo simulation, we need Throughput data (number of completed items) and either a total number of items (backlog size) or a date we’re working towards

  • We should always continuously forecast as we get new information/learning, rather than forecasting just once

Ok but what about…

I'm sure you have lots of questions, as did I when first uncovering these approaches. To help you out I've collated the most frequently asked questions I get, which you can check out in part three.

— — — — — — — — — — — — — — — — — — — — — — — — — —

References:

Story Pointless (Part 1 of 3)

The first in a three-part series on moving away from Story Points and how to introduce empirical methods within your team(s).

Part one refamiliarises us with what story points are, gives a brief history lesson and some facts about them, covers the pitfalls of using them and looks at alternative methods for single item estimation.

What are story points?

Story points are a unit of measure for expressing an estimate of the overall effort (or some may say, complexity) that will be required to fully implement a product backlog item (PBI), user story or any other piece of work.

When we estimate with story points, we assign a point value to each item. Typically, teams will use a Fibonacci or Fibonacci-esque scale of 1, 2, 3, 5, 8, 13, 21, etc. Teams will often roll these points up as a means of measuring velocity (the sum of points for items completed that iteration) and/or planning using capacity (the number of points we can fit in an iteration).

Why do we use them?

There are many reasons why story points seem like a good idea:

  • The relative approach takes away the ‘date commitment’ aspect

  • It is quicker (and cheaper) than traditional estimation

  • It encourages collaboration and cross-functional behaviour

  • You cannot use them to compare teams — thus you should be unable to use ‘velocity’ as a weapon

A brief history lesson

Some things you might not know about story points:

Ron’s current thoughts on the topic

  • Story points are not (and never have been) mentioned in the Scrum Guide or viewed as mandatory as a part of Scrum

  • Story points originated from eXtreme Programming (XP)

    - The Chrysler Comprehensive Compensation (C3) project was the birth of XP

    - They originally estimated in “ideal days” and later, unitless Story Points

    - Ron Jeffries is credited with being the person who introduced them

  • James Grenning invented Planning Poker which was first publicised in Mike Cohn’s book Agile Estimating and Planning

  • Mountain Goat Software (Mike Cohn) own the trademark on planning poker cards and the copyright on the number sequence used for story point estimation

Problems with story points

What time would you tell your friends you’d meet them?

They do not speak in the language of our customer

Telling our customers and stakeholders something is a “2” or a “3” does not help when it comes to new ways of working. What if we did this in other industries — what would you think as a customer? Would you be happy?

They may encourage the right behaviours, but also the wrong ones too

Agile is all about collaboration, iterative execution, customer value, and experimentation. Teams can have ‘high velocity’ but be finishing everything on the last day of the sprint (not working at a sustainable pace/mini waterfalls) and/or be delivering the wrong things. Similarly, teams are pressured to ‘increase velocity’, which is easy to artificially inflate by making every 2 into a 3, every 3 into a 5, etc. — then we have increased our velocity!

They are hugely inconsistent within a team

Plot the actual time from starting to finishing an item (in days) against the story point estimate. Compare the variance for stories that had the same points estimate:

For this team (at Nationwide) we can see:

  • 1 point story — 1–59 days

  • 2 point story — 1–128 days

  • 3 point story — 1–442 days

  • 5 point story — 2–98 days

  • 8 point story — 1–93 days

They are a poor mechanism for planning / full of assumptions

Not only is velocity a highly volatile metric but it also encourages playing ‘Tetris’ with people in complex work. When estimating stories, teams purely take the story and acceptance criteria as written. They do not account for various assumptions (customer availability, platform reliability) and/or things that can go wrong or distract them (what is our WIP, discovery, refinement, production issues, bug-fixes, etc.) during an iteration.

Uncovering better ways

Agile has always been about “uncovering better ways”, after all it’s the first line of the Manifesto!

Given the limitations with story points, we should be open to exploring alternative approaches. When looking at uncovering new approaches, we need to be able to:

  • Forecast/Estimate a single item (PBI/User Story)

  • Forecast/Estimate our capacity at a sprint level (Sprint Backlog)

  • Forecast/Estimate our capacity at a release level (Release Backlog)

Source: Jon Smart — Sooner, Safer, Happier

Estimating when something will be done is particularly tricky in the world of software development. Our work predominantly sits in the domain of ‘Complex’ (using Cynefin) where there are “unknown unknowns”. Therefore, when someone asks, “when will it be done?” or “what will we get?” — when we estimate, we cannot give them a single date/number, as there are many factors to consider. As a result, you need to approach the question as one which is probabilistic (a range of possibilities) rather than deterministic (a single possibility).

Forecasts are about predicting the future, but we all know the future is uncertain. Uncertainty manifests itself as a multitude of possible outcomes for a given future event, which is what science calls probability.

To think probabilistically means to acknowledge that there is more than one possible future outcome which, for our context, this means using ranges, not absolutes.

Single item forecast/estimation

One of the two key flow metrics that feed into single item estimation is Cycle Time. Cycle time is the amount of elapsed time between when a work item started and when it finished. We visualise this on a scatter plot, like so:

On the scatter plot, each ‘dot’ represents a PBI/user story, plotted against the completion date and the time (in days) it took to complete. Our 85th percentile (highlighted in the visual) tells us that 85% of our stories are completed within n days or less. Therefore with this team, we can say that 85% of the time we finish stories in 26 days or less.
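Computing that percentile from raw cycle times is straightforward. Here is a sketch with invented data, chosen so the results line up with the figures quoted in this example (26 days at the 85th percentile, 7 at the 50th):

```python
import math

# Hypothetical cycle times (in days) for 20 completed items.
cycle_times = sorted([1, 2, 2, 3, 4, 5, 5, 6, 6, 7,
                      11, 12, 14, 18, 21, 24, 26, 30, 38, 59])

def percentile(data: list, pct: float):
    """Smallest value such that pct% of items took that many days or less."""
    rank = math.ceil(len(data) * pct / 100)  # items that must fall at or below
    return data[rank - 1]

print(f"85% of stories finish in {percentile(cycle_times, 85)} days or less")  # 26
print(f"50% of stories finish in {percentile(cycle_times, 50)} days or less")  # 7
```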

We can communicate this to customers and stakeholders by saying that:

“If we start work on this today, there is an 85% chance it will be done in 26 days or less”

This may be sufficient for your customer (if so — great!), however they may push for it sooner. If, for instance, with this team they wanted the story in 7 days, you can show them (with data) that this is only 50% likely. Use this as a basis to start the conversation with them (and the rest of the team!) around breaking work down.

What about when work commences?

If they are happy with the forecast, and we start work on an item, it’s important that we don’t stop there and ensure we continue to manage the expectations of the customer.

Work Item Age is the second metric to use to maintain a continued focus on flow. This is the amount of time (in days) between when an item started and the current time. It applies only to items that are still in progress.

Each dot represents a user story and the age (in days) of that respective PBI/user story so far.

Use this in the Daily Scrum to track the age of an item against your 85th percentile time, as well as comparing to where an item is in your process.

If it is in danger of ‘breaching’ the cycle time, swarm on an item or break it down accordingly. If this can’t be done, work with your stakeholder(s) to collaborate on how to achieve the best outcome.

As a Scrum Master / Agile Delivery Manager / Coach, your role would be to guide the team in understanding the trade-offs of high WIP-age items vs. those closest to done vs. starting something new — no easy task!
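A small sketch of how this might look day to day (dates invented; the 26-day figure reuses the 85th percentile from the example above):

```python
from datetime import date

# Hypothetical in-progress items: (id, start date).
in_progress = [("PBI-101", date(2021, 5, 3)),
               ("PBI-107", date(2021, 5, 24)),
               ("PBI-110", date(2021, 6, 7))]
p85_cycle_time = 26  # days, from the cycle time scatter plot
today = date(2021, 6, 14)

for item_id, started in in_progress:
    age = (today - started).days  # Work Item Age: elapsed days so far
    # Flag anything at, say, 80% of the 85th percentile - time to swarm/slice.
    flag = " <- swarm or slice!" if age >= 0.8 * p85_cycle_time else ""
    print(f"{item_id}: {age} days old{flag}")
```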

Summary — Single Item Forecasting

In terms of a story pointless approach to estimating a single item, try the following:

  1. Prioritise your backlog

  2. Use your Cycle Time scatter plot and 85th percentile

  3. Take the next highest priority item on your backlog

  4. As a team, ask — “Do we think this can be delivered within our 85th percentile?” (Note: you can probe further and ask “can this be delivered within our 50th percentile?” to promote further slicing/refinement)

  5. If yes, then let’s get started/move it to ‘Ready’ (considering your work-in-progress)

  6. If no, then find out why/break it down till it is small enough

  7. Once we start work on items, use Work Item Age as a leading indicator for flow

  8. Manage Work Item Age as part of your Daily Scrum; if it looks like it may exceed the 85th percentile — swarm/slice!

Please note: it’s best to familiarise yourself with what your 85th percentile is first (particularly in comparison to your cadence). 

If it’s 100+ days then you should be focusing initially on reducing that time — this can be done through various means such as pairing, mobbing, story mapping, story slicing, lowering WIP, etc.

But what about for multiple items? And what about…

For multiple item forecasting, be sure to check out part two.

If you have any questions, feel free to add them to the comments below in time for part three, which will cover common questions/observations people make about these new methods…

— — — — — — — — — — — — — — — — — — — — — — — — — —

References:

ThoughtSpot and Blocked Work

The Importance of Being Blocked

Despite our best attempts at creating small, cross-functional and autonomous teams, being “blocked” is unfortunately still a common occurrence with many teams. There can be any number of reasons why work gets blocked — it could be internal to the team (e.g. waiting on Product Owner/Manager feedback, environments down, etc.), within the technology function/from other teams (e.g. platform outage) or even the wider organisation (e.g. waiting for risk, security, legal, etc.).

The original production of The Importance of Being Earnest in 1895…with a blocked lens

Source: Wikipedia

As mentioned in a previous post, flow metrics should be an essential aspect of the day-to-day interactions a high-performing team has. They should also be leveraged as inputs into conversations with stakeholders, whether they're interested in the product(s) the team is building and/or are members of the technology ecosystem in the organisation.

Unfortunately, when it comes to flow, measuring and quantifying blocked work is one of the biggest blind spots teams have. As Dan Vacanti and Prateek Singh mention in their video on Flow Efficiency, most teams don't even have an agreed definition of what 'blocked' means!

Source: https://stefan-willuda.medium.com/being-blocked-it-s-not-what-you-might-think-f8b3ad47e806

Blocked work is probably one of the most valuable data insights at your disposal as a team and organisation. These are the real things that are actually slowing you down, and likely the biggest impediments to flow in your way. As Jonathan Smart would say in Sooner Safer Happier:

Impediments are not in the path. Impediments ARE the path.

So how can we start to make this information visible and quantify the impact of our work being blocked? We use Blocked Work metrics.

Blocked Work Metrics

Here are four recommended metrics to look at when it comes to measuring the impact of work being blocked (a rough sketch of how these might be computed follows the list):

  • Current Blocked Items — items that are currently blocked and how long they have been blocked for.

  • Blocker Frequency — how frequently items become blocked, as well as a trend line showing if this is becoming more/less frequent over time.

  • Mean Time To Unblocked (MTTU) — how long (on average) it takes to unblock items, as well as a trend line to show if this is decreasing over time.

  • Days Lost to Being Blocked — how many days of an item’s total cycle time were spent being blocked (compared to not blocked).
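As a rough sketch of how these might be derived (the data shape here is assumed for illustration, not ThoughtSpot's actual schema), all four can be computed from a simple log of blocked episodes:

```python
from datetime import date
from statistics import mean

# Hypothetical blocked episodes: (item, blocked_on, unblocked_on or None if still blocked).
episodes = [("PBI-12", date(2021, 6, 1), date(2021, 6, 4)),
            ("PBI-15", date(2021, 6, 3), date(2021, 6, 10)),
            ("PBI-12", date(2021, 6, 8), None)]
today = date(2021, 6, 14)

# Current Blocked Items: still blocked, and for how long.
for item, start, end in episodes:
    if end is None:
        print(f"{item} blocked for {(today - start).days} days")

# Blocker Frequency: how often work becomes blocked.
print(f"{len(episodes)} blocked episodes across {len({e[0] for e in episodes})} items")

# Mean Time To Unblocked (MTTU): average days from blocked to unblocked.
resolved = [(end - start).days for _, start, end in episodes if end is not None]
print(f"MTTU: {mean(resolved):.1f} days")

# Days Lost to Being Blocked: total blocked days (resolved episodes only here).
print(f"Days lost to blockers: {sum(resolved)}")
```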

Generating these in ThoughtSpot

As mentioned in previous posts, ThoughtSpot is what we use for generating insights on different aspects of work at Nationwide, one of the key products offered by our Measurement & Insight Accelerator. It produces 'answers' from our data which are then pinned to 'pinboards' for others to view. Our Product Owner Marc Price, supported by Zsolt Berend, showcases this across the organisation, demonstrating how it aids conversations and learning, as opposed to being a tool for senior leaders to brandish as a stick!

The Blocked Work Insights pinboard is there for teams to 'pull' (rather than be forced to use), editing/filtering it to be relevant to their context.

Using Blocked Work Insights

Current Blocked Items

This chart can be used as an input to your Daily Scrum. Discuss as a team how to focus or swarm on unblocking these items over starting new work, particularly those that have been blocked for an extended period and/or may be closer to "Done" in your context.

Blocker Frequency

When using this chart, look at the trend line and the direction it's heading, as well as the frequency of work being blocked. The trend should be downwards, or low and stable. If it's heading in the wrong direction (upwards), use this as an input into Retrospectives, potentially focusing on reducing the dependencies the team faces.

Mean Time to Unblocked (MTTU)

Use this chart to see how long it takes blockers to be resolved, as well as if this time to resolve is improving (trend line heading downward) or getting worse (trend line going upward) over time.

Days Lost to Being Blocked

Use this chart to identify how much time is being lost due to work being blocked, potentially identifying themes around why items are blocked. You could use this as part of a blocker clustering exercise in a retrospective. If you find the blockers are due to external factors, use it with senior leaders who can influence external change, showing them the quantified impact teams are facing due to external bottlenecks.

Summary

To summarise, blocked work data is overlooked by most Agile teams. It shouldn't be, as it will likely give you clear insight into where the bottlenecks to flow are in your system, and where improvements will have the most impact. Teams should leverage metrics such as Current Blocked Items, Blocker Frequency, Mean Time To Unblocked (MTTU) and Days Lost to Being Blocked in order to take a data-driven approach to system-wide improvement.

For any Nationwide folks reading this who are curious about the impact of blocked work in their context, be sure to check out the Blocked Work Insights pinboard on our ThoughtSpot platform.

What metrics do you use for blocked work? Let me know in the replies :)

ThoughtSpot and the four flow metrics

Focusing on flow

As a Ways of Working Enablement Specialist, one of my primary focuses is flow. Flow can be described as the movement of value through your product development system. Some of the most common methods teams will use in their day to day are Scrum, Kanban, or Scrum with Kanban.

Optimising flow in a Scrum context requires defining what flow means. Scrum is founded on empirical process control theory, or empiricism. Key to empirical process control is the frequency of the transparency, inspection, and adaptation cycle — which we can also describe as the Cycle Time through the feedback loop.

Kanban can be defined as a strategy for optimising the flow of value through a process that uses a visual, work-in-progress limited pull system. Combining these two in a Scrum with Kanban context means providing a focus on improving the flow through the feedback loop; optimising transparency and the frequency of inspection and adaptation for both the product and the process.

Quite often, product teams will think that the use of a Kanban board alone is a way to improve flow, after all that is one of its primary focuses as a method. Taking this further, many Scrum teams will also proclaim that “we do Scrum with Kanban” or “we like to use ScrumBan” without understanding what this means if you really do focus on flow in the context of Scrum. However, this often becomes akin to pouring dressing all over your freshly made salad, then claiming to eat healthily!

Images via Idearoom / Adam Luck / Scrum Master Stances

To put it more directly: Scrum using a Kanban board ≠ Scrum with Kanban.

All these methods have a key focus on empiricism and flow — therefore visualisation and measurement of flow metrics is essential, particularly when incorporating these into the relevant events in a Scrum context.

The four flow metrics

There are four basic metrics of flow that teams need to track (a sketch of how to compute them follows the list):

  • Throughput — the number of work items finished per unit of time.

  • Work in Progress (WIP) — the number of work items started but not finished. The team can use the WIP metric to provide transparency about their progress towards reducing their WIP and improving their flow.

  • Cycle Time — the amount of elapsed time between when a work item starts and when a work item finishes.

  • Work Item Age — the amount of time between when a work item started and the current time. This applies only to items that are still in progress.
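As a sketch (assuming a simple list of work item records rather than any particular tool's data model), all four can be computed in a few lines:

```python
from datetime import date

# Hypothetical work items: (id, started, finished or None if still in progress).
items = [("A", date(2021, 6, 1), date(2021, 6, 4)),
         ("B", date(2021, 6, 2), date(2021, 6, 10)),
         ("C", date(2021, 6, 7), None),
         ("D", date(2021, 6, 9), None)]
today = date(2021, 6, 14)

# Throughput: items finished (here, over the whole period).
throughput = sum(1 for _, _, done in items if done)

# WIP: items started but not finished.
wip = sum(1 for _, _, done in items if done is None)

# Cycle Time: elapsed days from start to finish, per finished item.
cycle_times = [(done - start).days for _, start, done in items if done]

# Work Item Age: elapsed days so far, per in-progress item.
ages = [(today - start).days for _, start, done in items if done is None]

print(f"Throughput: {throughput}, WIP: {wip}")
print(f"Cycle times: {cycle_times}, Ages: {ages}")
```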

Generating these in ThoughtSpot

ThoughtSpot is what we use for generating insights on different aspects of work at Nationwide, one of the key products offered to the rest of the organisation by Marc Price and Zsolt Berend from our Measurement & Insight Accelerator. This can be as low-level as individual product teams, or as high-level as aggregations into our different Member Missions. We produce 'answers' from our data which are then pinned to 'pinboards' for others to view.

Our four flow metrics are there as a pinboard for teams to consume, filtering to their details/context and viewing the charts. If they want to, they can then pin these to their own pinboards for sharing with others.

For visualising the data, we use the following (a plotting sketch follows the list):

  • Throughput — a line chart for the number of items finished per unit of time.

  • WIP — a line chart with the number of items in progress on a given date.

  • Cycle Time — a scatter plot where each dot is an item plotted against how long it took (in days) and the completed date. Supported by an 85th percentile line showing how long (in days) items took to complete.

  • Work Item Age — a scatter plot where each dot is an item plotted against its current column on the board and how long it has been there. Supported by the average age of WIP in the system.
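We build these in ThoughtSpot, but the same visuals can be sketched with matplotlib if you want to experiment locally (data invented for illustration):

```python
import random
from datetime import date, timedelta

import matplotlib.pyplot as plt

random.seed(1)

# Hypothetical completed items: completion date vs. cycle time in days.
completed = [date(2021, 5, 1) + timedelta(days=random.randint(0, 40)) for _ in range(60)]
cycle_times = [random.randint(1, 40) for _ in range(60)]

p85 = sorted(cycle_times)[int(len(cycle_times) * 0.85)]  # 85th percentile

plt.scatter(completed, cycle_times)  # each dot = one finished item
plt.axhline(p85, linestyle="--", label=f"85th percentile: {p85} days")
plt.xlabel("Completed date")
plt.ylabel("Cycle time (days)")
plt.legend()
plt.show()
```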

Using these in Scrum Events

Throughput (Sprint Planning, Review & Retrospective) — Teams can use this as part of Sprint Planning in forecasting the number of items for the Sprint Backlog.

It can also surface in Sprint Reviews when it comes to discussing release forecasts or product roadmaps (although I would encourage the use of Monte Carlo simulations in this context — more on this in a later blog), as well as in the Sprint Retrospective, where teams inspect and adapt their processes to find ways to improve throughput (or to validate whether previous experiments have improved it).

Work In Progress (Daily Scrum & Sprint Retrospective) — as the Daily Scrum focuses on what's currently happening in the sprint/with the work, the WIP chart is a good one to look at here (potentially seeing if it's too high).

The chart is also a great input into the Sprint Retrospective, particularly for seeing where WIP is trending — if teams are optimising their WIP then you would expect this to be relatively stable/low; if it's high/highly volatile then you need to "stop starting and start finishing", or find ways to improve your workflow.

Cycle Time (Sprint Planning, Review & Retrospective) — Looking at the 85th/95th percentiles of Cycle Time can be a useful input into deciding what items to take into the Sprint Backlog. Can we deliver this within our 85th percentile time? If not, can we break it down? If we can, then let's add it to the backlog. It also works as an estimation technique, so stakeholders know that when work is started on an item, there is an 85% likelihood it will take n days or less — want it sooner? Ok, well that may only have a 50% likelihood; can we collaborate to break it down into something smaller? Then let's add that to a backlog refinement discussion.

In the Sprint Review it can be used by looking at trends — for example, if your cycle times are highly varied, are there larger constraints in the "system" that we need stakeholders to help with? Finally, it provides a great discussion point for Retrospectives — we can use it to deep dive into outliers to find out what happened and how to improve, see if there is a big difference between our 50th/85th percentiles (and how to reduce this gap), and/or see if the improvements we have implemented as outcomes of previous discussions are having a positive impact on cycle time.

Work Item Age (Sprint Planning & Daily Scrum) — this is a significantly underutilised chart that so many teams could benefit from. If you incorporate this into your Daily Scrums, it will likely lead to many more conversations about getting work done (due to item age) rather than generic updates. Compare work item age to the 85th percentile of your cycle time — is it likely to exceed this time? Is that ok? Should we/can we slice it down further to get some value out there and faster feedback sooner? All very good, flow-based insights this chart can provide.

It may also play a part in Sprint Planning — do you have items left over from the previous sprint? What should we do with those? All good inputs into the planning conversation.

Summary

To summarise, focusing on flow involves more than just using a Kanban board to visualise your work. To really take a flow-based approach and incorporate the foundations of optimising WIP and empiricism, teams should utilise the four key flow metrics of Throughput, WIP, Cycle Time and Work Item Age. If you're using these in the context of Scrum, look to incorporate them appropriately into the different Scrum events.

For those wanting to experiment with these concepts in a safe space, I recommend checking out TWiG — The Work In Progress Game (which now has a handy facilitator and participant guide), and for any Nationwide folks reading this who are curious about flow in their context, be sure to check out the Four Key Flow Metrics pinboard on our ThoughtSpot platform.

Further/recommended reading:

Kanban Guide (Dec 2020 Edition) — KanbanGuides.org

Kanban Guide for Scrum Teams (Jan 2021 Edition) — Scrum.org

Basic Metrics of Flow — Dan Vacanti & Prateek Singh

Four Key Flow Metrics and how to use them in Scrum events — Yuval Yeret

TWiG — The Work In Progress Game

Weeknotes #40

Product Management Training

This week we had a run through from Rachel of our new Product Management training course that she has put together for our budding Product Managers. I really enjoyed going through it as a team (especially using our co-working space in More London) and viewing the actual content itself.

Credits: Jon Greatbatch for photo “This can be for your weeknotes”

What I really liked about the course is that attendees are going to be very 'hands-on' during the training, getting to apply various techniques that PdMs use to a case study, Delete My Data (DMD), that runs throughout. Having an 'incremental' case study that builds through the day is something I've struggled with when putting together material in the past, so I'm glad Rachel has put something like this together. We've earmarked the 28th Jan for the first session we run, with a combination of our own team and those moving into Product Management being the 'guinea pigs'.

2019 Reflections

This week has been a particularly challenging week, with lots of roadblocks in the way of moving forward. A lack of alignment in new teams on future direction, and a lack of communication to the wider function around our move to new ways of working, means it feels like we aren't seeing the progress we should be, or creating a sense of urgency. Whilst I certainly believe in achieving big through small, with change initiatives it can feel like you are moving too slowly, which is the current lull we're in. After a few days feeling quite down I took some time out to reflect on 2019, and what we have achieved, such as:

  • Delivering a combined 49 training courses on Agile, Lean and Azure DevOps

  • Trained a total of 789 PwC staff across three continents

  • Becoming authorised trainers to offer an industry-recognised course

  • Actually building our first proper CI/CD web apps as PoCs

  • Introducing automated security tools and (nearly) setting up ServiceNow change management integration to #TakeAwayTheExcuses for not adopting Agile

  • Hiring our first ever Product Manager (Shout out Rachel)

  • Getting our first ever Agile Delivery Manager seconded over from Consulting (Shout out Stefano)

  • Our team winning a UK IT Award for Making A Difference

  • Agreement from leadership on moving from Project to Product, as part of our adoption of new ways of working

All in all, it's fair to say we've made big strides forward this year; I just hope the momentum continues into 2020. A big thank you from me goes to Jon, Marie, James, Dan, Andy, Rachel and Stefano for not just their hard work, but for being constant sources of inspiration throughout the year.

Xmas Break

Finally, I’ll be taking a break from writing these #Weeknotes till the new year. Even though I’ll be working over the Christmas period, I don’t think there’ll be too much activity to write about! For anyone still reading this far in(!), have a great Christmas and New Year.

Weeknotes #39

Agile not WAgile

This week we've been reviewing a number of our projects that are tagged as being delivered using Agile ways of working within our main delivery portfolio. Whilst we ultimately do want to shift from project to product, we recognise that right now we're still doing a lot of 'project-y' delivery, and that this will never completely go away. So, in parallel, we're trying to at least get people familiar with what Agile delivery is all about, even when delivering from a project perspective.

The catalyst for this was one of our charts, where we look at the work being started and the split between Agile (blue line) vs. Waterfall (orange line).

The aspiration, of course, being that with a strategic goal to be 'agile by default', the chart should indeed look something like it does here, with the orange line only creeping up slightly when needed but people generally looking to adopt Agile as much as they can.

When I saw the chart looking like the above last week I must admit, I got suspicious! I felt that we definitely were not noticing the changes in behaviours, mindset and outcomes that the chart would suggest, which prompted a more thorough review.

The review was not intended to act as the Agile police(!), as we very much want to help people move to new ways of working, but to make sure people had correctly understood what Agile at its core is really about, and whether they are indeed doing that as part of their projects.

The review is still ongoing, but currently it looks like so (changing the waterfall/agile field retrospectively updates the chart):

The main problems observed were things such as a lack of frequent delivery, with project teams still doing one big deployment to production at the end before going 'live' (but lots of deployments to test environments). Projects are maybe using tools such as Azure DevOps and some form of Agile events (maybe daily scrums), but work is still being delivered in phases (Dev / Test / UAT / Live). A common theme was also not getting early feedback and changing direction/priorities based on it (hardly a surprise if you are infrequently getting stuff into production!).

Inspired by the Agile BS detector from the US Department of Defense, I prepared a one-pager to help people quickly understand if their application of Agile to their projects is right, or if they need to rethink their approach:

Here's hoping the blue line goes up against the criteria above, or at least that more people approach us for help in how to get there.

Team Health Check

This week we had our sprint review for the project our grads are working on, helping develop a team health check web app for teams to conduct monthly self-assessments across different areas of team needs and ways of working.

Again, I was blown away by what the team had managed to achieve this sprint. Not only had they managed to go from a very basic, black-and-white version of the app to a fully PwC-branded version, they'd also successfully worked with Dave (aka DevOps Dave) to configure a full CI/CD pipeline for any future changes made. As the PO for the project I'll now be in control of any future releases via the release gate in Azure DevOps, very impressive stuff! Hopefully now we can share it more widely and get teams using it.

Next Week

Next week will be the last weeknotes for a few weeks, whilst we all recharge and eat lots over Christmas. Looking at finalising training for the new year and getting a run-through from Rachel of our new Product Management course!

Weeknotes #38

Authorized Instructors

This week, we had our formal course accreditation session with ICAgile, where we were to review our 2-day ICAgile Fundamentals course, validating if it meets the desired learning objectives as well as the general course structure, with the aim being to sufficiently balance theory, practical application and attendee engagement. I was extremely pleased when we were given the rubber stamp of approval by ICAgile, as well as getting some really useful feedback to make the course even better, in particular to include more modules aligned to the training from the BACK of the room (TBR) technique.

It’s a bit of a major milestone for us as a team, when you consider this time last year most of the training we were doing was just starting, and most of the team running it for the first time. It’s testimony to the experience we’ve gained, and incremental improvements we’ve made based on the feedback we’ve received that four of us are now authorized to offer a certified course from a recognised body in the industry. A new challenge we face in the course delivery is now the organisational impediments faced around booking meeting rooms(!) — but with two sessions in the diary for January and February next year I’m looking forward to some more in depth learning and upskilling for our PwC staff.

Product Management

As I mentioned last week, Rach Fitton has recently joined us as a Product Manager, looking to build that capability across our teams. It's amazing how quickly someone with the right experience and mindset can make an impact, as it already feels like I (and others) are learning a great deal from her. Despite some conversations with colleagues so far where I feel they haven't given her much to work with, she's always given them at least one thing that can inspire them or move them further along on the journey.

A good example is the visual below, which she shared with me and others, covering all the activities and considerations that a Product Manager would typically undertake:

Things like this are great sources of information for people, as it really emphasises for me just how key this role is going to be in our organisation. It’s great for me to have someone far more experienced in the product space than myself to not only validate my thoughts, but also critique any of the work we do, as Rachel gives great, actionable feedback. I’m hoping soon we can start to get “in the work” with more of the teams and start getting some of our people more comfortable with the areas above.

Next Week

Next week we plan to start looking at structuring one of our new services and the respective product teams within that, aiming for a launch in the new year. I’m also looking forward to connecting with those in the PwC Sweden team, who are starting their own journey towards new ways of working. Looking forward to collaborating together on another project to product journey.

Weeknotes #37

Ways of Working

This week we had our second sprint review as part of our Ways of Working (WoW) group. The review went well, with lots of discussion and feedback which, given we aren't producing any "working software", is for me a really good sign. We focused a lot on change engagement this sprint, working on the comms side (producing 'potentially releasable comms') as well as identifying/analysing the pilot areas where we really want teams to start moving towards this approach. A common theme appears to be a lack of a product lens on the services being offered, and a lack of portfolio management to ensure WIP is being managed and work aligns with strategy. If we can start to tackle this then we should have some good social proof for those who may be finding adoption slightly more tricky.

We agreed to limit our pilot to be on four particular areas for now, rather than spreading ourselves too thinly across multiple teams, fingers crossed we can start to have some impact this side of the new year.

New Joiners

I was very pleased this week to finally have Rachel, our new Product Manager, join us. It feels like an age since we interviewed her for the role, and we've been trying our best to hold people back to make sure we aren't veering too much away from the Product Management capability we want her to build. It's great to have someone who is a very experienced practitioner, rather than someone who just relies on the theory. I often find that the war stories, and the times when stuff has not quite worked out, are where the most learning occurs, so it's great to have her here in the team to help us all.

Another positive for me came after walking her through the WoW approach: she not only fed back that it makes sense, but also that it has her excited :) It's always nice to get some validation from a fresh pair of eyes, particularly from someone as experienced as Rachel, and I'm really looking forward to working with and learning from her.

With Rachel joining us as a Product Manager, and Dave who joined us roughly a month ago as a DevOps Engineer, it does feel like we’re turning a corner in the way we’re recruiting as well as the moves towards new ways of working day to day. I’m extremely appreciative to both of them for taking a risk in wanting to be part of something that will be both very challenging but also (hopefully!) very rewarding.

Team Health Check

We’ve made some good progress this week with our Team Health Check App, which will help teams identify different areas of their ways of working which may need improvement. With a SQL DB now populated with previous results, we can actually connect to a source where the data will be automatically updated, as opposed to manually copying/pasting from Google Sheets -> Power BI. The next step is to get it fully working in prod with a nicer front end, release it to some users to actually use, as well as write a short guidance document on how to connect to it.

Well done again to all our grads for taking this on as their first Agile delivery, they’re definitely learning as they go but thankfully taking each challenge/setback as a positive. Fingers crossed for the sprint review Thursday it’s something we can release!

Next Week

Next week we have our ICAgile course accreditation session, hopefully giving us the rubber stamp as accredited trainers to start offering our 2-day ICAgile Fundamentals course. It also means another trip to Manchester for myself, running what I *think* will be my last training session of 2019. Looking forward to delivering the training with Andy from our team for our people in Assurance!

Weeknotes #36

Refreshing Mindsets

This week was the second week of our first sprint working with our graduate intake on our team health check web app. It was great to see in the past week or so that the team, despite not having much of a technical background, had gone away and been able to create a very small app using a mix of Python and an Azure SQL database for the responses. It just goes to show how taking the work to a team and allowing them to work in an environment where they can be creative (rather than prescribing the 'how') can lead to a great outcome. Whilst the app is still not quite in a 'releasable' state, in just a short time it really isn't far away from something a larger group of Agile Delivery Managers and Coaches can use. It's refreshing to not have to take on the battle of convincing hearts and minds, working with a group of people who recognise this is the right way to work and are just happy to get on and deliver. Thanks to all of them for their efforts so far!

Cargo Culting

“Cargo Culting” is a term used when people believe they can achieve benefits by adopting/copying certain behaviours, actions or techniques. They don't consider why the benefits occur; instead they just blindly copy the behaviours to try to get similar results.

In the agile world, this is becoming increasingly commonplace, with the Spotify model being the latest fad for cargo culting in organisations. Organisations hear about how Spotify or companies like ING are scaling Agile ways of working, which sounds great, but in practice is incredibly hard and nowhere near as simple as just redesigning the organisation into squads, tribes, chapters and guilds.

In a training session with some of our client-facing teams this week, I used the above as an example of what cargo culting is like. Experienced practitioners need to be aware that the Spotify model is one tool in the toolbox, with lots of possible paths to organisational agility. Spotify themselves never referred to it as a model, nor do they use it anymore, and ING has moved towards experimenting with LeSS in addition to the Spotify model. Dogma is one of the worst traps you can fall into when moving to new ways of working, particularly when you don't stop and reassess whether this is actually the right way for this context. Alignment on language is important, but should not come at the expense of first finding what works in the environment.

Next Week

Next week I'll be running an Agile Foundations training session, and we (finally!) have Rachel joining our team as a Product Manager. I'm super excited to have her as part of the team, and whilst I'm hopeful we can control the flow of requests her way so she doesn't feel swamped, I'm looking forward to having her join PwC!

Weeknotes #35

Back to Dubai

This week I was out in the Middle East again, running back to back Agile Foundations training sessions for people in our PwC Middle East firm. 

I had lots of fun, and it looked like attendees did too, both with the engagement on the day and the course feedback I received.

One issue with running training sessions in a firm like ours is that a number of large meeting rooms still have that legacy "boardroom" format, which allows for little movement during sessions that require interaction. Last time I was there this wasn't always the case, as one room was in the academy which, as you can tell by the name, was a bit more conducive to collaboration. As well as that, we had 12 people attend on day one, but 14 attendees on day two, which for me is probably two people too many. Whilst it generally works ok in the earlier parts of the day, as the room can break off into two groups, it causes quite a lot of chaos when it comes to the lego4scrum simulation later on, as we really only have enough lego for one group. Combine that with the room layout and you can understand why some people go off and get distracted/talk amongst themselves, but then again maybe that's a challenge for the Scrum Master in the simulation! A learning for me is to limit it to 12 attendees max, with a preference for smaller (8–10) audience sizes.

Retrospectives

I've talked before about my view on retrospectives, and how they can be mistreated by those who act as the 'agile police', using their occurrence to determine whether a team is/is not Agile (i.e. "thou cannot be agile if thou is not running retrospectives"). This week we've had some further contact from our Continuous Improvement Group around the topic and how to encourage more people to conduct them. Given this initiative has been going on for some time, I feel we've done enough around encouragement and providing assistance/coaching to people if needed. We've run mock retrospectives, put together lengthy guidance documents with templates/tools for people to use, and people practise them in the training on multiple occasions, yet still only a small number of people are doing them. Given a key principle we have is invitation over infliction, this highlights that the interest isn't currently there, and that's ok! This is one in a list of many 'invitations' there are for people to start their agile journey — if the invitation is not accepted then ok, let's try a different aspect of Agile.

A more important point for me really is that just because you are having retrospectives, it does not always mean you are continuously improving.

If it’s a moan every 1–4 weeks, that’s not continuous improvement. 

If nothing actionable or measurable comes out of it that is then reviewed at the next retro, then it’s not continuous improvement. 

If it’s held too infrequently, then it’s not continuous improvement.

With Toyota’s Kentucky factory pulling on the andon cord on average 5,000 times a day, this is what continuous improvement is! Worth all of us as practitioners remembering that running a retrospective ≠ Continuous Improvement.

Next Week

Next week we have a review with ICAgile, to gain course accreditation to start offering a 2-day training course with a formal ICAgile Fundamentals certification. It’s been interesting putting the course together and mapping it to official learning outcomes to validate attendees getting the certification. Fingers crossed all goes well and we can run a session before Christmas!

Weeknotes #34

Team Areas

A tell-tale sign for any Agile practitioner is normally a walk of the office floor. If an organisation claims to have Agile teams, a giveaway is usually whether there are team areas with lots of visual radiators showing their ways of working.

With my trip to Manchester this week, I was really pleased to see that one of our teams, Vulcan, had taken to claiming their own area and making the work they do, and the management of it, highly visible.

This is great to see as even with the digital tooling we have, it’s important for teams (within a large organisation) to have a sense of purpose and identity, which I’d argue is impossible to do without something physical/a dedicated area for their work. These are the things that when going through change provide inspiration and encourage you to keep on going, knowing that certainly with some teams, the message is landing.

Product Manager Hat

With our new graduate intake in IT, one of the things various teams were asked to put together was a list of potential projects for them to work on. 

A niggling issue I’ve had is our Team Health Check tool which, taking inspiration from the Spotify Squad Health Check, uses a combination of anonymous Google Form responses that are then visualized in Power BI.

This process though is highly manual, with a Google Apps Script converting the form responses into a BI-tool-friendly format, which is then copied/pasted into a Power BI table. The project for the graduates is therefore a web version, with a database to store responses for automated reporting. I've been volunteered as the Product Manager :D which meant this week even writing some stories and BDD acceptance criteria! Looking forward to seeing how creative they can be, and it's a chance for them to really apply some of the learnings from the recent training they've been through.

Digital Accelerator Feedback

We received feedback from both of the Digital Accelerator sessions we ran recently. Overall, with an average score of 4.43/5, we were one of the highest-rated sessions people attended. We actually received the first batch of feedback before the second session, which was great as it allowed us to make a couple of tweaks to exercises and delete slides that we felt weren't needed. Some highlights in terms of feedback:

Good introduction into agile concept and MVP. Extremely engaging and persuasive games to demonstrate concept! Lots of fun!

All of it was brilliant and also further reading is great to have

This was a great module and something I want to take further. This was the first time I heard of agile and Dan broke down exactly what it was in bite size pieces which was really helpful.

So much fun and energy created through very simple activities. It all made sense — easily relatable slides. Thought Marie did a great job

Really practical and useful to focus on the mindset not the methodology, which I think is more applicable to this role

I’ve heard the term agile a lot in relation to my clients so was really useful to understand this broken down in a really basic and understandable way and with exercises. This has led me to really understand the principles more than through reading I’ve done.

Very interesting topic, great presentation slides, games, engaging presenter

Very engaging and interesting session. Particularly liked the games and the story boarding.

Very engaging and impactful session. The activities really helped drive home the concepts in an accessible way

Best.Session.Ever.

Thanks to Andy, Marie, Stefano, James and Dan for running sessions, as well as Mark M, Paul, Bev, Ashley, Tim, Anna, Mark P, Gurdeep and Brian for their assistance with running the exercises.

Next Week

Next week I’ll be heading out to Dubai to our Middle East office to run a couple training sessions for teams out there. A welcome break from the cold British weather — looking forward to meeting new faces and starting their Agile journey as well as catching up with those who I trained last time!

Weeknotes #33

Right to Left

This week I finished reading Mike Burrows’ latest book Right to Left

Yet again Mike manages to expertly tie together numerous aspects of Agile, Lean and everything else, in a manner that's easy to digest and understandable from a reader/practitioner perspective. One of my favourite sections of the book is the concept of the 'Outside-In' Service Delivery Review. As you can imagine from the title of the book, it takes the perspective of the right (needs, outcomes, etc.) as an input, over the left (roles, events, etc.), and then applies this thinking across the board, for example in the Service Delivery Review meeting. This is really handy for where we are on our own journey, as we emphasise the need to focus on outcomes in grouping and moving to product teams that provide a service to the organisation. One area of this is how you construct the agenda of a service review.

I've slightly tweaked Mike's take on matters, but most of the format/wording is still the same:

With a Service Review coming soon, the hope is that we can start adopting this format as a loose agenda going forward, in particular due to its right-to-left perspective.

Formulating the above has also helped with clarity around the different events and cadences we want teams to be thinking about in choosing their own ways of working. I’ve always been a fan of the kanban cadences and their inputs/outputs into each other:

However I wanted to tweak this again to be a bit simpler, to be relevant to more teams and to align with some of what teams are already doing currently. Sonya Siderova has a nice addition to the above with some overarching themes for each meeting, which again I’ve tailored based on our context:

These will obviously vary depending on what level (team/service) we're focusing on, but my hope is that something like the image above will give teams a clearer steer as to the things they should be thinking about and their intended purpose.

Digital Accelerators

We had another session for our Digital Accelerators this week, which seemed to be very well received by our attendees. We did make a couple of changes for this one based on the feedback from last week, removing 2–3 slides and changing the Bad Breath MVP exercise from 2 groups to 4 groups.

It’s amazing how much a little tweak can make, as it did feel like it flowed a lot easier this time, with plenty opportunity for people to ask questions. 

Last week's session was apparently one of the highest-scoring ones across the whole week (and apparently received the biggest cheer when the recap video showed photos of people playing the ball point game!), with a feedback score of 4.38/5 — hopefully these small changes lead to an even higher score once we get the feedback!

Next Week

Next week is a quieter one, with a trip to Manchester on Tuesday to meet Dave, our new DevOps Engineer, as well as to help coach one of our teams around ‘Product’ thinking on one of our larger IT projects. Looking forward to some different types of challenges there, and to seeing how we can start growing that product management capability.

Weeknotes #32

Little Bets

A few weeks ago, I was chatting to a colleague in our Robotic Process Automation (RPA) team who was telling me about how the team had moved to working in two-week sprints. They mentioned how they were finding it hard to keep momentum and energy up, in particular towards the end of the sprint when it came to getting input to the retro. I asked what day of the week they were starting the sprint to which they replied “Monday”, of course meaning the sprint finished on a Friday. A suggestion I had was actually to move the start of the sprint (keeping the two-week cadence) to be on a Wednesday, as no one really wants to be reviewing or thinking about how to get better (introspection being a notoriously tougher ask anyway) on a Friday. They said they were going to take it away and run it as an experiment and let me know how it went. This week the team had their respective review and retrospective, with the feedback being that the team much preferred this approach, as well as the inputs to the retro being much more meaningful and collaborative.

It reminded me that sometimes as coaches we need to recognise that we can achieve big through small, and that a tiny tweak can make the world of difference to a team. I’ve recently found myself getting very frustrated with bigger changes we want to make and concepts not landing with people, despite repeated attempts at engagement and involvement. Sometimes it’s better to focus on those tiny tweaks/experiments that can make a big difference.

This concept is explained really well in Peter Sims’ “Little Bets”, a great book on innovation in organisations through making a series of little bets, learning critical information from lots of little failures and from small but significant wins.

Here’s to more little bets with teams, rather than big changes!

Digital Accelerators

This week we also ran the first of two sessions introducing Agile to individuals taking part in our Digital Accelerator programme at PwC. The programme is one of the largest investments by the firm, centered on upskilling our people on all things digital, covering everything from cleansing data and blockchain to 3D Printing and drones.

Our slot was 90 minutes long, where we introduced the manifesto and “Agile Mindset” to individuals, including a couple of exercises such as the Ball Point Game and Bad Breath MVP. With 160 people there we had to run 4 concurrent sessions with 40 people in each, which was the smallest group size we were allowed!

I thoroughly enjoyed my session, as it had been a while since I’d done a short, taster session on Agile — good to brush off the cobwebs! The energy in the room was great, with some maybe getting a little too competitive with plastic balls!

Seems like the rest of our team also enjoyed it, and the attendee feedback was very positive. We also had some additional help from colleagues co-facilitating the exercises, which I’m very thankful for as it would have been chaotic without them! Looking forward to hearing how the Digital Accelerators take this back to their day-to-day, and hopefully it generates some future work for us with new teams to work with.

Next week

Next week is another busy one. I’m helping support a proposal around Enterprise Agility for a client, as well as having our first sprint review for our ways of working programme. On top of that we have another Digital Accelerator session to run, so a busy period for our team!

Weeknotes #31

OKRs

We started the week off getting together and formally reviewing our Objectives and Key Results (OKRs) for the last quarter, as well as setting them for this quarter.

Generally, this quarter has gone quite well when you check against our key results, with the only slight blip being around the 1-click deployment and the cycle time at portfolio level. 

A hypothesis I have is that, because people mistakenly felt they had to hold a retrospective before moving something to “done”, we inadvertently caused cycle times to elongate. With us correcting this and re-emphasising the need to focus on the small batch, the goal for this quarter will be to get cycle time as close as possible to our 90-day Service Level Expectation (SLE) at portfolio level. As well as this, we will be putting some tangible measurements around spinning up new, dedicated product teams and building out our lean offering.
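As a concrete illustration of the kind of check this implies (a minimal sketch in Python, using made-up cycle times rather than our real data), measuring against an SLE is really just asking what percentage of completed items finished within the target:

```python
# Minimal sketch: checking completed items against a 90-day SLE.
# The cycle times below are hypothetical, purely for illustration.

SLE_DAYS = 90

# Cycle time in days for recently completed portfolio items.
cycle_times = [45, 120, 88, 95, 60, 150, 30, 89, 110, 75]

within_sle = sum(1 for ct in cycle_times if ct <= SLE_DAYS)
pct = 100 * within_sle / len(cycle_times)

print(f"{within_sle}/{len(cycle_times)} items ({pct:.0f}%) done within {SLE_DAYS} days")
```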

Prioritisation

Prioritisation is something that is essential to success. Whether it be at strategic, portfolio, programme or team level, priorities need to be set so that people have a clear sense of purpose, have a goal to work towards, have focus, and so that ultimately we’re working on the right things. Prioritisation is also a very difficult job; too often we rely on HiPPO (Highest Paid Person’s Opinion), First In, First Out (FIFO) or sheer gut feel. In previous years, I provided teams with this rough, Fibonacci-esque approach to formulating a ‘business value’ score, then dividing this by effort to get an ‘ROI’ number (a minimal sketch of the arithmetic follows the list below):

Business Value Score

10 — Make current users happier

20 — Delight existing users/customers

30 — Upsell opportunity to existing users/customers

50 — Attract new business (users, customers, etc.)

80 — Fulfill a promise to a key user/customer

130 — Aligns with PwC corporate/strategic initiative(s)

210 — Regulatory/Compliance (we will go to jail if we don’t do it)
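To make the arithmetic explicit, here’s a minimal sketch of that ranking. The items and effort figures are entirely hypothetical; only the value bands come from the list above:

```python
# Minimal sketch of the 'business value / effort = ROI' ranking above.
# Items and effort figures are hypothetical; value scores use the bands listed.

backlog = [
    {"item": "Regulatory reporting change", "value": 210, "effort": 13},
    {"item": "Attract new customer segment", "value": 50, "effort": 20},
    {"item": "Small UX improvement", "value": 10, "effort": 2},
]

for entry in backlog:
    entry["roi"] = entry["value"] / entry["effort"]

# Work in descending 'ROI' order.
for entry in sorted(backlog, key=lambda e: e["roi"], reverse=True):
    print(f"{entry['item']}: ROI = {entry['roi']:.1f}")
```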

It’s fairly “meh” I feel, but it was a proposed stop gap between them doing nothing and something that used numbers. Rather bizarrely, the Delight existing users/customers aspect was then changed by people to be User has agreed deliverable date — which always irked me, mainly as I cannot see how this has anything to do with value. Sure, people may have a date in mind, but this is to do with urgency, not value. Unfortunately a date-driven (not data-driven) culture is still very prevalent. Just this week, for example, we had someone explain how an option was ‘high priority’ as it was going to be delivered in the next three months(!).

Increasingly, a simple, lightweight approach to prioritisation I’m gravitating towards, and one that is likely to get easier buy-in, is Qualitative Cost of Delay.

Source: Black Swan Farming — Qualitative Cost of Delay

Cost of Delay allows us to combine value AND urgency, which is something we’re generally not very good at. Ideally, this would be quantified so we would all be talking a common language (i.e. not some weird dark voodoo such as T-shirt sizing, story points or Fibonacci); however, you tend to find people fear numbers. My hope is that this way we can get some of the benefits of Cost of Delay, whilst planting the seed of gradually moving to more of a quantified approach.
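Purely as an illustration of how lightweight this can be (the buckets and wording below are my own rendering, not Black Swan Farming’s exact method), a qualitative assessment really just combines a value rating and an urgency rating into a rough priority:

```python
# Illustrative sketch only: bucketing work by qualitative value and urgency,
# in the spirit of Qualitative Cost of Delay. Categories are assumptions.

PRIORITY = {
    ("high", "high"): "Do first: valuable and delay is expensive",
    ("high", "low"): "Schedule: valuable, but delay is cheap",
    ("low", "high"): "Challenge it: urgent, but is it worth doing?",
    ("low", "low"): "Probably never: low value, low urgency",
}

items = [
    ("Compliance change", "high", "high"),
    ("Dashboard refresh", "low", "low"),
]

for name, value, urgency in items:
    print(f"{name} -> {PRIORITY[(value, urgency)]}")
```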

Next Week

Next week is a big week for our team. We’re running the first of two Agile Introduction sessions as part of the firm’s Digital Accelerator programme. With four sessions running in parallel and roughly 40 attendees in each, we’ll be training 160 people in a 90-minute session. Looking forward to it but also nervous!

Weeknotes #30

CI/CD

We started the week with Jon running a demo for the rest of UKIT on CI/CD, using a basic website he built with Azure DevOps for the board, pipeline, code and automated testing. I really enjoyed the way it was pitched: it went into just enough detail for people who like the technical side, but was also played out in a ‘real’ way, with a team pulling an item from the backlog, deploying a fix and quickly validating that the fix worked, whilst not compromising on quality and/or security. This was a key item on our backlog this quarter, as it ties in nicely to one of our objectives around embedding Agile delivery in our portfolio, specifically the technical excellence needed. We’re hoping this will start to spark curiosity and encourage others to explore this with their own teams — even if not fully going down the CI/CD route, the pursuit of technical excellence is something all teams should be aspiring to achieve.

Aligned Autonomy

This week we’ve been having multiple discussions around the different initiatives going on in our function around new ways of working. Along with moving to an Agile/Product Delivery Model, there are lots of other conversations taking place around things such as changing our funding model, assessing suppliers, future roles, the future of operations and next generation cloud, to name a few. With so many things going on in parallel, it’s little surprise that overlap happens, blockers quickly emerge, and/or shared understanding ceases to exist. Henrik Kniberg has a great talk on the importance of aligned autonomy, precisely the thing we’re missing currently.

Thankfully, those of us involved in these various initiatives have come together to highlight the lack of alignment, with the aim of creating something a bit more cohesive to manage overlap and dependencies. A one-day workshop is planned to build some of this out and agree priorities (note: 15 different ‘priorities’ is not prioritisation!), which should provide a lot more clarity.

An important learning, though, is to make sure any large ways-of-working initiative has aligned autonomy built in.

Next Week

Next week has a break midweek for me, as I have a day off for my birthday 😀 We’ll have a new DevOps Engineer, Dave, starting on Monday; looking forward to having him join our organisation and drive some of those changes around the technical aspects. Dan is running a lunch and learn for the team on LeSS, which will be good for hearing his learnings from the course. We’ve also got an OKR review on Monday, which will be good to assess how we’ve done against our desired outcomes and what we need to focus on next quarter.

Weeknotes #29

Back to training

It was back to training this week, as Stefano and I ran another of our Agile Foundations sessions for people across the firm. It was also Stefano’s first time delivering the course, so a good experience for him. We had an interesting attendee who gave us “a fact for you all, agile is actually a framework” during the introductions, which did make me chuckle and also made things a little awkward 10 minutes later for our Agile is a mindset slide:

Source

We also had a number of our new graduates attend, so it was good to meet them all and not have to deal with quite so many “but in waterfall” or “how would you do this (insert random project) in an Agile way” type questions. They also got pretty close to building everything in our Lego4Scrum simulation, which would have been a first, had it not been for a harsh business stakeholder attending their sprint reviews! There were a couple of times they got lost doing the retro and misunderstood its purpose, and it was hard not to interject and correct them. Feedback was that they would have preferred being steered the right way (as we let it play out), so good learning if that happens again.

BXT Jam

On Wednesday night myself and Andy ran a session in the evening as part of a BXT Jam.

BXT (Business, eXperience, Technology) is all about modern ways of working and helping our clients start and sustain on that journey. It has four guiding principles of:

1. Include diverse perspectives

2. Take a human centred approach

3. Work iteratively and collaboratively

4. Be bold

Our session was mainly centered on understanding Agile at its core, really focusing on mindset, values and principles as opposed to any particular practices. We looked at the manifesto, some empirical research supporting Agile (using DORA) and played the Ball Point Game with attendees. We took a bit of a risk as it was the first time we’d run the game, but given it’s a pretty easy one to facilitate, there thankfully weren’t any issues.

Andy handled some particularly interesting questions well (“how are we supposed to collaborate with the customer without signing a contract?”) and I think the attendees left with a better understanding of what Agile at its core is all about. We’ve already been approached to hold a similar session for our Sales and Marketing team in October, so hoping this can lead to lots more opportunities to collaborate with the BXT team and wider firm. Special thank you has to go to Gurdeep Kang for setting up the opportunity and connecting us with the BXT team.

Next Week

Next week I’m heading to Birmingham to run a couple of workshops with Senior Managers in our Tax teams, helping them understand Agile and start to apply it to a large, uncertain programme of change they are undertaking. We’ll also be holding a sprint review with one of our vendors centered on new ways of working, looking at how some of our pilot teams are getting on and learning from their feedback.

Weeknotes #28

Incremental Changes

This week I observed a great example of approaching work with an Agile mindset. Within our office we have a number of electronic displays which show desk availability on each floor, as well as room names/locations. John Cowx, one of our Experience Design team, showed me an incremental change they had made this week: introducing a single touch screen for one display on one floor, which allows staff to interact, type in a room name and have a route plotted to show them the way. This is a great example of an Agile mindset, as rather than roll this out on every screen across every office in the country, we’re piloting it (small batch) and observing interactions/obtaining feedback, before making changes and/or deciding whether or not to scale it across all locations (inspecting and adapting). It was great not only to see someone so passionate about the product, but to see the Agile mindset evidenced in the work we do.

Retrospectives

This week we were having a conversation around the Continuous Improvement initiative being run in IT, which encourages people in our ‘Project’ model to conduct retrospectives regardless of delivery approach (then taking any wider improvements identified in the retrospectives into the initiative to implement). It’s something that has been running for a while with limited success: generally, the observations are that people aren’t conducting retrospectives, or that the improvements being implemented are low-hanging fruit rather than anything of meaningful impact. The former doesn’t really surprise me, even with our team providing lots of guidance, templates and lunch & learns. For me it’s clear that people don’t want to use retros (which is fine); therefore we need to learn from that feedback and change direction, rather than continuing to push the retrospectives agenda, as otherwise we can end up falling into the trap below:

Imposition of Agile

It’s perfectly reasonable to see that people can continuously improve without doing retrospectives but, more importantly, we should recognise that doing retrospectives != continuously improving. I’ve suggested the group conduct some end user interviews/field research to understand why people are struggling with retros, and what they see as the purpose of the initiative. Possibly that will unearth what the real improvements needed are, rather than relying on retrospectives as the mechanism to capture them.

TL;DR: individuals and interactions over processes and tools

Training

It was back to the training rhythm this week, running a half-day session on Wednesday as part of our Hands On With Azure DevOps course. Given it had been so long since running any type of training, I found myself a little rusty in parts, but generally thought it went well. Dan from our team was shadowing, so we can reduce the single-point dependency of only myself running the session. This was really good from my perspective, as there are certain nuances that can be missed, which he was there to either point out to attendees or to ask me about. Having started running the session months ago, it finally feels like the content flows nicely and that we give a sufficient learning experience without teaching too much unnecessary detail. My favourite point is the challenge on configuring the kanban board, as normally there are a lot of alarmed faces when it’s first presented! However, they all end up doing it well and meeting the criteria, which is of course a good indicator that attendees are learning through doing. There is only one slot available across all sessions in the next four months, so pleased that demand is so high!

Next Week

Next week it’s back to running Agile Foundations courses — with myself and Stefano running a session on Tuesday. I’ll also be working with Andy from our team on presenting at a BXT Jam on Wednesday night, with a 30–45 minute slot introducing people to Agile. A few slides plus the ball point game is our planned approach, hoping it can scale to 40 people!

Weeknotes #27

New Ways of Working

This week we’ve had some really positive discussions around our new delivery model and how we start the transition. We’ve tentatively formulated a ways of working group, bringing together expertise across operations, software engineering, programme & portfolio management and Agile — so it should provide a nice blend in managing the change. The more conversations we have with people, the more consistent the feedback that it “makes sense”, with no real holes in the way of working, though there is a recognition that we are some way away in terms of the skills needed within the current workforce. I’m hopeful we’ll be able to spin up our next pilot product teams and service in the next month.

Brand Building

Now that the holiday season is over, we’re back into full flow on the training front. This week Dan ran another Agile Foundations session on Monday, which was followed up with some fantastic feedback. Speaking on Wednesday, in a separate conversation, to someone who attended, she said it was “the best training I have ever attended”, which is a fantastic endorsement of Dan and the material that we have. Currently we’re forecast to have delivered 41 training sessions in 2019, which will be a great achievement.

The good thing about these sessions going well is that word of mouth spreads, and this week I was approached about us getting involved in other initiatives across the firm. BXT is one of the Digital Services we offer to clients, and we’ve been asked to present at a BXT Jam later this month for roughly 40–50 people, to help familiarise attendees with what Agile at its core is really about, plus showcase what we’re doing to grow and embed that thinking in our culture. We’ve also been asked to help run sessions on Agile as part of the firm’s Digital Accelerator initiative, which is helping over 250 people (open to anyone from Senior Associate to Director, across all LoS and IFS) upskill in all things digital, in order for them to become advocates and help with the firm’s next phase of transformation, building our digital future faster. With over 2000 applicants across the UK, it’s something recognised by the whole firm and highly visible, so I’m hoping it’s more positive press for the Agile Collaboratory — maybe we can get an exec board member playing with Lego!

Change is hard

This week in general I’ve found things to be really tough from a work perspective. I’m finding it increasingly difficult, when involved in calls/meetings, not to get frustrated at some of the things being discussed. This is mainly due to things like bad knowledge, deliberately obstructive behaviour, misinterpretation, or statements being made about things people really just don’t know anything about. Despite trying to help guide people, you can often be shouted down or simply not consulted in conversations. It must have been particularly noticeable this week, as within our own team people asked if my weeknotes were going to be as bad as my mood!

A day in the life…

I think when you’re involved in change like we are, there are going to be setbacks and off days/weeks — you are going to get frustrated. Like all things, it’s important to recognise what caused that, and what preventive/proactive measures you’re going to take in order not to feel that way again. For myself, I’m going to temporarily take myself out of certain meetings, in order to free up time to focus on individual discussions and share/build understanding that way.

Next Week

Next week it’s back to training, with another of our Hands On With Azure DevOps sessions in London. The week concludes with a trip to Manchester to look at identifying our next pilot product teams and areas of focus, a meeting I sense may provide more challenges!

Weeknotes #26

Manchester Travels

This week I spent a couple of days on the newly designed fourth floor of our Manchester office. Despite the rain (seemingly every time I go to Manchester) it was great to see a new, modern work environment with lots of space for visualisation and collaboration amongst teams.

Source: PwC_NorthWest Twitter

One of the main reasons for my visit was to present our proposed new ways of working model and get agreement that this is where we *think* (as it’s emergent) we want to go, as well as to formulate a working group and agree how to approach the change (incremental rather than big bang). It was one of the most positive meetings I’ve been to in recent months, both in the sense of getting feedback/providing clarity to others, and, from a personal standpoint, being able to passionately showcase the work our team have spent the last few months on.

Another reason for my visit was to meet our new Agile Delivery Manager, Stefano Ciarambino, who has moved across from Consulting to do a six-month secondment with our team. Stefano and I have chatted on and off about all things Agile for the past 6/7 months, after he attended one of the Hands On With Azure DevOps courses I ran. I was impressed with his experience and understanding as a practitioner, and with us starting to gain momentum with our new ways of working model, we needed a new face in the team to help *do* and help others do. Having learnt from some past mistakes, I’m quite particular now about who we have in our core team and that they bring something unique to the table. I’m hoping it proves to be an enjoyable six months for him and for us, so that we can make his stay a permanent one — welcome aboard Stefano!

Reflection

This week is a bit of one for anniversaries! This post marks 26 weeks/6 months of writing weeknotes. In reflecting on writing them, I’ve found it to be a great vehicle for checking that the work I’m doing actually has purpose. For example, if I’m getting to the end of the week and struggling to come up with things to write about, then maybe I’ve not been working on the right things! I hope sharing the things I’m learning through our own internal Agile adoption will help others experiencing it in a big organisation, and show those I work with that I’m learning all the time, just as they may be.

This week also marks the four-year anniversary of me joining PwC. When I think about that first team I joined, who were split by developer per application, estimating tasks in story points but stories in hours, not delivering anything working at the end of sprints and working without POs, it’s fair to say I’ve come a long way since then! There have been some memorable high points for me, a highlight being the last twelve months spent building a team of people who I look forward to working with every day. There have also been some low points, for example being maybe too dogmatic at the beginning around Agile, or having to walk away from teams because the negative behaviours management inflicted on them were not going to change.

Here’s hoping both continue for the foreseeable future, and to writing the same again in twelve months’ time!

Next Week

Next week is our sprint review, looking forward to getting feedback on work we’ve done and what we should focus on next. I’ve also got some conversations with people who could help us in our transition towards new ways of working — looking forward to hearing their thoughts and seeing what adjustments we could make.

Weeknotes #25

Team Identity

One of the experiments we’re running is for our newly formed teams to come up with a team name as well as some form of team identity. Typically we’ll suggest teams complete something like a team canvas, coming back to revisit it as the team evolves and matures, to better reflect ways of working.

We’re struggling at the moment with this becoming a ‘thing’ that teams do; it’s often greeted with derision, a blank stare or a roll of the eyes. Teams also aren’t always the most creative with either team names (we’ve suggested a theme of ‘major places — fictional or non-fictional’) or filling out the canvas (i.e. completing them with Agile buzzwords so as to ‘convince’ management). This week in particular I’ve mentioned it a number of times to people, the majority of the time getting one or all of the reactions above.

I’m struggling with why this is, in particular when I think about some of the great teams we know of. A great (albeit begrudging) example for me is Manchester United. When I think about the Ferguson era, the class of 92 and the values and principles he instilled at that club, everyone there knew what it meant to be ‘Manchester United’, in terms of what was expected both on the pitch and off it. You hear ex-players now talk with great passion about what it means to be at that club and to wear that shirt, and how recent teams have lost their way, with those values seemingly going out the window. When coming up with a team identity, this is what we want teams to strive for.

I’m not sure if it’s because work has become so transactional that people can’t fathom something that isn’t solution-focused (i.e. naming yourself after the application you’ve worked on), or whether psychological safety is lacking in their context (i.e. teams don’t feel safe enough to be viewed as having fun or showing vulnerability). One to watch in the coming months, but a topic I’m certainly finding difficult at the moment.

Agile Portfolio Management

This week I’ve also been helping on some client work where we’re looking at helping their PMO transition to/adopt Agile ways of working, and what it means for their area. For a lot of organisations this is probably one of the hardest areas to change — with a PMO focused on milestones, RAG statuses and being on time and on budget, getting them to shift mindset is one of the biggest challenges. We’ve been trying to do some of this internally, with a real shift towards a focus on flow from a metrics perspective, but also agreeing some principles around what the PMO is there for. Even just adopting the metrics more relevant to Agile teams is not enough; it requires a whole shift in thinking around the PMO being an enabler for business agility. Things like focusing on the small batch, focusing on outcomes, and scaling team work through to portfolio and strategy at the relevant flight levels are really how a PMO “transforms”.

We’ll be playing back some examples to them next week, which should hopefully lead to some good conversations and follow-on opportunities for how we can enable their Agile adoption.

Next Week

Next week it’s back to Manchester, as we look to set up a ways of working group to take our adoption to the next level. We’re looking to blend experienced Agilists with PwC’ers, which should give us that balance. As well as that, we have our first Agile Delivery Manager starting; looking forward to welcoming Stefano to the team!