Software Development

Objectively measuring “predictability”

Predictability is often one of the goals organisations seek with their agile teams but, in the complex domain, how do you move past say-do as a measurement for predictability? This post details how our teams at ASOS Tech can objectively look at this whilst accounting for variation and complexity in their work…

Predictability is often the panacea that many delivery teams and organisations seek. To be clear, I believe predictability to be one of a balanced set of themes (along with Value, Flow, Delivery and Culture), that teams and organisations should care about when it comes to agility.

Recently, I was having a conversation around this topic with one of our Lead Software Engineers about his team. He explained how the team he leads had big variation in their weekly Throughput and therefore were not predictable, with the chart looking like so:

Upon first glance, my view was the same. The big drops and spikes suggested too much variation for this to be useful from a forecasting perspective (in spite of the positive sign of an upward trend!) and that this team was not predictable.

The challenge, as practitioners, is how do we validate this perspective? 

Is there a way that we can objectively measure predictability?

What predictability is not

Some of you may be reading and saying that planned vs. actual is how we can/should measure predictability. Often referred to as “say-do ratio”, this once was a fixture in the agile world with the notion of “committed” items/story points for a sprint. Sadly, many still believe this is a measure to look at, when in fact the idea of a committed number of items/points left the Scrum Guide more than 10 years ago. Measuring this has multiple negative impacts on a team, which this fantastic blog from Ez Balci explains.

Planned vs. Actual, Committed vs. Delivered and Say-Do are all measurement relics of the past that we need to move on from. They are appropriate when the work is clear: for example, when my wife gives me a list of things we need from the supermarket, did I get what we needed? Did I do what I said I was going to do? Software development, by contrast, is complex; we are creating something (features, functionality, etc.) from nothing by writing lines of code.

About — Cynefin Framework — The Cynefin Co

Thinking about predictability as something 'black and white', as those approaches encourage, simply does not work. We therefore need a better means of looking at predictability, one that accounts for this complexity.

What we can use instead

Karl Scotland explored similar ideas around predictability in a blog post, specifically looking at the difference between percentiles of cycle time data, for example whether there is a significant difference between your 50th and 85th percentiles. This is something that, as a Coach, I also look at, though more to understand variation than predictability. Karl himself shared in a later talk, after exploring the ideas from the blog further, that this was not a useful measure of predictability.

Which brings us on to how we can do it, using a Process Behaviour Chart (PBC). A PBC is a type of graph that visualises the variation in a process over time. It consists of a running record of data points, a central line that represents the average value, and upper and lower limits (referred to as Upper Natural Process Limit — UNPL and Lower Natural Process Limit — LNPL) that define the boundaries of routine variation. A PBC can help to distinguish between common causes and exceptional causes of variation, and to assess the predictability and stability of a process.
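For those who want to see the mechanics, here is a minimal Python sketch of how a PBC's centre line and limits can be calculated, assuming the common XmR-chart convention (usually attributed to Wheeler, and used throughout Dan's book) of placing the limits at the average ± 2.66 times the average moving range. The data and function names are illustrative only, not taken from any of the tooling mentioned in this post:

```python
def process_behaviour_limits(values):
    """Return (average, UNPL, LNPL) for a series of data points, XmR-style."""
    average = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_moving_range = sum(moving_ranges) / len(moving_ranges)
    unpl = average + 2.66 * avg_moving_range
    # Throughput is zero-bound, so the lower limit can never drop below 0
    lnpl = max(0, average - 2.66 * avg_moving_range)
    return average, unpl, lnpl

weekly_throughput = [3, 7, 2, 9, 5, 6, 1, 8, 4, 7, 6, 9, 3, 10]  # illustrative only
avg, unpl, lnpl = process_behaviour_limits(weekly_throughput)
signals = [v for v in weekly_throughput if v > unpl or v < lnpl]
print(f"Average {avg:.1f}, UNPL {unpl:.1f}, LNPL {lnpl:.1f}, signals: {signals}")
```

With this made-up data the limits come out at roughly 0 and 17, so every weekly value sits inside them and the `signals` list is empty, despite the visible ups and downs.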

I first gained exposure to this chart through watching the Lies, damned lies, and teens who smoke talk from Dan Vacanti, as well as learning more through one of my regular chats with a fellow coach, Matt Milton. Whilst I will try my best not to spoil the talk, Dan looks at Wilt Chamberlain's point scoring over the 1962 season in a PBC and, in particular, whether the 100-point game should be attributed to what some say it was.

Dan Vacanti — Lies, Damned Lies, and Teens Who Smoke

In his new book, Actionable Agile Metrics Volume II: Advanced Topics in Predictability, Dan goes to great lengths to explain the underlying concepts behind variation and how to calculate and visualise PBCs for all four flow metrics: Throughput, Cycle Time, Work In Progress (WIP) and Work Item Age.

With software development being complex, we have to accept that variation is inevitable; it is about understanding how much variation is too much. PBCs can highlight when a team's process is predictable (values within the UNPL and LNPL lines) or unpredictable (values outside them). They therefore can, and should, be used as an objective measurement of predictability.

Applying to our data

If we take our Throughput data shown at the beginning and put it into a PBC, we can now get a sense of whether this team is predictable or not:

We can see that, in fact, this team is predictable. Despite the seemingly large ups and downs in our Throughput, all of those values are within our expected range. It is worth noting that Throughput is zero-bound data, as it is impossible to have a negative Throughput; so, by default, our LNPL is considered to be 0.

Another benefit of these values being predictable is that it also means that we can confidently use this data as input for forecasting delivery of multiple items using Monte Carlo simulation.

What about the other flow metrics?

We can also look at the same chart for our Cycle Time, Work In Progress (WIP) and Work Item Age flow metrics. Generally, 10–20 data points is the sweet spot for the baseline data in a PBC (read the book to understand why), so we can't quite use the same time duration as our Throughput chart (which was aggregated weekly over the last 14 weeks).

If we were to look at the most recent completed items in that same range and their Cycle Time, putting it in a PBC gives us some indication as to what we should be focusing on:

The highlighted item would be the one to look at if you wanted to use cycle time as an improvement point for the team. Something happened with this particular item that made it significantly different from all the others in that period. This is important because, quite often, a team might look at anything above their 85th percentile, which for the same dataset looks like so:

That's potentially four additional data points that a team might spend time looking at which were in fact just routine variation in their process. This is where the PBC helps, separating signal from noise.

With a PBC for Work In Progress (WIP), we can get a better understanding around where our WIP has increased to the point of making us unpredictable:

We would often look to see if we are within our WIP limits when, in fact, there is also the possibility (as shown in this chart) of WIP being too low as well as too high. There may be good reasons for this, for example keeping WIP low as we approach Black Friday (or, as we refer to it internally, Peak) so there is capacity if teams need to work on urgent items.

Work Item Age is where it gets the most interesting. As explained in the book, looking at this in a PBC is tricky. Considering we look at individual items and their status, how can we possibly put this in a chart that allows us to look at predictability? This is where tracking Total Work Item Age (which Dan credits to Prateek Singh) helps us:

Total Work Item Age is simply the sum of the Ages of all items that are in progress for a given time period (most likely per day). For example, let’s say you have 4 items currently in progress. The first item’s Age is 12 days. The second item’s Age is 2 days. The third’s is 6 days, and the fourth’s is 1 day. The Total Age for your process would be 12 + 2 + 6 + 1 = 21 days…using the Total Age metric a team could see how its investment is changing over time and analyse if that investment is getting out of control or not.
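As a small illustration of that calculation (the dates are invented to reproduce the ages from the quote, and the function name is my own):

```python
from datetime import date

def total_work_item_age(in_progress_start_dates, as_of=None):
    """Total Work Item Age: the sum of the ages (in days) of everything in progress."""
    as_of = as_of or date.today()
    return sum((as_of - started).days for started in in_progress_start_dates)

# The worked example from the quote: items aged 12, 2, 6 and 1 days
example_starts = [date(2023, 6, 1), date(2023, 6, 11), date(2023, 6, 7), date(2023, 6, 12)]
print(total_work_item_age(example_starts, as_of=date(2023, 6, 13)))  # 21
```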

Plotting this gives new insight, as a team may well be keeping their WIP within limits, yet the age of those items is a cause for concern:

Interestingly, when discussing this in the ProKanban Slack, Prateek made the claim that he believes Total Work Item Age is the new "one metric to rule them all", stating that if you keep this within limits, the other flow metrics will follow…and I think he might be onto something:

Summary

So, what does this all mean?

Well, for the team mentioned at the very beginning, the Lead Software Engineer can be pleased. Whilst it might not look it at first glance, we can objectively say that, from a Throughput perspective, the team is in fact predictable. When looking at the other flow metrics for this team, we can see that there is still some work to be done to understand what is causing the variation in the process.

As Coaches, we (and our teams) have another tool in the toolbox that allows teams to quickly, and objectively, validate how 'predictable' they are. Moving to something like this provides an objective lens on predictability, rather than relying on the differing opinions of people interpreting data in different ways. To be clear, predictability is of course not the only thing that matters, but it is one of many. If you'd like to try the same for your teams, check out the template in this GitHub repo (shout out to Benjamin Huser-Berta for collaborating on this as well), which works for both Jira and Azure DevOps.

Framework agnostic capacity planning at scale

How can you consistently plan for capacity across teams without mandating a single way of working? In this blog I’ll share how we are tackling this in ASOS Tech…

What do we mean by capacity planning?

Capacity planning is an exercise undertaken by teams to plan how much work they can complete (in terms of a number of items) for a given sprint/iteration/time duration. Sadly, many teams go into incredible detail with this, getting into specifics of the number of hours available per individual per day, the number of days of holiday and, even worse, using story points:

When planning on a broader scale and at longer time horizons, say for a quarter and looking across teams, the Scaled Agile Framework (SAFe) and its Program Increment (PI) planning appears to be the most popular approach. However, with its use of normalised story points, it is quite rightly criticised for abusing the intent of story points (whatever your views on them may be) and, crucially, for offering teams zero flexibility in choosing how they work.

At ASOS, we pride ourselves as being a technology organisation that allows teams autonomy in how they work. As Coaches, we do not mandate a single framework/way of working as we know that enforcing standardisation upon teams reduces learning and experimentation.

The problem that we as Coaches are trying to solve is aligning on a consistent understanding of, and way to calculate, capacity across teams, whilst avoiding mandating a single way of working and staying aligned with agile principles. Our current work on this has led us down the path of taking inspiration from the work of folks like Prateek Singh on scaling flow through right-sizing and probabilistic forecasting.

Scaling Simplified: A Practitioner’s Guide to Scaling Flow eBook : Singh, Prateek: Amazon.co.uk: Books

How we are doing it

Right-sizing

Right-sizing is a practice where we acknowledge and accept that there will be variability in the sizes of work items at all levels. What we focus on is understanding, depending on backlog level, what our "right-size" is. The most common type of right-sizing a team will do is taking the 85th percentile of their cycle time for items at story level and using this as their "right-size", saying 85% of items take n days or less. They then proactively manage items by comparing their Work Item Age to that right-size:

Adapted from

John Coleman — Just use rightsizing, for goodness sake
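To make the story-level version concrete, here is a rough sketch assuming a simple "value at or below which 85% of items fall" percentile and made-up data; neither the calculation nor the item names are taken from the template:

```python
import math

def percentile(values, pct):
    """The value at or below which pct% of the data points fall (simple convention)."""
    ordered = sorted(values)
    return ordered[math.ceil(pct / 100 * len(ordered)) - 1]

# Cycle times (days) of recently completed stories - illustrative only
cycle_times = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 12, 14, 16, 21]
right_size = percentile(cycle_times, 85)
print(f"85% of items took {right_size} days or less")

# Proactively compare the age of in-progress items against that right-size
work_item_ages = {"STORY-101": 3, "STORY-102": 12, "STORY-103": 17}  # days, illustrative
for item, age in work_item_ages.items():
    if age >= right_size:
        print(f"{item} is {age} days old - at or beyond the right-size, worth a conversation")
```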

However, as we are looking at planning for Features (since this is what business stakeholders care about), we need to do something different. Please note, when I say “Features”, what I really mean here is the backlog hierarchy level above User Story/Product Backlog Item. You may call this something different in your context (e.g. Epic), but for simplicity in this blog I will use the term “Feature” throughout.

I first learnt about this method from Prateek's "How many bottles of whiskey will I drink in 4 months?" talk at Lean Agile Global 2021. We visualise the Features completed by the team in the last n weeks, plotting them on a scatter plot by the count of completed child items (at story level) and the date the Features were completed. We then add lines showing the 50th/85th/95th percentiles for size (in terms of child item count), typically taking the 85th percentile as our right-size:

What we also do is visualise the current Features in the backlog and how they compare to the right-size value (out of the box we choose the 85th percentile as the right-size). This way a team can quickly understand, of their current Features, which may be sized correctly (i.e. have a child item count lower than our right-size), which might be ones to watch (i.e. are the same size as our right-size) and which need breaking down (i.e. are bigger than our right-size):

Please note: all Feature names are fictional for the purpose of this blog

Note that the title of the Feature is also a hyperlink for a team to open the item in their respective backlog tool (Azure DevOps or Jira), allowing them to directly take action for any changes they wish to make.
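A sketch of how that Feature-level right-sizing and classification could be computed, with made-up child item counts; the real template queries Jira/Azure DevOps, whereas this only shows the arithmetic:

```python
import math

def percentile(values, pct):
    """The value at or below which pct% of the data points fall (simple convention)."""
    ordered = sorted(values)
    return ordered[math.ceil(pct / 100 * len(ordered)) - 1]

# Child item counts of recently completed Features - illustrative only
completed_feature_sizes = [3, 4, 5, 5, 6, 7, 8, 8, 9, 10, 12, 15]
right_size = percentile(completed_feature_sizes, 85)

# Classify the current backlog against that right-size
backlog = {"Feature A": 4, "Feature B": 12, "Feature C": 18}  # name -> child item count
for name, size in backlog.items():
    if size < right_size:
        status = "right-sized"
    elif size == right_size:
        status = "one to watch"
    else:
        status = "needs breaking down"
    print(f"{name} ({size} child items): {status}")
```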

What will we get?

Now we know what our right-size for Features is, we need to figure out how many backlog items/stories we have capacity for. To do this, we are going to run a Monte Carlo simulation to forecast how many items we will complete. I am not planning to go into detail on this approach and why it is more effective than other methods such as story points, mainly because I (and countless others!) have covered this in detail previously. We will use this to allow a team to forecast, to a percentage likelihood, the number of items they are likely to complete in the forecasted period (in this instance 12 weeks):

It is important to note here that the historical data used as input to the forecast should contain the same mix of conditions as the future you are trying to predict. As well as this, you need to understand the variability in your system and whether it is the right amount or too much (check out Dan Vacanti's latest book if you want more information on this). Given nearly all our teams are stable and dedicated to an application/service/part of the journey, this is generally a fair assumption for us to make.
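Here is a simplified sketch of that kind of "how many items" forecast, resampling historical weekly Throughput. It is my own illustration of the technique rather than the Power BI template's implementation, and the history values are made up:

```python
import random

def monte_carlo_how_many(weekly_throughput, weeks_ahead, trials=10_000):
    """Forecast completed items by resampling historical weekly Throughput."""
    totals = sorted(
        sum(random.choice(weekly_throughput) for _ in range(weeks_ahead))
        for _ in range(trials)
    )

    def at_likelihood(pct):
        # "85% likely" means 85% of trials completed at least this many items,
        # so we read from the lower tail of the distribution of totals.
        return totals[int(len(totals) * (1 - pct / 100))]

    return {pct: at_likelihood(pct) for pct in (50, 70, 85, 95)}

history = [3, 7, 2, 9, 5, 6, 1, 8, 4, 7, 6, 9, 3, 10]  # illustrative weekly Throughput
print(monte_carlo_how_many(history, weeks_ahead=12))
# e.g. {50: 69, 70: 64, 85: 59, 95: 53} - the higher the confidence, the more conservative the number
```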

How many Features?

Now that we have our forecast for how many items, as well as our right-size for our Features, we can calculate how many Features we have capacity for. Assuming we are using our 85th percentile, we would do this via:

  1. Take the 85th percentile value from our 'What will we get?' forecast

  2. Divide this by our 85th percentile 'right-size' value

  3. Round this number down if necessary

  4. The result is the number of 'right-sized' Features we have capacity for
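As a tiny worked example, reusing the illustrative numbers from the sketches above rather than real team data:

```python
import math

forecast_at_85 = 59      # items, from the illustrative 'What will we get?' forecast above
right_size_at_85 = 12    # child items per Feature, from the illustrative right-sizing sketch

feature_capacity = math.floor(forecast_at_85 / right_size_at_85)
print(f"Capacity for {feature_capacity} 'right-sized' Features")  # 4 in this example
```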

The beauty of this approach is that, unlike other methods which just provide a single capacity value with no understanding of the risk involved in that calculation, it allows teams to play around with their risk appetite. Currently this is set to 85%, but what if we were feeling more risky? For example, if we've paid down some tech debt recently that enables us to be more effective in delivery, then maybe 70% is a better choice. Know of new joiners and people leaving your team in the coming weeks and therefore need to be more risk averse? Then maybe we should be more conservative with 95%…

Tracking Feature progress

When using data for planning purposes, it is also important that we are transparent about progress on existing Features and when they are expected to complete. Another part of the template teams can use is running a Monte Carlo simulation on their current Features. We visualise Features in their priority order in the backlog along with their remaining child count, with the team able to select a target date, a percentile likelihood and, crucially, how many Features they work on in parallel. For a full explanation of this I recommend checking out Prateek Singh's Feature Monte Carlo blog which, combined with Troy Magennis' multiple feature forecaster, was the basis for this chart. The Feature Monte Carlo then shows, depending on the percentage confidence chosen, which Features are likely to complete on or before the selected date, which will finish up to one week after the selected date, and which will finish more than one week after the selected date:

Please note: all Feature names are fictional for the purpose of this blog

Again, the team is able to play around with the different parameters here to understand what the determining factor is which, in almost all cases, is limiting work in progress (WIP): stop starting and start finishing!

Please note: all Feature names are fictional for the purpose of this blog
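For the curious, the sketch below shows the shape of such a simulation in heavily simplified form. It is not the template's implementation, and it assumes, purely for illustration, that each week's Throughput is spread evenly across the Features currently in progress, which is only one possible policy:

```python
import random

def feature_monte_carlo(features, weekly_throughput, wip_limit, weeks, trials=5_000):
    """For each Feature (in priority order, with remaining child item counts),
    estimate the likelihood of finishing within `weeks`, working on at most
    `wip_limit` Features in parallel."""
    finished_counts = {name: 0 for name, _ in features}
    for _ in range(trials):
        remaining = dict(features)
        for _ in range(weeks):
            in_progress = [n for n, _ in features if remaining[n] > 0][:wip_limit]
            if not in_progress:
                break
            completed = random.choice(weekly_throughput)
            share, extra = divmod(completed, len(in_progress))
            for i, name in enumerate(in_progress):
                remaining[name] = max(0, remaining[name] - share - (1 if i < extra else 0))
        for name, left in remaining.items():
            if left == 0:
                finished_counts[name] += 1
    return {name: finished_counts[name] / trials for name, _ in features}

features = [("Feature A", 5), ("Feature B", 9), ("Feature C", 14)]  # illustrative only
likelihoods = feature_monte_carlo(features, [3, 7, 2, 9, 5, 6, 1, 8], wip_limit=2, weeks=8)
for name, p in likelihoods.items():
    print(f"{name}: {p:.0%} likely to finish within the period")
```

Lowering `wip_limit` in this toy version tends to pull the top-priority Features' likelihoods up, which mirrors the "stop starting, start finishing" point above.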

Aggregating information across teams

As the ASOS Tech blog has shared previously, we try to gather our teams at a cadence for our own take on quarterly planning (titled Semester Planning). We can use these techniques above to make clear what capacity a team has and, based on their current features, what may continue into another quarter and/or have scope for reprioritisation:

Capacity for 8 ‘right-sized’ features with four features that are being carried over (with their projected completion dates highlighted)

Within our technology organisation we work with a Team > Platform (multiple teams) > Domain (multiple platforms) model. A platform can therefore leverage the same information across multiple teams (in a different section of a Miro board) to present its view of capacity across those teams, as well as leveraging Delivery Plans to show when (in terms of dates) that capacity may be available:

Please note: all Feature names are fictional for the purpose of this blog

Domains are then also able to leverage the same information, rolling this info up one level further for a view across their Platform(s):

Please note: all Feature names are fictional for purpose of this blog

One noticeable addition at this level is the portfolio alignment value. 

This is where we look at what percentage of a Domain's work is linked to our overall Portfolio Epics. These portfolio items ultimately represent the highest priorities for ASOS Tech and in turn directly align to strategic priorities, something I have covered previously in this blog. It is therefore very important that we are aware of, and strike, the right balance between feature delivery, the needs of our platforms and tech debt/hygiene.

These techniques allow us to present a data-informed, aligned view of capacity across our technology organisation whilst still allowing our teams the freedom in choosing their own way of working (aligned to agile principles).

Conclusion

Whilst we do not mandate a single way of working, there are some practices that need to be in place for teams/platforms to leverage this, these being:

  • Teams and platforms regularly review and move work items (User Stories, PBIs, Features, Epics, etc.) to in progress (when started) and done (once complete)

  • Teams regularly monitor the size (in terms of number of child work items) of Features

  • At all levels we always try to break work down to thin, vertical slices

  • Features are ‘owned’ by a single team (i.e. not shared across multiple teams)

All teams and platforms, regardless of Scrum, Kanban, XP, DevOps, blended methods, etc. should be doing these things already if they care about agility in their way of working.

Hopefully this blog has given some insight into how you can do capacity planning, at scale, whilst allowing your teams freedom to choose their own way of working. If you are wondering what tool(s) we use for this, we have a Power BI template that teams can download and connect to their Jira/Azure DevOps project to get this information. If you want, you can give this a go with your team(s) via the GitHub repo here (don't forget to check against the pre-requisites!).

Let me know in the comments if you have any other approaches for capacity planning that allow teams freedom in their way of working…

The time we went to SEAcon

What is the latest thinking being shared at Agile conferences? The Agile Coaches at ASOS took a day out at SEAcon — The Study of Enterprise Agility conference to find out more…

A joint article by myself, Dan and Esha, fellow Agile Coaches here at ASOS Tech.

What is SEAcon?

According to the website, it first started in 2017, with the organisers creating the first-ever enterprise agility related meetup (SEAm) and conference (SEAcon) because they wanted to support organisations' desires to be driven by an authentic purpose. Having attended the 2020 edition of the conference and heard some great talks, we knew it was a good one to go and check out, our first time doing so as a team of coaches at ASOS.

This year, the conference had over 30 speakers across three different tracks — Enterprise, Start-up/Scale-up and Agile Leadership. This meant that there was going to be a good range of both established and new speakers covering a variety of topics. A particular highlight was the Agile Leadership track which had a number of speakers outside of software development but with plenty of applicable learnings, which was refreshing.

The conference itself

Morning

The conference was hosted at the Oval Cricket Ground, which was a fantastic venue!

We started the day by attending Steve Williams' How to win an Olympic Gold Medal: The Agile rowing boat session. This talk was so good it set a high bar for all the sessions to follow that day. What was great about it was not only Steve's passion but the fact that there were so many parallels with what teams and organisations are trying to do when employing agility. One of our favourites was the concept of a 'hot wash up'. Here the rowers would debrief after an exercise on how it went and share feedback amongst each other, all overseen/facilitated by the coach. Not just any coach mind you, this was Jürgen Gröbler, one of the most successful Olympic coaches ever with 13 consecutive Olympic Gold medals.

Interestingly, Steve shared that Jürgen did not have the build of a rower, nor was he ever one himself, which, when you consider the seemingly endless debate around 'should Agile Coaches be technical?', offers an alternative thought in our world. Another great snippet was that in a rowing team of four, work is always split evenly; shared effort and no heroes. There is also no expectation to operate at 100% effort all the time, as you can't be 'sprinting' constantly (looking at you Scrum 👀).

Late morning and lunch

After the morning break, we reconvened and chose to check out Andrew Husak (Emergn) and his session on It's not the people, it's the system. We found this session very closely aligned with what we are looking at currently. It effectively covered how to measure the impact the work you are doing is having on the organisation, with Andrew offering a nice blend of historical references (Drucker, Deming, Goldratt, etc.) and metrics (cycle time, lead time, work in progress, etc.) to share how Emergn focus on three themes of Value, Flow and Quality in the engagements they have.

A key point here was measuring end-to-end flow (idea -> in the hands of users) rather than just the delivery cycle (code started -> deployed to production), albeit it may be that you have to start with the delivery cycle first, gathering the evidence/data on improving it, before going after the whole 'system'.

We ended late morning/early lunch by going to Stephen Wilcock's (Bloom & Wild) session on Pivot to Profitability. Now, it wasn't quite all relevant to us and where we work currently (not the fault of the speaker!), however there were still useful learnings, like mapping business outcomes to teams (rather than Project or Feature teams) and the importance of feature prioritisation by CD3 (Cost of Delay Divided by Duration), although there are sources that argue CD3 may not be the most effective way to prioritise.

We took a break after that, chatting to other attendees and digesting the morning, then before we knew it, it was time for lunch. Conference lunches can be really hit or miss and thankfully a great selection was available and, unlike other conferences, there were multiple stations to get your food from so “hangry-ness” was averted.

Afternoon

After lunch we were all really looking forward to the Leadership styles in the Royal Navy talk; however, once we sat down for the session we realised we were actually in The Alignment Advantage session by Richard Nugent. That would be one small criticism of the conference: it was really difficult to find a printed schedule (none were given out), and it seems schedule changes like this suffered as a result.

Thankfully, this talk was totally worth it. At the minute, we are reviewing and tweaking our Agile Leadership training and this gave us tonnes of new thinking/material we could be leveraging around strategy and the importance of alignment in achieving this. In the talk, Richard posed a series of questions for us all to note down our take on (either within our team or our organisation), such as:

  • What is strategy?

  • What is your key strategic objective?

  • What is your definition of culture?

  • On a scale of 1–6, to what degree does your current culture support the delivery of a strategic objective?

  • What is the distinction between service and experience?

  • What is your x?

What was great was that, rather than leave these ambiguous for us to answer, Richard validated our understanding by giving his view on the answer to each of the above. After the session, we were all very energised about how we could use this approach with the leaders we work with at ASOS and bake it into our leadership training.

After Richard it was time for Jon Smart and Thriving over surviving in a changing world. As big fans of his book Sooner, Safer, Happier, we were all particularly excited about this talk, and we were not disappointed. Jon highlighted that organisational culture is at the core of thriving, however, culture cannot be created, it has to be nurtured.

Leaders need to provide clear behavioural guardrails that are contextual and need to be repeatedly communicated to enable teams and leaders to hold each other to account.

Jon went on to explain the three key factors for success:

  • Leaders go first, become a role model

  • Psychological safety

  • Emergent mindset; the future is unknown so experiment and optimise for that

At ASOS, one of our focuses is on flow, so when Jon spoke about intelligent flow by optimising for outcomes, we were all naturally intrigued. By having high alignment (through OKRs) with minimum viable guardrails, teams are empowered to experiment on how to best achieve their north star. However, something that is always forgotten about is limiting WIP at all levels to create a pull not push system, where organisations stop starting and start finishing.

As we all had to leave early, the last session of the day we went to was Ben Sawyer's Leadership Lessons from bomb disposal operations in Iraq and Afghanistan. Sharing stories, pictures and diagrams from his several tours, Ben provided a detailed explanation of how the British Army use Mission Command to create a shared understanding of goals, while enabling teams to decide and own their tactics and approach.

This decentralised approach to leadership echoed many of the other talks throughout the day and reiterated the importance of trust to reach success. Ben also referred to Steve Williams’ approach of using a ‘hot wash up’ to reflect on recent activities and consider improvements for next time. To round off, it was interesting to hear that despite so many contextual differences, similarities in approaches have led to success in many different industries.

Key learnings

So what were our main takeaways?

One of the standouts has to be how the concepts around agility traverse multiple industries and domains, not limited to software development. It's a great reminder, as Coaches, of the importance of language and how, when it comes to agility, people are likely already applying aspects of this in their day to day but calling it something else, and this is ok. Being able to draw on more anecdotes from different industries that mirror what our teams are doing is great.

Secondly, the importance of focusing on outcomes and measuring impact when it comes to ways of working. As Coaches we’re talking more and more about moving teams away from measuring things like agile compliance (stand-up attendance, contribution to refinement) to the things that truly matter (speed, quality, value delivery).

Finally, there was the recurring theme that being outcome oriented, setting direction and allowing individuals and teams to choose their own path in how they get there is the most effective way to work. Rather than fixating on the how (e.g. methods/frameworks), it's clear that whether you're an agilist or not, alignment on strategy and direction is paramount for success.

For its price, SEAcon is probably the best value for money agile conference you’ll get the chance to attend. Good talks, networking and food make it one to watch out for when tickets go on sale — hopefully we’ll be back there in 2024!

Weeknotes #40

Product Management Training

This week we had a run through from Rachel of our new Product Management training course that she has put together for our budding Product Managers. I really enjoyed going through it as a team (especially using our co-working space in More London) and viewing the actual content itself.

Credits: Jon Greatbatch for photo “This can be for your weeknotes”

What I really liked about the course was that attendees are going to be very 'hands-on' during the training, getting to apply various techniques that PdMs use to a Delete My Data (DMD) case study throughout. Having an 'incremental' case study that builds through the day is something I've struggled with when putting together material in the past, so I'm glad Rachel has put something like this together. We've earmarked the 28th Jan for the first session we run, with the 'guinea pigs' being a combination of our own team and those moving into Product Management.

2019 Reflections

This week has been a particularly challenging one, with lots of roadblocks in the way of moving forward. A lack of alignment in new teams around future direction, and a lack of communication to the wider function around our move to new ways of working, means that it feels like we aren't seeing the progress we should be, or creating a sense of urgency. Whilst it's certainly true that you achieve big through small, with change initiatives it can feel like you are moving too slowly, which is the current lull we're in. After a few days feeling quite down I took some time out to reflect on 2019, and what we have achieved, such as:

  • Delivering a combined 49 training courses on Agile, Lean and Azure DevOps

  • Trained a total of 789 PwC staff across three continents

  • Becoming authorised trainers to offer an industry recognised course

  • Actually building our first, proper CI/CD web apps as PoC’s

  • Introducing automated security tools and (nearly) setting up ServiceNow change management integration to #TakeAwayTheExcuses for not adopting Agile

  • Hiring our first ever Product Manager (Shout out Rachel)

  • Getting our first ever Agile Delivery Manager seconded over from Consulting (Shout out Stefano)

  • Our team winning a UK IT Award for Making A Difference

  • Agreement from leadership on moving from Project to Product, as part of our adoption of new ways of working

All in all, it’s fair to say we’ve made big strides forward this year, I just hope the momentum continues into 2020. A big thank you from me goes to Jon, Marie, James, Dan, Andy, Rachel and Stefano for not just their hard work, but for being constant sources of inspiration throughout the year.

Xmas Break

Finally, I’ll be taking a break from writing these #Weeknotes till the new year. Even though I’ll be working over the Christmas period, I don’t think there’ll be too much activity to write about! For anyone still reading this far in(!), have a great Christmas and New Year.

Weeknotes #39

Agile not WAgile

This week we've been reviewing a number of our projects that are tagged as being delivered using Agile ways of working within our main delivery portfolio. Whilst we ultimately do want to shift from project to product, we recognise that right now we're still doing a lot of 'project-y' style delivery, and that this will never completely go away. So, in parallel, we're trying to at least get people familiar with what Agile delivery is all about, even if they are delivering from a project perspective.

The catalyst for this was really one of our charts, where we look at the work being started and the split between Agile (blue line) and Waterfall (orange line).

The aspiration, of course, is that with a strategic goal to be 'agile by default' the chart should indeed look something like it does here, with the orange line only creeping up slightly when needed and people generally looking to adopt Agile as much as they can.

When I saw the chart looking like the above last week I must admit, I got suspicious! I felt that we definitely were not noticing the changes in behaviours, mindset and outcomes that the chart would suggest, which prompted a more thorough review.

The review was not intended to act as the Agile police(!), as we very much want to help people in moving to new ways of working, but to really make sure people had understood what Agile at its core is really about, and whether they are indeed doing that as part of their projects.

The review is still ongoing, but currently it looks like so (changing the waterfall/agile field retrospectively updates the chart):

The main problems observed were things such as a lack of frequent delivery, with project teams still doing one big deployment to production at the end before going 'live' (but lots of deployments to test environments). Projects may be using tools such as Azure DevOps and some form of Agile events (maybe daily scrums), but work is still being delivered in phases (Dev / Test / UAT / Live). As well as this, a common theme was not getting early feedback and changing direction/priorities based on it (hardly a surprise if you are infrequently getting stuff into production!).

Inspired by the Agile BS detector from the US Department of Defense, I prepared a one-pager to help people quickly understand if their application of Agile to their projects is right, or if they need to rethink their approach:

Here's hoping the blue line goes up, but measured against some of the criteria above, or that at least we get more people approaching us for help with how to get there.

Team Health Check

This week we had our sprint review for the project our grads are working on, helping develop a team health check web app for teams to conduct monthly self assessments as to different areas of team needs and ways of working.

Again, I was blown away by what the team had managed to achieve this sprint. Not only had they managed to go from a very basic, black and white version of the app to a fully PwC branded version, they had also successfully worked with Dave (aka DevOps Dave) to configure a full CI/CD pipeline for any future changes made. As the PO for the project I'll now be in control of any future releases via the release gate in Azure DevOps. Very impressive stuff! Hopefully now we can share it more widely and get teams using it.

Next Week

Next week will be the last weeknotes for a few weeks, whilst we all recharge and eat lots over Christmas. We're looking at finalising training for the new year and getting a run-through from Rachel of our new Product Management course!

Weeknotes #38

Authorized Instructors

This week, we had our formal course accreditation session with ICAgile, where we reviewed our 2-day ICAgile Fundamentals course, validating that it meets the desired learning objectives as well as the general course structure, with the aim being to sufficiently balance theory, practical application and attendee engagement. I was extremely pleased when we were given the rubber stamp of approval by ICAgile, as well as getting some really useful feedback to make the course even better, in particular to include more modules aligned to the Training from the BACK of the Room (TBR) technique.

It's a major milestone for us as a team when you consider that this time last year most of the training we were doing was just starting, with most of the team running it for the first time. It's testimony to the experience we've gained, and the incremental improvements we've made based on the feedback we've received, that four of us are now authorized to offer a certified course from a recognised body in the industry. A new challenge we face in course delivery is the organisational impediment of booking meeting rooms(!), but with two sessions in the diary for January and February next year I'm looking forward to some more in-depth learning and upskilling for our PwC staff.

Product Management

As I mentioned last week, Rach Fitton has recently joined us as a Product Manager, looking to build that capability across our teams. It's amazing how quickly someone with the right experience and mindset can make an impact, as I already feel that I (and others) are learning a great deal from her. Despite some conversations with colleagues so far where I feel they haven't given her much to work with, she's always given them at least one thing that can inspire them or move them further along on the journey.

A good example is the visual below, which she shared with me and others, covering all the activities and considerations that a Product Manager would typically undertake:

Things like this are great sources of information for people, as it really emphasises for me just how key this role is going to be in our organisation. It’s great for me to have someone far more experienced in the product space than myself to not only validate my thoughts, but also critique any of the work we do, as Rachel gives great, actionable feedback. I’m hoping soon we can start to get “in the work” with more of the teams and start getting some of our people more comfortable with the areas above.

Next Week

Next week we plan to start looking at structuring one of our new services and the respective product teams within it, aiming for a launch in the new year. I'm also looking forward to connecting with those in the PwC Sweden team, who are starting their own journey towards new ways of working. Looking forward to collaborating on another project to product journey.

Weeknotes #37

Ways of Working

This week we had our second sprint review as part of our Ways of Working (WoW) group. The review went well, with lots of discussion and feedback which, given we aren't producing any "working software", is for me a really good sign. We focused a lot on change engagement this sprint, working on the comms side (producing 'potentially releasable comms') as well as identifying/analysing the pilot areas where we really want teams to start moving towards this approach. A common theme appears to be the lack of a product lens on the services being offered, and a lack of portfolio management to ensure WIP is being managed and work aligns with strategy. If we can start to tackle this then we should have some good social proof for those who may be finding adoption slightly more tricky.

We agreed to limit our pilot to four particular areas for now, rather than spreading ourselves too thinly across multiple teams. Fingers crossed we can start to have some impact this side of the new year.

New Joiners

I was very pleased this week to finally have Rachel, our new Product Manager, join us. It feels like an age since we interviewed her for the role, and we've been trying our best to hold people back to make sure we aren't veering too far away from the Product Management capability we want her to build. It's great to have someone who is a very experienced practitioner, rather than someone who just relies on the theory. I often find that the war stories, and the times when stuff has not quite worked out, are where the most learning occurs, so it's great to have her here in the team to help us all.

Another positive note for me came after walking her through the WoW approach, as she not only fed back that it made sense but also that it has her excited :) It's always nice to get some validation from a fresh pair of eyes, particularly from someone as experienced as Rachel, and I'm really looking forward to working with and learning from her.

With Rachel joining us as a Product Manager, and Dave who joined us roughly a month ago as a DevOps Engineer, it does feel like we’re turning a corner in the way we’re recruiting as well as the moves towards new ways of working day to day. I’m extremely appreciative to both of them for taking a risk in wanting to be part of something that will be both very challenging but also (hopefully!) very rewarding.

Team Health Check

We’ve made some good progress this week with our Team Health Check App, which will help teams identify different areas of their ways of working which may need improvement. With a SQL DB now populated with previous results, we can actually connect to a source where the data will be automatically updated, as opposed to manually copying/pasting from Google Sheets -> Power BI. The next step is to get it fully working in prod with a nicer front end, release it to some users to actually use, as well as write a short guidance document on how to connect to it.

Well done again to all our grads for taking this on as their first Agile delivery; they're definitely learning as they go but thankfully taking each challenge/setback as a positive. Fingers crossed that at the sprint review on Thursday it's something we can release!

Next Week

Next week we have our ICAgile course accreditation session, hopefully giving us the rubber stamp as accredited trainers to start offering our 2-day ICAgile Fundamentals course. It also means another trip to Manchester for myself, running what I *think* will be my last training session of 2019. Looking forward to delivering the training with Andy from our team for our people in Assurance!

Weeknotes 36

Refreshing Mindsets

This week was the second week of our first sprint working with our graduate intake on our team health check web app. It was great to see in the past week or so that the team, despite not having much of a technical background, had gone away and been able to create a very small app using a mix of Python and an Azure SQL database for the responses. It just goes to show how taking the work to a team and allowing them to work in an environment where they can be creative (rather than prescribing the 'how') can lead to a great outcome. Whilst the app is not quite yet in a 'releasable' state, in just a short time it really isn't too far away from something a larger group of Agile Delivery Managers and Coaches can use. It's refreshing not to have to take on the battle of convincing hearts and minds, working with a group of people who recognise this is the right way to work and are just happy to get on and deliver. Thanks to all of them for their efforts so far!

Cargo Culting

"Cargo culting" is a term used when people believe they can achieve benefits by adopting/copying certain behaviours, actions or techniques. They don't consider why the benefits and/or causes occur; instead they just blindly copy the behaviours to try to get similar results.

In the agile world, this is becoming increasingly commonplace, with the Spotify model being the latest fad for cargo culting in organisations. Organisations are hearing about how Spotify or companies like ING are scaling Agile ways of working, which sounds great, but in practice it is incredibly hard and nowhere near as simple as just redesigning organisations into squads, tribes, chapters and guilds.

In a training session with some of our client-facing teams this week, I used the above as an example of what cargo culting is like. Experienced practitioners need to be aware that the Spotify model is one tool in the toolbox, with there being lots of possible paths to organisational agility. Spotify themselves never referred to it as a model, nor do they use it anymore, and ING have moved towards experimenting with LeSS in addition to the Spotify model. Dogma is one of the worst traps you can fall into when it comes to moving to new ways of working, particularly when you don't stop and reassess whether this actually is the right way for this context. Alignment on language is important, but it should not come at the expense of first finding what works in the environment.

Next Week

Next week I'll be running an Agile Foundations training session, and we (finally!) have Rachel joining our team as a Product Manager. I'm super excited to have her as part of the team, whilst hopeful we can control the flow of requests her way so she does not feel swamped. Looking forward to having her join PwC!

Weeknotes #35

Back to Dubai

This week I was out in the Middle East again, running back to back Agile Foundations training sessions for people in our PwC Middle East firm. 

I had lots of fun, and it looked like attendees did too, both with the engagement on the day and the course feedback I received.

One issue with running training sessions in a firm like ours is that a number of large meeting rooms still have that legacy "boardroom" format, which allows for little movement during sessions that require interaction. Last time I was there this wasn't always the case, as one room was in the academy which, as you can tell by the title, was a bit more conducive to collaboration. As well as that, we had 12 people attend on day one but 14 attendees on day two, which for me is probably two people too many. Whilst it generally works ok in the earlier parts of the day, as the room can break off into two groups, it causes quite a lot of chaos when it comes to the lego4scrum simulation later on, as we really only have enough lego for one group. Combine that with the room layout and you can understand why some people can go off and get distracted/talk amongst themselves, but then again maybe that's a challenge for the Scrum Master in the simulation! A learning for me is to limit it to 12 attendees max, with a preference for smaller (8–10) audience sizes.

Retrospectives

I've talked before about my view on retrospectives, and how they can be mistreated by those who act as the 'agile police' by using their occurrence to determine if a team is/is not Agile (i.e. "thou cannot be agile if thou is not running retrospectives"). This week we've had some further contact from our Continuous Improvement Group around the topic and how to encourage more people to conduct them. Now, given this initiative has been going on for some time, I feel that we've done enough around encouragement and providing assistance/coaching to people if needed. We've run mock retrospectives, put together lengthy guidance documents with templates/tools for people to use, and people practice it in the training on multiple occasions, yet there are still only a small number of people doing them. Given a key principle we have is invitation over infliction, this highlights that the interest isn't currently there, and that's ok! This is one in a list of many 'invitations' there are for people to start their agile journey: if the invitation is not accepted, then ok, let's try a different aspect of Agile.

A more important point for me really is that just because you are having retrospectives, it does not always mean you are continuously improving.

If it’s a moan every 1–4 weeks, that’s not continuous improvement. 

If nothing actionable or measurable comes out of it that is then reviewed at the next retro, then it’s not continuous improvement. 

If it’s held too infrequently, then it’s not continuous improvement.

With Toyota’s Kentucky factory pulling on the andon cord on average 5,000 times a day, this is what continuous improvement is! Worth all of us as practitioners remembering that running a retrospective ≠ Continuous Improvement.

Next Week

Next week we have a review with ICAgile, to gain course accreditation to start offering a 2-day training course with a formal ICAgile Fundamentals certification. It’s been interesting putting the course together and mapping it to official learning outcomes to validate attendees getting the certification. Fingers crossed all goes well and we can run a session before Christmas!

Weeknotes #34

Team Areas

A tell-tale sign for any Agile practitioner is normally a walk of the office floor. If an organisation claims to have Agile teams, a giveaway is usually whether there are team areas with lots of visual radiators around their ways of working.

With my trip to Manchester this week, I was really pleased to see that one of our teams, Vulcan, had taken to claiming their own area and making the work they do, and the management of it, highly visible.

This is great to see as even with the digital tooling we have, it’s important for teams (within a large organisation) to have a sense of purpose and identity, which I’d argue is impossible to do without something physical/a dedicated area for their work. These are the things that when going through change provide inspiration and encourage you to keep on going, knowing that certainly with some teams, the message is landing.

Product Manager Hat

With our new graduate intake in IT, one of the things various teams were asked to put together was a list of potential projects for them to work on. 

A niggling issue I’ve had is our Team Health Check tool which, taking inspiration from the Spotify Squad Health Check, uses a combination of anonymous Google Form responses that are then visualized in Power BI.

This process, though, is highly manual, with a Google Apps Script converting the form responses into a BI-tool-friendly format that is then copied/pasted into a Power BI table. The project for the graduates is therefore about a web version, with a database to store responses for automated reporting. I've been volunteered as the Product Manager :D which this week even meant writing some stories and BDD acceptance criteria! Looking forward to seeing how creative they can be, and it's a chance for them to really apply some of the learnings from the recent training they've been through.

Digital Accelerator Feedback

We received feedback from both of the Digital Accelerator sessions we ran recently. Overall, with an average score of 4.43/5, we were one of the highest rated sessions people attended. We actually received the first batch of feedback before the second session, which was great as it allowed us to make a couple of tweaks to exercises and delete slides that we felt perhaps weren't needed. Some highlights in terms of feedback:

Good introduction into agile concept and MVP. Extremely engaging and persuasive games to demonstrate concept! Lots of fun!

All of it was brilliant and also further reading is great to have

This was a great module and something I want to take further. This was the first time I heard of agile and Dan broke down exactly what it was in bite size pieces which was really helpful.

So much fun and energy created through very simple activities. It all made sense — easily relatable slides. Thought Marie did a great job

Really practical and useful to focus on the mindset not the methodology, which I think is more applicable to this role

I’ve heard the term agile a lot in relation to my clients so was really useful to understand this broken down in a really basic and understandable way and with exercises. This has led me to really understand the principles more than through reading I’ve done.

Very interesting topic, great presentation slides, games, engaging presenter

Very engaging and interesting session. Particularly liked the games and the story boarding.

Very engaging and impactful session. The activities really helped drive home the concepts in an accessible way

Best.Session.Ever.

Thanks to Andy, Marie, Stefano, James and Dan for running sessions, as well as Mark M, Paul, Bev, Ashley, Tim, Anna, Mark P, Gurdeep and Brian for their assistance with running the exercises.

Next Week

Next week I’ll be heading out to Dubai to our Middle East office to run a couple training sessions for teams out there. A welcome break from the cold British weather — looking forward to meeting new faces and starting their Agile journey as well as catching up with those who I trained last time!

Weeknotes #33

Right to Left

This week I finished reading Mike Burrows' latest book, Right to Left.

Yet again Mike manages to expertly tie together numerous aspects of Agile, Lean and everything else, in a manner that's easy to digest and understandable from a reader/practitioner perspective. One of my favourite sections of the book is the concept of the 'Outside-In' Service Delivery Review. As you can imagine from the title of the book, it takes the perspective of the right (needs, outcomes, etc.) as an input, over the left (roles, events, etc.), and then applies this thinking across the board, for example in the Service Delivery Review meeting. This is really handy for where we are on our own journey, as we emphasise the need to focus on outcomes in grouping and moving to product teams that provide a service to the organisation. One area of this is how you construct the agenda of a service review.

I've slightly tweaked Mike's take on matters, but most of the format/wording is still the same:

With a Service Review coming soon, the hope is that we can start adopting this format as a loose agenda going forward, in particular due to its right to left perspective.

Formulating the above has also helped with clarity around the different events and cadences we want teams to be thinking about in choosing their own ways of working. I’ve always been a fan of the kanban cadences and their inputs/outputs into each other:

However, I wanted to tweak this again to be a bit simpler, to be relevant to more teams and to align with some of what teams are already doing. Sonya Siderova has a nice addition to the above with some overarching themes for each meeting, which again I've tailored to our context:

These will obviously vary depending on what level (team/service) we're focusing on, but my hope is that something like the image above will give teams a clearer steer on the things they should be thinking about and the intended purpose of each.

Digital Accelerators

We had another session for our Digital Accelerators this week, which seemed to be very well received by our attendees. We did make a couple of changes for this one based on the feedback from last week, removing 2–3 slides and changing the Bad Breath MVP exercise from two groups to four.

It's amazing how much difference a little tweak can make, as it did feel like it flowed a lot more easily this time, with plenty of opportunity for people to ask questions.

Last week's session was apparently one of the highest scoring ones across the whole week (and apparently received the biggest cheer when the recap video showed photos of people playing the ball point game!), with a feedback score of 4.38/5 — hopefully these small changes lead to an even higher score once we get the feedback!

Next Week

Next week is a quieter one, with a trip to Manchester on Tuesday to meet Dave, our new DevOps Engineer, as well as help coach one of our teams around ‘Product’ thinking with one of our larger IT projects at the minute. Looking forward to some different types of challenges there, and how we can start growing that product management capability.

Weeknotes #32

Little Bets

A few weeks ago, I was chatting to a colleague in our Robotic Process Automation (RPA) team who was telling me about how the team had moved to working in two-week sprints. They mentioned how they were finding it hard to keep momentum and energy up, in particular towards the end of the sprint when it came to getting input to the retro. I asked what day of the week they were starting the sprint to which they replied “Monday”, of course meaning the sprint finished on a Friday. A suggestion I had was actually to move the start of the sprint (keeping the two-week cadence) to be on a Wednesday, as no one really wants to be reviewing or thinking about how to get better (introspection being a notoriously tougher ask anyway) on a Friday. They said they were going to take it away and run it as an experiment and let me know how it went. This week the team had their respective review and retrospective, with the feedback being that the team much preferred this approach, as well as the inputs to the retro being much more meaningful and collaborative.

It reminded me that, as coaches, sometimes we can achieve big through small, and that a tiny tweak can make the world of difference to a team. I’ve recently found myself getting very frustrated with bigger changes we want to make, and with concepts not landing with people, despite repeated attempts at engagement and involvement. Sometimes it’s better to focus on those tiny tweaks/experiments that can make a big difference.

This concept is explained really well in Peter Sims’ “Little Bets”, a great book on innovation in organisations through making a series of little bets, learning critical information from lots of little failures and from small but significant wins.

Here’s to more little bets with teams, rather than big changes!

Digital Accelerators

This week we also ran the first of two sessions introducing Agile to individuals taking part in our Digital Accelerator programme at PwC. The programme is one of the largest investments by the firm, centered on upskilling our people on all things digital, covering everything from cleansing data and blockchain to 3D Printing and drones.

Our slot was 90 minutes long, where we introduced the manifesto and “Agile Mindset” to individuals, including a couple of exercises such as the Ball Point Game and Bad Breath MVP. With 160 people there we had to run 4 concurrent sessions with 40 people in each, which was the smallest group size we were allowed!

I thoroughly enjoyed my session, as it had been a while since I’d done a short, taster session on Agile — good to brush off the cobwebs! The energy in the room was great, with some maybe getting a little too competitive with plastic balls!

Seems like the rest of our team also enjoyed it, and the attendee feedback was very positive. We also had some additional help from colleagues co-facilitating the exercises, which I’m very thankful for as it would have been chaotic without their help! Looking forward to hearing how the Digital Accelerators take this back to their day-to-day, and hopefully it will generate some future work for us with new teams.

Next week

Next week is another busy one. I’m helping support a proposal around Enterprise Agility for a client, as well as having our first sprint review for our ways of working programme. On top of that we have another Digital Accelerator session to run, so a busy period for our team!

Weeknotes #31

OKRs

We started the week off getting together and formally reviewing our Objectives and Key Results (OKRs) for the last quarter, as well as setting them for this quarter.

Generally, this quarter has gone quite well when you check against our key results, with the only slight blip being around the 1-click deployment and the cycle time at portfolio level. 

A hypothesis I have is that, because of the misunderstanding where people felt they had to hold a retrospective before moving something to “done”, we have inadvertently caused cycle times to elongate. With us correcting this and re-emphasising the need to focus on the small batch, the goal for this quarter will be to get cycle times as close as possible to our 90-day Service Level Expectation (SLE) at portfolio level. As well as this, we will be putting some tangible measurements around spinning up new, dedicated product teams and building out our lean offering.

Prioritisation

Prioritisation is something that is essential to success. Whether it be at strategic, portfolio, programme or team level, priorities need to be set so that people have a clear sense of purpose, have a goal to work towards, have focus and so that ultimately we’re working on the right things. Prioritisation is also a very difficult job; too often we rely on HiPPO (Highest Paid Person's Opinion), First In, First Out (FIFO) or just sheer gut feel. In previous years, I provided teams with this rough, fibonacci-esque approach to formulating a ‘business value’ score, then dividing this by effort to get an ‘ROI’ number:

Business Value Score

10 — Make current users happier

20 — Delight existing users/customers

30 — Upsell opportunity to existing users/customers

50 — Attract new business (users, customers, etc.)

80 — Fulfill a promise to a key user/customer

130 — Aligns with PwC corporate/strategic initiative(s)

210 — Regulatory/Compliance (we will go to jail if we don’t do it)

It’s fairly “meh” I feel, but it was a proposed stop gap between them doing nothing and something that used numbers. Rather bizarrely, the Delight existing users/customers aspect was then changed by people to be User has agreed deliverable date, which always irked me, mainly as I cannot see how this has anything to do with value. Sure, people may have a date in mind, but this is to do with urgency, not value. Unfortunately a date-driven (not data-driven) culture is still very prevalent. Just this week, for example, we had someone explain how an option was ‘high priority’ as it was going to be delivered in the next three months(!).
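
For what it’s worth, here’s a minimal Python sketch of that value-divided-by-effort calculation. The value buckets mirror the list above; the backlog items and effort numbers are entirely made up for illustration.

```python
# Illustrative only: the value buckets mirror the scoring list above; the
# backlog items and effort figures are invented for this example.
BUSINESS_VALUE = {
    "happier_users": 10,
    "delight_users": 20,
    "upsell": 30,
    "new_business": 50,
    "key_promise": 80,
    "strategic_initiative": 130,
    "regulatory": 210,
}

backlog = [
    {"name": "Feature A", "value_bucket": "new_business", "effort": 13},
    {"name": "Feature B", "value_bucket": "regulatory", "effort": 40},
    {"name": "Feature C", "value_bucket": "happier_users", "effort": 3},
]

# 'ROI' = business value score divided by effort
for item in backlog:
    item["roi"] = BUSINESS_VALUE[item["value_bucket"]] / item["effort"]

# Highest 'ROI' first - the rough prioritisation order this approach gives you
for item in sorted(backlog, key=lambda i: i["roi"], reverse=True):
    print(f"{item['name']}: ROI = {item['roi']:.2f}")
```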

Increasingly, the simple, lightweight approach to prioritisation I’m gravitating towards, and one that is likely to get easier buy-in, is Qualitative Cost of Delay.

Source: Black Swan Farming — Qualitative Cost of Delay

Cost of Delay allows us to combine value AND urgency, which is something we’re all not very good at. Ideally, this would be quantified so we would all be talking a common language (i.e. not some weird dark voodoo such as T-shirt sizing, story points or fibonacci); however, you tend to find people fear numbers. My hope is that this way we can get some of the benefits of cost of delay, whilst planting the seed of gradually moving to more of a quantified approach.
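
As a rough illustration of what combining value and urgency (and then factoring in duration, in the spirit of CD3, Cost of Delay Divided by Duration) might look like, here’s a small Python sketch. The value/urgency categories, weightings and options are placeholders I’ve invented for the example, not the Black Swan Farming ones.

```python
# Illustrative sketch: qualitative value and urgency buckets are scored,
# combined into a relative cost of delay, then divided by duration (CD3).
# The category names, weightings and options are made up for this example.
VALUE_SCORES = {"low": 1, "medium": 3, "high": 9}
URGENCY_SCORES = {"whenever": 1, "soon": 3, "asap": 9}

def relative_cd3(value: str, urgency: str, duration_weeks: float) -> float:
    """Relative cost of delay divided by duration (bigger = do sooner)."""
    cost_of_delay = VALUE_SCORES[value] * URGENCY_SCORES[urgency]
    return cost_of_delay / duration_weeks

options = [
    ("Option 1", "high", "asap", 6),
    ("Option 2", "high", "whenever", 2),
    ("Option 3", "medium", "soon", 1),
]

# Rank the options: highest relative CD3 first
for name, value, urgency, weeks in sorted(
    options, key=lambda o: relative_cd3(o[1], o[2], o[3]), reverse=True
):
    print(f"{name}: CD3 = {relative_cd3(value, urgency, weeks):.1f}")
```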

Next Week

Next week is a big week for our team. We’re running the first of two Agile Introduction sessions as part of the firm’s Digital Accelerator programme. With four sessions running in parallel with roughly 40 attendees in each, we’ll be training 160 people in a 90-minute session. Looking forward to it but also nervous!

Weeknotes #30

CI/CD

We started the week with Jon running a demo for the rest of UKIT on CI/CD, with a basic website he built using Azure DevOps for the board, pipeline, code and automated testing. I really enjoyed the way it was pitched, as it went into just enough detail for people who like the technical side, but was also played out in a ‘real’ way: a team pulling an item from the backlog, deploying a fix and being able to quickly validate that the fix worked, whilst not compromising on quality and/or security. This was a key item for our backlog this quarter, as it ties in nicely to one of our objectives around embedding Agile delivery in our portfolio, specifically around the technical excellence needed. We’re hoping this should start to spark curiosity and encourage others to start exploring this with their own teams — even if not fully going down the CI/CD route, the pursuit of technical excellence is something all teams should be aspiring to achieve.

Aligned Autonomy

This week we’ve been having multiple discussions around the different initiatives that are going on in our function around new ways of working. Along with moving to an Agile/Product Delivery Model, there are lots of other conversations taking place around things such as changing our funding model, assessing suppliers, future roles, future of operations and the next generation cloud, to name a few. With so many things going on in parallel, it’s little surprise that overlap happens, blockers quickly emerge, and/or shared understanding ceases to exist. Henrik Kniberg has a great talk on the importance of aligned autonomy, precisely the thing that we’re missing currently.

Thankfully, those of us involved in these various initiatives have come together to highlight the lack of alignment, with the aim of creating something a bit more cohesive to manage overlap and dependencies. A one-day workshop is planned to build some of this out and agree priorities (note: 15 different ‘priorities’ is not prioritisation!), which should provide a lot more clarity.

An important learning, though, has to be around aligned autonomy: making sure any sort of large ways of working initiative has it.

Next Week

Next week has a break midweek for me, as I have a day off for my birthday 😀 We’ll have a new DevOps Engineer, Dave, starting on Monday; I’m looking forward to having him join our organisation and drive some of those changes around the technical aspects. Dan is running a lunch and learn for the team on LeSS, where it will be good to hear his learnings from the course. We’ve also got an OKR review on Monday, which will be good for assessing how we’ve done against our desired outcomes and what we need to focus on for next quarter.

Weeknotes #29

Back to training

It was back to training this week, as myself and Stefano ran another of our Agile Foundations sessions for people across the firm. It was also Stefano’s first time delivering the course, so a good experience for him. We had an interesting attendee who gave us “a fact for you all, agile is actually a framework” in the introduction which did make me chuckle and also made things a little awkward 10 minutes later for our Agile is a mindset slide:


We also had a number of our new graduates attend, which was good, as I got to meet them all and not have to deal with quite so many “but in waterfall” or “how would you do this (insert random project) in an Agile way” type questions. They also got pretty close to building everything in our Lego4Scrum simulation, which would have been a first, had it not been for a harsh business stakeholder attending their sprint reviews! There were a couple of times they got lost when doing the retro, misunderstanding its purpose, which was hard not to interject on and correct. Feedback was they would have preferred being steered the right way (as we let it play out), so a good learning if that happens again.

BXT Jam

On Wednesday evening, myself and Andy ran a session as part of a BXT Jam.

BXT (Business, eXperience, Technology) is all about modern ways of working and helping our clients start and sustain on that journey. It has four guiding principles of:

1. Include diverse perspectives

2. Take a human centred approach

3. Work iteratively and collaboratively

4. Be bold

Our session was mainly centered on understanding Agile at its core, really focusing on mindset, values and principles as opposed to any particular practices. We looked at the manifesto, some empirical research supporting Agile (using DORA) and played the Ball Point Game with attendees. We took a little bit of a risk as it was the first time we’d run the game, but given it’s a pretty easy one to facilitate there thankfully weren’t any issues.

Andy handled some particularly interesting questions well (“how are we supposed to collaborate with the customer without signing a contract?”) and I think the attendees left with a better understanding of what Agile at its core is all about. We’ve already been approached to hold a similar session for our Sales and Marketing team in October, so hoping this can lead to lots more opportunities to collaborate with the BXT team and wider firm. Special thank you has to go to Gurdeep Kang for setting up the opportunity and connecting us with the BXT team.

Next Week

Next week I’m heading to Birmingham to run a couple workshops with Senior Managers in our Tax teams, helping them understand Agile and start to apply it to a large, uncertain programme of change they are undertaking. We’ll also be holding a sprint review with one of our vendors centered on new ways of working, looking at how some of our pilot teams are getting on and learning from their feedback.

Weeknotes #28

Incremental Changes

This week I observed a great example of approaching work with an Agile mindset. Within our office we have a number of electronic displays which show desk availability on each floor, as well as room names/locations. John Cowx, one of our Experience Design team, showed me an incremental change they had made this week, introducing a single touch screen for one display on one floor which would allow staff to interact, type in a room name and then have a route plotted to the room to show them the way. This is a great example of an Agile mindset applied to work: rather than rolling this out to every single screen across every single office across the country, here we’re piloting it (small batch), observing the interactions and obtaining feedback, before making changes and/or deciding whether or not to scale it across all locations (inspecting and adapting). It was great not only to see someone so passionate about the product, but to see an example of the Agile mindset being evidenced in the work we do.

Retrospectives

This week we were having a conversation around the Continuous Improvement initiative being run in IT and encouraging people in our ‘Project’ model to conduct retrospectives, regardless of delivery approach (then taking any wider improvements identified in the retrospectives into the initiative to implement). It’s something that has been running for a while with limited success as, generally, the observations are that people aren’t conducting retrospectives, or the improvements being implemented are low-hanging fruit rather than anything of meaningful impact. The former doesn’t really surprise me, even with our team providing lots of guidance, templates and lunch & learns. For me it’s clear that people don’t want to use retros (which is fine); therefore we need to learn from that feedback and change direction, rather than continuing to push the retrospectives agenda, as otherwise we can end up falling into the trap below:

Imposition of Agile

It’s perfectly reasonable to see that people can continuously improve without doing retrospectives but, more importantly, we should recognise that doing retrospectives != continuously improving. I’ve suggested the group conduct some end user interviews/field research to understand why people are struggling with retros, and also what they see the purpose of the initiative as. Possibly that will unearth what the real improvements needed are, rather than relying on the retrospectives as the mechanism to capture them.

TL;DR: individuals and interactions over processes and tools

Training

It was back to the training rhythm this week, running a half day session on Wednesday as part of our Hands On With Azure DevOps course. Given it had been so long since running any type of training, I found myself a little bit rusty in parts, but generally thought it went well. Dan from our team was shadowing, so we can reduce the single point dependency in the team of only myself running the session. This was really good from my perspective as there are certain nuances that can be missed, which he was there to either point out to attendees or to ask me about. Having started doing the session months ago, it finally feels like the content flows nicely and that we give a sufficient learning experience without teaching too much unnecessary detail. My favourite point is the challenge on configuring the kanban board, as normally there are a lot of alarmed faces when it’s first presented! However they all end up doing it well and meeting the criteria, which is of course a good indicator that attendees are learning through doing. There is only one slot available across all sessions in the next four months, so pleased that demand is so high!

Next Week

Next week it’s back to running Agile Foundations courses — with myself and Stefano running a session on Tuesday. I’ll also be working with Andy from our team on presenting at a BXT Jam on Wednesday night, with a 30–45 minute slot introducing people to Agile. A few slides plus the ball point game is our planned approach, hoping it can scale to 40 people!

Weeknotes #27

New Ways of Working

This week we’ve had some really positive discussions around our new delivery model and how we start the transition. We’ve tentatively formulated a ways of working group, bringing expertise across operations, software engineering, programme & portfolio management and Agile, so it should provide a nice blend in managing the change. The more conversations we have with people, the more consistent the feedback is that it “makes sense”, with no real holes in the way of working, however there is a recognition that we are some way away in terms of the skills needed within the current workforce. I’m hopeful we’ll be able to spin up our next pilot product teams and service in the next month.

Brand Building

Now that the holiday season is over, we’re back into full flow on the training front. This week Dan ran another Agile Foundations session on Monday, which was followed up with some fantastic feedback. Speaking on Wednesday, in a separate conversation, to someone who attended, she said it was “the best training I have ever attended”, which is a fantastic endorsement of Dan and the material that we have. Currently we’re forecast to have delivered 41 training sessions in 2019, which will be a great achievement.

The good thing about these sessions going well is that word of mouth spreads, and this week I was approached about us getting involved in other initiatives across the firm. BXT is one of the Digital Services we offer to clients, and we’ve been requested to present at a BXT Jam later this month for roughly 40–50 people, to help familiarise attendees with what Agile at its core is really about, plus showcase what we’re doing to grow and embed that thinking in our culture. We’ve also been asked to help run sessions on Agile as part of the firm’s Digital Accelerator initiative, which is helping over 250 people (open to anyone from Senior Associate to Director, across all LoS and IFS) upskill in all things digital, in order for them to become advocates and support the firm’s next phase of transformation, helping us to build our digital future faster. With over 2000 applicants across the UK it’s something recognised by the whole firm and highly visible, so I’m hoping it’s more positive press for the Agile Collaboratory — maybe we can get an exec board member playing with Lego!

Change is hard

This week in general I’ve found things to be really tough from a work perspective. I’m finding it increasingly difficult when involved in calls/meetings to not get frustrated at some of the things that are being discussed. This is mainly due to things like poor understanding, deliberately obstructive behaviour, misinterpretation or statements being made about things people really just don’t know anything about. Despite trying to help guide people you can often be shouted down or simply not consulted in conversations. It must have been noticeable in particular this week, as within our own team people asked if my weeknotes were going to be as bad as my mood!

A day in the life…

I think when you’re involved in change like we are, there are going to be setbacks and off days/weeks — you are going to get frustrated. Like all things it’s important to recognise what caused that, and what preventive/proactive measures you’re going to take in order to not feel that way again. For myself, I’m going to temporarily take myself out of certain meetings, in order to free up time to focus on individual discussions and share/build understanding that way.

Next Week

Next week it’s back to training, with another of our Hands On With Azure DevOps sessions in London. The week concludes with a trip to Manchester to look at identifying our next pilot product teams and areas of focus, a meeting I sense may provide more challenges!

Weeknotes #26

Manchester Travels

This week I spent a couple of days on the newly designed fourth floor of our Manchester office. Despite the rain (seemingly every time I go to Manchester) it was great to see a new, modern work environment with lots of space for visualisation and collaboration amongst teams.

Source: PwC_NorthWest Twitter

One of the main reasons for my visit was to present our proposed new ways of working model to get an agreement around this being where we *think* (as it’s emergent) we want to go, as well as formulating a working group and how to approach the change (incremental rather than big bang). It was one of the most positive meetings I’ve been to in recent months, both in the sense of getting feedback/providing clarity to others, and from a personal standpoint being able to passionately showcase the work our team have spent the last few months on.

Another reason for my visit was to meet our new Agile Delivery Manager — Stefano Ciarambino who has moved across from Consulting to do a six month secondment with our team. Stefano and I have chatted on and off about all things Agile for the past 6/7 months, after he attended one of the Hands On With Azure DevOps courses I ran. I was impressed with his experience and understanding as a practitioner, and with us starting to gain momentum with our new ways of working model, we needed a new face in the team to help *do* and help others do. Having learnt through some past mistakes, I’m quite particular now around who we have in our core team and them bringing something unique to the table. I’m hoping it proves to be an enjoyable six months for him and for us, so that we can make his stay a permanent one — welcome aboard Stefano!

Reflection

This week is a bit of one for anniversaries! This post will mark 26 weeks/6 months of writing weeknotes. In reflecting on the writing of them, I’ve found it to be a great vehicle for checking that the work I’m doing actually has purpose. For example, if I’m getting to the end of the week and struggling to come up with things to write about, then maybe I’ve not been working on the right things! I hope sharing the things I’m learning through our own internal Agile adoption will help others who are experiencing it in a big organisation, and show those who I do work with that I’m learning all the time, just as they may be.

This week also marks four years since I joined PwC. When I think about that first team I joined, who were split by developer per application, estimating tasks in story points but stories in hours, not delivering anything working at the end of sprints and working without POs, it’s fair to say I’ve come a long way since then! There have been some memorable high points for me, a highlight being the last twelve months spent building a team of people who I look forward to working with every day. There have also been some low points, for example being maybe too dogmatic at the beginning around Agile, or having to walk away from teams as the negative behaviours inflicted on them from a management perspective were not going to change.

Here’s hoping both continue for the foreseeable future, and to writing the same again in twelve months’ time!

Next Week

Next week is our sprint review, looking forward to getting feedback on work we’ve done and what we should focus on next. I’ve also got some conversations with people who could help us in our transition towards new ways of working — looking forward to hearing their thoughts and seeing what adjustments we could make.

Weeknotes #25

Team Identity

One of the experiments we’re running is for our newly formed teams to come up with a team name as well as some form of team identity. Typically we’ll suggest teams complete something like a team canvas, coming back to revisit it as the team evolves and matures, to better reflect ways of working.

We’re struggling at the moment around this becoming a ‘thing’ that teams do, and it’s often greeted with derision, a blank stare or a roll of the eyes. As well as that, teams aren’t always the most creative with either team names (we’ve suggested a theme of ‘major places — fictional or non-fictional’) or filling out the canvas (i.e. completing them with Agile buzzwords so as to ‘convince’ management). This week in particular I’ve mentioned it a number of times to people, the majority of the time getting one or all of the reactions above. I’m struggling with why this is, in particular when I think about some of the great teams we know of.

A great (albeit begrudging) example for me is Manchester United. When I think about the Ferguson era, the class of 92 and the values and principles he instilled at that club, everyone there knew what it meant to be ‘Manchester United’, in terms of what was expected both on the pitch and off it. You hear ex-players now talk with great passion about what it means to be at that club and to wear that shirt, and how in recent teams they’ve lost their way, with those values seemingly going out the window. When coming up with a team identity, this is what we want teams to strive for.

I’m not sure if it’s because work has become so transactional for people that they can’t possibly fathom something that isn’t solution focused (i.e. naming yourself after the application you’ve worked on), or that psychological safety is lacking in the context they are in (i.e. teams don’t feel safe enough to be viewed as having fun or to show vulnerability). One to watch in the coming months, but a topic I’m certainly finding difficult at the moment.

Agile Portfolio Management

This week I’ve also been helping on some client work where we’re looking at helping their PMO transition to/adopt Agile ways of working, and what it means for their area. For a lot of organisations this is probably one of the hardest areas to change: with a PMO that is focused on milestones, RAG statuses and being on time and on budget, getting them to shift mindset is one of the biggest challenges. We’ve been trying to do some of this internally, with a real shift towards a focus on flow from a metrics perspective, but also agreeing some principles around what the PMO is there for. Even just adopting the metrics more relevant to Agile teams is not enough; it requires a whole shift in thinking towards the PMO being an enabler for business agility. Things like focusing on the small batch, focusing on outcomes, as well as scaling from team work through to portfolio and strategy at the relevant flight levels, are really how a PMO “transforms”.

We’ll be playing back some examples to them next week, which should hopefully lead to some good conversations and follow-on opportunities for how we can enable their Agile adoption.

Next Week

Next week it’s back to Manchester, as we look to setup a ways of working group to take our adoption to the next level. We’re looking to blend experienced Agilists with PwC’ers, which should give us that balance. As well as that we have our first Agile Delivery Manager starting, looking forward to welcoming Stefano to the team!

Weeknotes #23

Incremental Approach to Change

This week we’ve been having numerous discussions around roles in our proposed delivery model, with some differing views as to how to approach it. My view, much like with Agile delivery itself, would be to incrementally roll out new ways of working, once confidence builds and constraints are addressed. Taking the expectations/skills we desire in new roles and mapping the current skill set, it’s clear we have a shortfall in what is needed. That’s not to say we don’t have the right people; in particular, lots of people can bring contextual knowledge to the table that will help us as change leaders learn. However, there should be a recognition that a period of upskilling is needed, one which can only be done through continuous coaching with these individuals, something you can’t do in one big ‘batch’. Similarly, the technical excellence needed to aid business agility is currently few and far between, which presents a dangerous flirtation with something akin to a TACOS implementation.

[embed]https://twitter.com/tottinge/status/943518694436679681[/embed]

A learning this week therefore has been witnessing first hand the reasons why so many change programmes fail. Taking a SAFe rollout as an example: to suddenly train everyone, then set them up in a release train with a bunch of new roles/titles, without any sort of consideration for the individuals impacted, is very naive. To expect people to be ‘transformed’ overnight (or over 2/3/4/5 days, depending on your preferred training course) into Agile people in an Agile organisation is simply not how change works.

Change is hard, change takes time and those involved in change need to be prepared for this. Shortcuts only guarantee one thing: change won’t stick.

Metrics for Business Decisions

I’ve been continuing to read Mattia Battiston and Chris Young’s book — Team Guide To Metrics for Business Decisions. Despite only being 70% complete, it’s been a great read so far, and should provide a lot of inspiration for practitioners, both those that are experienced and those that are just starting their Agile journey. A favourite section of mine is the one on forecasting, and the pitfalls of using story points. I conveyed similar thoughts to our internal and client-facing teams around why we feel the need to use story points with teams, yet it still feels like, as an industry, we’re finding it hard to let go of some of those things we hold dear. Buy it here.

Ron Jeffries said it best:

[embed]https://twitter.com/ronjeffries/status/943790497981706242?lang=en[/embed]

Team Health Checks

I’ve started introducing the concept of us running Team Health Checks with our Agile teams, to make visible whether teams are in an environment in which they can succeed and whether our culture fits their needs. Whilst retrospectives are great, sometimes a high-level view of certain areas and identification of trends is needed, particularly if you’re a leader and sometimes need a TL;DR.

This is an example of the outputs from the health check we ran within our own Agile coaching team at the beginning of our retro on Tuesday. We take the mode (most frequent) value in the responses, settling on the lower of the two if there is a tie. I slightly tweaked the well-known version made famous by Spotify, as not all of our teams are doing software development (however, a separate template exists for those that do), as well as adding a question on psychological safety which, given Google’s findings, is one of the most important things to consider. Looking forward to seeing more teams give it a try and spot some underlying trends that may not immediately be apparent.
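
As an aside, that mode-with-a-tie-break aggregation is simple enough to sketch. Here’s a minimal Python example; the responses and the 1–3 scale are made up for illustration.

```python
from collections import Counter

def health_check_score(responses):
    """Mode of the responses; if several values tie on frequency, take the lowest."""
    counts = Counter(responses)
    highest = max(counts.values())
    tied = [value for value, count in counts.items() if count == highest]
    return min(tied)

# Made-up responses on a 1-3 scale (e.g. red/amber/green)
print(health_check_score([3, 3, 2, 2, 1]))  # 3 and 2 tie on frequency -> 2
```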

Next week

Next week involves a trip to Manchester, where we’re interviewing for a new Product Manager and DevOps Engineer. It’s a good indicator of the backing we have for the change that we’re able to make these hires, aligning with roles that are recognised across the industry, rather than creating roles specific to our IT department. Looking forward to meeting some interesting candidates!