Wednesday, May 27, 2015

Glossary Proposal


One of the problems Kanban practitioners have faced over the past several years is the lack of agreement on the terminology used to describe flow systems. This in turn has led to confusion both for those learning the method and for those implementing tools to support it. This blog has made a few previous attempts to disambiguate common terms (see discussions of Cycle Time for example). Mike Burrows's Glossary of Terms [burr], reproduced on the Lean Kanban University site [lkun], is also very useful, though it does not give guidance on which terms to use when applying Little's Law to sub-processes in a more complex Kanban system. 
This article is another foray into this minefield and is principally a proposal for the definitions of commonly used terms relating to Little's Law, particularly seeking terms applicable in complex flow systems and sub-processes within such systems. It is an invitation to others in the community to endorse these definitions, or propose alternatives. Let's not go another seven years with this unresolved!

Note: This is a work in progress and will be updated from time to time in response to feedback from other authors and practitioners.



The Kanban Method is a process improvement approach based on understanding knowledge work as a flow system. The life cycles of the "work items" in such flow systems are analysed and improvements to the process are made based on observable positive change. So let's start with Little's Law since it is the first basis of understanding flow systems. For a given system it may be defined as follows [litt]:
Arrival Rate = WiP / TiP     
where each quantity denotes the arithmetic mean (indicated by an overline in Little's notation) in a "stationary" or other compliant system.
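As an illustration, here is a minimal sketch (with made-up arrival and departure times) showing the law holding exactly over a finite observation window in which every item both arrives and departs:

```python
# Hypothetical data: (arrival time, departure time) in days for each work
# item, all within the observation window [0, T].
items = [
    (0, 4), (1, 3), (2, 6), (3, 5), (4, 8), (5, 9), (6, 10), (7, 10),
]
T = 10.0  # observation window length in days

arrival_rate = len(items) / T                        # items per day
avg_tip = sum(d - a for a, d in items) / len(items)  # mean Time in Process
# Average WiP = total item-time spent in the system / window length
avg_wip = sum(d - a for a, d in items) / T

assert abs(avg_wip - arrival_rate * avg_tip) < 1e-9  # Little's Law
print(f"Arrival Rate = {arrival_rate:.2f}/day, "
      f"mean TiP = {avg_tip:.2f} days, mean WiP = {avg_wip:.2f}")
```

Note that for such a window the identity is purely arithmetic (both sides reduce to the same sum); the subtleties arise when items span the window boundaries, which is where the "stationary system" condition matters.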
Arrival Rate
Measured in: work items per unit of time (seconds, hours, days, working days, etc.)
Definition: The number of units entering the system per unit of time. The work item must be defined for the metric to be meaningful (e.g. whether a User Story, Feature, Case, Initiative, Physical Item, Episode, Request, etc.). Little uses Arrival Rate in his definition of the law and, if a distinction is made between delivered and discarded items, this is necessary in Kanban systems too. Since items may be either delivered or discarded:
Arrival Rate = Delivery Rate + Discard Rate 
Note: When examining historical data it may be desirable to exclude discarded items and use the alternative formulation of Little's Law commonly found in Kanban system analysis:
Delivery Rate = WiP / TiP
In this case the historical values of WiP should include only those items that have been delivered and TiP should be the time in process for delivered items only, not discarded ones.
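A sketch of this delivered-only formulation, using a hypothetical item history over a 20-day window:

```python
# Hypothetical history: (arrival day, exit day, outcome) per work item.
history = [
    (0, 5, "delivered"), (2, 4, "discarded"), (3, 10, "delivered"),
    (6, 9, "delivered"), (8, 11, "discarded"), (10, 18, "delivered"),
]
T = 20.0  # observation window in days

# Exclude discarded items before computing the three quantities.
delivered = [(a, d) for a, d, outcome in history if outcome == "delivered"]
delivery_rate = len(delivered) / T                               # items/day
tip_delivered = sum(d - a for a, d in delivered) / len(delivered)
wip_delivered = sum(d - a for a, d in delivered) / T

print(f"Delivery Rate = {delivery_rate:.2f}/day")
print(f"Check: WiP / TiP = {wip_delivered / tip_delivered:.2f}/day")
```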
Related terms: Delivery Rate, Discard Rate, Discard, Abort
References: [ande], [burr], [litt]

Delivery Rate
Measured in: work items per unit of time
Alternatives: Throughput, Throughput Rate
Definition: The number of work items emerging complete from the system per unit of time. This is a key metric for understanding the productivity of the system.
Related terms: Arrival Rate, Discard Rate
References: [ande], [burr]

Discard Rate

Measured in: work items per unit of time
Definition: The number of work items discarded before completion per unit of time. In typical Kanban systems this metric may be significant relative to Arrival Rate, particularly where the "2 stage commit" is used to prepare but not necessarily complete options. Discard is a general term for abandoning a work item. More specifically to Abort a work items means to discard the item after the Commitment Point in a development system
Related terms: Arrival Rate, Delivery Rate, Commitment Point, Abort, Discard

Commitment Point

Measured in: not a metric, a specific point in a defined process
Definition: In a development system process, it is the point at which a commitment is made to develop the work item. Before this point work done supports the decision whether or not to develop the item.
Related terms: Abort, Discard

Abort

Measured in: not a metric, an action
Definition: To Discard a work item after the Commitment Point.
Related terms: Commitment Point, Discard

Discard
Measured in: not a metric, an action
Definition: To stop work on an item and remove it from the process. Note that an item is "discarded" in this sense even if it might be worked on in the future, for example if the work item is moved back to a queue prior to the system/sub-process under consideration. The term is not specific about when in the process the item is discarded, however in a development system process it may apply to items discarded prior to the Commitment Point, since after this point the term Abort is applicable.
Related terms: Abort, Commitment Point

WiP, Work in Progress

Measured in: work items
Definition: The number of work items which have entered the system but which are not yet either completed or discarded.
Related terms: Arrival Rate, Delivery Rate, TiP
References: [ande], [burr], [hopp], [marc], [rein]

TiP, Time in Process
Measured in: units of time
Alternatives: Cycle Time (but see cautionary note below), Lead Time (when referring specifically to the time in process in a Kanban development system from the Commitment Point to delivery)
Definition: The time that a work item remains in the system or sub-process under consideration prior to being either completed or discarded. This is the key metric in understanding the time to delivery of a system. More specific terms may be derived by replacing "Process" with the particular part of the process of interest, for example "Time in Development". As with all the terms in Little's Law the scope of the system or sub-process under consideration must be well defined to ensure they are meaningful.
A key reason for recommending this term is that it sidesteps the "Cycle Time versus Lead Time" debate which shows no sign of resolution within the communities that use these terms.
Related terms: Cycle Time, Lead Time, Touch Time, Takt Time
References: [macc]

Cycle Time
Measured in: units of time
Alternatives: For CT1 (defined below) use its reciprocal - Delivery Rate; for CT2 use TiP
Definition: The time taken for a "cycle". This is a very ambiguous term which should not be used in Kanban without qualification. Examples of where it is commonly used in the literature are:
  • In a factory: the time between completed units exiting the system
  • For a queue: the time an item remains in the queue
  • For a work station or machine: the time between completed parts exiting the station
  • For a worker/team: the time between starting and completing an item
  • For a project/team: the time between deliveries of completed items.
It is incorrect to use the term for any period which is not contiguous, e.g. Touch Time. Unfortunately such usage may be found in some tool implementations.

Broadly speaking there are two categories of usage for Cycle Time which may be referred to as CT1 and CT2. CT1 is the time between successive items emerging from a station or system. CT2 is the time an item takes from entering the system to leaving it. It is left to the reader to decide which of the examples above are CT1 or CT2. Note that there is a special case (when WiP=1) where CT1=CT2. Unfortunately this just tends to confuse people further, especially when the example given to define the term is one where WiP=1!
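The distinction can be sketched with hypothetical timestamps for three items passing through a station:

```python
# Illustrative start and finish times (in hours) for three work items.
starts   = [0, 1, 2]
finishes = [3, 5, 7]

# CT1: the gap between successive completions leaving the station.
ct1 = [b - a for a, b in zip(finishes, finishes[1:])]
# CT2: each item's own start-to-finish time in the station.
ct2 = [f - s for s, f in zip(starts, finishes)]
print("CT1 (between completions):", ct1)  # [2, 2]
print("CT2 (time in process):   ", ct2)  # [3, 4, 5]

# Special case WiP = 1: items processed strictly one at a time,
# so CT1 and CT2 coincide.
starts1, finishes1 = [0, 3, 6], [3, 6, 9]
assert [b - a for a, b in zip(finishes1, finishes1[1:])] == [3, 3]
assert [f - s for s, f in zip(starts1, finishes1)] == [3, 3, 3]
```

With WiP above 1 the two measures diverge immediately, which is exactly why an unqualified "Cycle Time" is ambiguous.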
Where Cycle Time is used in the Kanban community, its definition "generally" coincides with that of Lead Time for Kanban development systems given below.
Author's Note: Since there is no universally accepted definition of what Cycle Time means in a flow system, the term should simply be avoided.
Related terms: TiP, Lead Time, Touch Time, Takt Time
References: [burr], [chew], [hopp], [litt], [marc], [modi], [rein], [roth], [woma]

Lead Time
Measured in: units of time
Definition: In general usage, Lead Time means the time from the request for an item to the delivery of the item (this may simply be the time to get an item from stock or the time to specify, design, make and deliver an item). However its usage in Kanban development systems is more specific. It indicates the time from the Commitment Point to the delivery. For this to be useful the commitment and delivery points must be made explicit.
Note there remains some ambiguity in this term and I would recommend using TiP in most circumstances, and certainly when analysing sub-processes in a larger flow system. If you use Lead Time, qualify it if necessary (e.g. Development Lead Time) and ensure that you define the meaning you wish to assign to it in your context.
Related terms: TiP, Cycle Time, Touch Time, Takt Time
References: [ande], [burr], [marc]

Touch Time
Measured in: units of time
Alternatives: Value-Creating Time
Definition: The sum of all the times during which a work item is actively being worked on (excluding wait times, for example time held in stock or in queues).
Related terms: TiP, Cycle Time, Lead Time, Takt Time
References: [modi], [woma]

Takt Time
Measured in: units of time
Definition: The projected customer demand expressed as the average unit production time (i.e. the time between the completion of work items) that would be needed to meet this demand. It is used to synchronise the various sub-processes within the system being designed to meet demand without over or under production.
Related terms: TiP, Cycle Time, Lead Time, Touch Time
References: [marc], [like], [woma]
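For example, a takt-time calculation with illustrative figures might look like this:

```python
# Takt Time = available production time / projected customer demand.
# All figures below are illustrative.
available_minutes_per_day = 7.5 * 60   # e.g. one shift of 7.5 hours
daily_demand = 90                      # projected items required per day

takt_time = available_minutes_per_day / daily_demand
print(f"Takt Time = {takt_time:.1f} minutes per item")
```

Each sub-process would then be paced to complete one item every 5 minutes, neither over- nor under-producing relative to demand.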

Flow Efficiency
Measured in: %
Definition: The ratio of the time spent working on an item (Touch Time), to the total time in process (TiP), i.e.:
Flow Efficiency = Touch Time / TiP
Related terms: Resource Efficiency
References: [modi]
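A trivial worked example with illustrative figures (knowledge-work flow efficiencies in the low single digits to low tens of percent are commonly reported):

```python
# Flow Efficiency = Touch Time / TiP, for a single work item.
touch_time_days = 3.0   # sum of periods of active work (illustrative)
tip_days = 30.0         # total Time in Process (illustrative)

flow_efficiency = touch_time_days / tip_days
print(f"Flow Efficiency = {flow_efficiency:.0%}")
```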

Resource Efficiency
Measured in: %
Definition: The ratio of the time a resource (for example a person!) is actively working on a work item, to their total available time.
Related terms: Flow Efficiency
References: [modi]


References
  • [ande] Anderson, David J. Kanban, Blue Hole Press. (2010)
  • [burr] Burrows, Mike. Kanban from the Inside, Blue Hole Press. (2014)
  • [chew] Chew, W. Bruce, Harvard Business School Glossary of Terms [as referenced by Fang Zhou]. (2004)
  • [hopp] Hopp, W.J and M. L. Spearman, Factory Physics, 3rd ed., McGraw Hill, International Edition. (2008)
  • [like] Liker, Jeffrey K. The Toyota Way, McGraw Hill. (2004)
  • [litt] Little, J. D. C. and S. C. Graves. Little's Law, pp 81-100, in D. Chhajed and T. J. Lowe (eds.) Building Intuition: Insights From Basic Operations Management Models and Principles. doi: 10.1007/978-0-387-73699-0, (c) Springer Science + Business Media, LLC. (2008)
  • [lkun] Lean Kanban University. Glossary of Terms, from Kanban from the Inside, Mike Burrows. (2014)
  • [marc] Marchwinski, C. et al., eds. Lean Lexicon: a graphical glossary for Lean Thinkers, 4th ed. (2008)
  • [macc] Maccherone, Larry. Introducing the Time In State InSITe Chart. LSSC. (2012)
  • [modi] Modig, N. and P. Åhlström, This is Lean, Rheologica Publishing. (2013)
  • [rein] Reinertsen, Donald G. The Principles of Product Development Flow, Celeritas Publishing. (2009)
  • [roth] Rother, Mike and John Shook. Learning to See: Value Stream Mapping to Add Value and Eliminate Muda, Lean Enterprise Institute. (2003)
  • [woma] Womack, J. P. and D. T Jones, Lean Thinking, Simon and Schuster. (1996, 2003)

Friday, May 15, 2015

Growing Kanban in Three Dimensions

Kanban systems can work at different scales and in widely different contexts. Indeed any organisation that delivers discrete packages of value ("work items") and which is interested in maximising the value and timeliness of its delivery, can analyse and improve its performance using the Kanban method. 

Kanban systems can grow - in fact in most cases it's much better that they grow than a massive process change is made suddenly across a whole organisation. "Big bangs" tend to be quite destructive, even if they could clear the way for something new. There are three dimensions in which Kanban systems grow:


  • Width-wise growth: encompassing a wider scope of the lifecycle of work items than the typical "to do - doing - done" of a single division of the process. It can cover everything from the idea to real value - or "concept to cash", though cash may come before or after the realisation of real value.
  • Height-wise growth: by considering the hierarchy of items that make up valuable deliveries, each level of the hierarchy having differing flow characteristics. (This dimension uses the "scale-free" nature of Kanban: the same principles and practices apply whatever the size of the work item.)
  • Depth-wise growth: not only depth of understanding but depth of penetration through the full set of services required by the organisation to deliver value. (Sometimes referred to as "Scaling by not scaling" or "service-oriented Kanban", the approach here connects multiple services at the same level through feedback loops that balance the capacity of the various kanban systems.)

We'll look at each of these dimensions in upcoming articles. Which dimension to grow first will depend on context and the motivations for change. Any change needs to pay for itself with improvements in the flow of value, so asking "why?" is a more important first question than "what?".

When you come across a good idea ("agile" in general springs to mind at this point) it is very tempting to sweep away whatever you were doing before you were converted to the new idea, and start doing it everywhere. It should not come as a surprise to those who do this, that very soon a new idea will come along. With the poor results from mass conversion to the caricature of the original idea you adopted, the same cycle will be repeated. Instead grow the changes organically.

Try this: start small; understand the ideas as you assimilate them; grow what works and understand what doesn't work; work out why. Success will follow.

Acknowledgement: Thanks to +Pawel Brodzinski for the discussions on Portfolio Kanban... and one of the graphics on the top floor of the above diagram.

Thursday, May 14, 2015

Earned Value Management and Agile Processes

I've recently been working with a client whose customer requires project reporting using Earned Value Management (EVM) metrics. It made me realise that, since they also wish to use agile methods, a paper I wrote back in 2008 could be relevant to them, and maybe a few others. When I looked for it online it was no longer available, so I thought I'd remedy that here. You can access the paper by clicking this link: EVM and Agile Processes – an investigation of applicability and benefits.

EVM is a technique for showing how closely a project is following both its planned schedule and planned costs. It's a superior method to simply reporting time and cost variance, since if the project has slipped but also underspent you cannot tell from the simple variances the degree to which the underspend has caused the slippage. EVM's cost efficiency and schedule efficiency (nothing to do with efficiency by the way!) can tell you this.
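For readers unfamiliar with the indices, here is a sketch of the standard EVM efficiency calculations (the conventional CPI/SPI formulas, not anything specific to the paper), with purely illustrative figures:

```python
# Standard EVM efficiency indices:
#   CPI (cost)     = Earned Value / Actual Cost
#   SPI (schedule) = Earned Value / Planned Value
planned_value = 100_000   # budgeted cost of work scheduled to date
earned_value  = 80_000    # budgeted cost of work actually performed
actual_cost   = 90_000    # actual cost of work performed

cpi = earned_value / actual_cost     # < 1.0 means over budget
spi = earned_value / planned_value   # < 1.0 means behind schedule
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
```

Here the project is both behind schedule (SPI below 1) and over budget for the work done (CPI below 1), a combination the simple time and cost variances alone would not untangle.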

However agile methods do not have a fixed scope during their lifecycle and this can make EVM reporting effectively meaningless. The paper explains a technique for using the substitutability of User Stories, estimated in points, for overcoming this problem. If this is relevant to your business environment, I hope you find it useful.

Agile EVM has continued to develop since this paper and you can find more details and further references in the Wikipedia entry here: Earned value management: Agile EVM.

Citation: Andy Carmichael (2008). EVM and Agile Processes – an investigation of applicability and benefits, The 2nd Earned Value Management Conference, NEC, Birmingham UK, 12 March 2008.
Project Manager Today Events. www.pmtoday.co.uk.

Friday, March 20, 2015

Does your Definition of Done allow known defects?

Is it just me or do you also find it odd that some teams have clauses like this in their definition of done (DoD)?
Done... or Done-But?
... the Story will contain defects of level 3 severity or less only ...
Of course they don't mean you have to put minor bugs in your code - that really would be mad - but it does mean you can sign the Story off as "Done" if the bugs you discover in it are only minor (like spelling mistakes, graphical misalignment, faults with easy workarounds, etc.). I saw DoDs like this some time ago and was seriously puzzled by the madness of it. I was reminded of it again at a meet-up discussion recently - it's clearly a practice that's not uncommon.

Let's look at the consequences of this policy. 

Potentially for every User Story that is signed off as "Done" there could be several additional Defect Stories (of low priority) that will be created. It's possible that finishing a Story (with no additional user requirements) will result in an increase in the Product Backlog size! (Aaaagh...) You're either never going to finish or, more likely, never going to fix those Defects in spite of all the waste that will be generated around recording, estimating, prioritising and finally attempting to fix the defects (when the original developer has forgotten how he coded the Story, or has been replaced with someone who never knew it in the first place).

What should happen then? 

Clearly the simple answer is that if you find a bug (of whatever severity) before the Story is "Done", fix it. You haven't finished until it works - just avoid double-think like I've finished it even though the product now contains new defects.

Can there be exceptions to this?

Those who think quality is "non-negotiable" would probably answer "No", but actually (whether acknowledged or not) we all work with a concept of "sufficient quality". It is inherent in ideas like "minimum viable product" and "minimum marketable feature". Zero defects is a slogan not a practicable policy for most product developments. Situations where we find defects that are hard to fix when working on a User Story, bring this issue to the fore.

So here's what I recommend Product Owners do. Firstly, don't sign off a Story if it contains defects! Secondly if defects are found choose to do one of the following:
  1. Insist it's fixed. Always preferred, and should nearly always be followed. Occasionally however it is too expensive, but unless the cost of fixing it is greater than the time already spent on the Story I would always recommend fixing. (We discuss below the problem of "deadlines".)
  2. Accept it's not a defect... at least not a defect that will ever get fixed (unless it's found and added to the Backlog by users). This doesn't feel right but it is more honest than adding items to the Product Backlog that will never be prioritised.
  3. Agree the defect is actually a different Story, functionality that will be covered elsewhere even though it is part of the same Epic or Feature. The original Story will not be released without all the functionality of that Epic/Feature, so it will be fixed before release. Note that this option depends on a well understood concept of Epic/Feature and appropriate release policies around it.
What I am arguing for here is that our Definition of Done trumps deadlines, Sprint boundaries and Sprint "commitments". I believe it is confusion in this area that leads teams to adopt misguided DoDs. That confusion in turn results in the need for "Maintenance Teams" that clear up after Development teams have scattered defects through the product, or the common practice of dumping defects into massive Defect logs that will never be cleared, even if the development continues for decades! As +Liz Keogh has observed, deadlines should really be renamed "sad-lines" - if they're missed nobody's dead; maybe a few are sad! It is not that such planned dates are unimportant, of course they are not. It is that agreed dates should not have greater importance than agreed quality.

These "Done-But" policies are most common in development departments where the concept of commitment ("Look me in the eye and tell me you will complete these Stories by this date") is considered more important than Done, i.e. that completing a Story means it will be at the quality agreed. The Scrum Guide replaced the word "commitment" with "forecast" in a recent revision for a reason - commitment should be what a team member brings to the overall goals of the organisation, not to a date that at best was derived from very limited information.

Of course in reality both commitment to dates and a particular Definition of Done must be subservient to the overall business goals. We can move a release date for an Epic/Feature to a later (or earlier) date if that will better fulfill the overall goals. Similarly changing the DoD or quality expectations up or down should always be considered in order to improve business outcomes.

Does your Definition of Done allow known defects? If so please come back to me and tell me why... or if you would change it, tell me how?

Tuesday, September 16, 2014

Care about business strategy? - Tune to this channel...

If you think the principles and values of agile extend beyond the narrow boundaries of software development teams to organisations and corporate cultures, I think, like me, you'll be inspired by a couple of presentations from the recent Agile On The Beach conference. They are great bedtime viewing (for when you've finally had enough of Bakeoff!).

Firstly a video from Tom Sedge: TDD for Business Strategies – Developing Business Strategies Test-First.
Tom Sedge provides very practical advice on how to define mission (why we're here - our purpose and driving cause), vision (where we're heading - how the world will be different), goals (what we want - destinations or desired outcomes), and strategies (how we will get there - potential routes to the destination). His examples of good (Tesla, SpaceX) and bad (Kodak) missions/visions are particularly helpful. How could the inventor of the digital camera go bust just as digital photography exploded onto the scene, particularly when its founder George Eastman expressed his vision in the 1880s as "a world where the camera is as convenient as the pencil"? These days I quite often wish I had a pencil on me, yet I always have a camera! His vision makes a sad contrast with Kodak's mission and vision statement from the early 2000s - a paragraph of unmemorable platitudes about customer focus and shareholder value that no one outside the company would care a fig about!


The second one is from Bjarte Bogsnes, Vice President of Performance Management Development at the major international oil company, Statoil. It's on Beyond Budgeting – an agile management model for new business and people realities. If you give it a listen you'll understand why (even though I think budgets are essential) I'm not that keen on investing much time in annual budgeting. In his words, the approach "... is about rethinking how we manage organisations in a post-industrial world, where innovative management models represent the only sustainable competitive advantage ... releasing people from the burdens of stifling bureaucracy and suffocating control systems, trusting them with information and giving them time to think, reflect, share, learn and improve."

Remember he's talking about a massive oil company - not the easiest place to introduce agile thinking! Gives hope to the rest of us.

Thursday, September 11, 2014

x-Banning a process

I've just proposed an experience paper for LKUK14 - "x-Ban the process! (or how a product team is improving value delivery rate with Kanban)". Feel free to vote for it by the way here!

Scrumban, Xanpan (XP-ban) - even Prince-ban and DSDM-ban - have all been used as portmanteau words to explain the journeys from a particular named process or framework to a continually evolving and improving process, guided by the principles and practices of Kanban. If you are trying to apply a named process but frustrated by a patchy track-record of improvement, consider the alternative: x-Ban it!

When I was asked in early 2013 if I would work with Clearvision's product development team, they had just adopted Scrum (a matter of weeks before). Their process, like most I've reviewed from teams claiming to use Scrum, was not compliant with a large number of Scrum rules. It was pragmatic, constrained, variably applied and ripe for improvement... but it certainly wasn't Scrum. We had two choices - apply Scrum rules as soon as possible (defining the backlog of necessary changes and a timetable to apply them), or “x-Ban” it (use Kanban to attain evolutionary changes that we kept only if we were confident they resulted in improvements). We did the latter.

There are many lessons I've learned from this experience: some things that worked - and some that didn't. They're lessons and general principles that others can apply on a similar journey. It has taken much longer to adopt some practices than I expected, and the current process is quite different from what I expected when I started 18 months ago (it's more Scrum-like now than when I arrived, for example!), but it is a route I would recommend to others.

Start x-Banning your process now!