Mike MacDonagh's Blog

Somewhere in the overlap between software development, process improvement and psychology


Epics or Integration Scenarios?

Many teams I’ve worked with struggle to make sense of their huge backlogs of stories, struggle to explain the scope of a release and current work – and most importantly, struggle to explain how the stories they’re working on contribute to business value!

All of these problems are killers when it comes to joining up what a development team is doing with a corporate portfolio, governance and milestones. Features and business benefits seem to be too high level; stories seem to be too low level.


Epics

A common approach to trying to solve these problems is to use "epics". Originally intended to be just another type of story (one that was too big for a sprint), "epics" are now commonly used in a requirements hierarchy by teams and organizations. I've seen loads of backlogs where every story had an epic "parent" – sometimes even a 1-1 mapping from epics to stories(!!!!). I know that some process frameworks recommend this approach, but I don't, and here's why:

Stories can just be on a backlog

Stories don't have to have a parent. A backlog made up of stories is just great; it doesn't need a hierarchy that's really a Work Breakdown Structure using some agile-y sounding words to hide the fact. A backlog isn't a Gantt chart, stop trying to turn it into one by stealth!

Actual epics are useful for teams

"Epics" in their original form were simply stories too big for a sprint. Sometimes that just creeps up on you, so it's useful to be able to tear up a big story and replace it with some smaller ones. By formalizing a Feature -> Epic -> Story hierarchy as an "agile at scale" process, I've seen corporate processes become protectionist over epics, trying to use them to fix scope, counting and metricating them, to the extent that teams can't simply discover that some of their stories are epics.

Bigger stories need more than a sentence

I know that a story can have more elaboration than the standard form of "as a user I want to…" but few stories in the wild have more than a sentence (you're often lucky if you find some with acceptance criteria). I find that to describe how various stories contribute together (especially across several teams and/or systems) to deliver business value I need more than a sentence. I often need a diagram, or a paragraph or two.

“Epics” sound silly

What's in a name? Would a rose by any other name smell as sweet? Probably. Names shouldn't be that important, and yet they are: what we call things has a significant impact on how people represent them. "Epics" are part of the money-spinning agile movement's language. To businesses they either sound silly, or they become a way to pretend to be doing "agile at scale" by adopting the term for portfolio requests.

I like Integration Scenarios more

Ok, so this is clearly subjective and personal, but that's kinda my point. I don't use process elements because someone told me to, and I don't think that something like Integration Scenarios is always better than epics. I prefer to pick and choose different tools for different jobs. In my experience Integration Scenarios are frequently a better tool for scoping, elaborating and agreeing end-to-end threads across a system or system-of-systems, so they're one of my "go to" tools.


So what are Integration Scenarios?

Integration Scenarios come originally from Holistic Software Engineering.

Integration Scenarios describe an end-to-end workflow across a Product or Product Family. Integration Scenarios are threads across a number of low level requirements, such as user stories, that collectively deliver Business Value. They are ideally suited to describing the scope of Releases and providing the input for the integration testing necessary to achieve higher levels of acceptance.

Integration Scenarios can also be used to manage cross-cutting Non-Functional Requirements and drive the discovery of Architectural Mechanisms.

Integration Scenarios provide the context for discussions around User Stories

[Figure: example of a graphical Integration Scenario]

Integration Scenarios can be described in a number of ways, ranging from textual to graphical, depending on the nature of the system and the best way to achieve clarity. In all of these examples we recommend using a meaningful title, not "Integration Scenario"!

One of our favorite ways is to draw an activity diagram of user interactions using GUI mockups in the nodes, with references to features/use cases/stories as appropriate. We do not recommend strictly following a diagram format, but using the visual or textual elements that will communicate the scenario as effectively as possible. Story-boarding can also be an effective way of describing Integration Scenarios.

When using a graphical view we recommend including links to stories or systems that are covered by the scenario and adding rich graphics to standard modeling notation to increase communication.

[Figure: example of a textual Integration Scenario]

Integration Scenarios using textual formats are best described as a narrative with links to contributing stories or systems.

They should ideally be no more than a couple of paragraphs describing interactions between actors across a number of stories. Sometimes adding extra alternative or exceptional flows can be useful, in the style popularized by use cases, but if there's a lot of "flow logic" involved then we recommend using a diagrammatic form (e.g. workflow) to avoid convoluted textual documents. Similarly, adding a non-functional section indicating non-functional requirements or constraints that apply across the stories making up the scenario can be useful.
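For illustration, here's the shape a short textual Integration Scenario might take. The scenario, stories and non-functionals below are entirely hypothetical:

Renew a policy online (Must). A returning customer signs in [story: account sign-in] and reviews their existing policy [story: view policy]. The pricing service generates a renewal quote [story: renewal quote], which the customer accepts and pays for [story: online payment]. The policy system confirms the renewal and the document service issues the new policy schedule [story: document generation].

Non-functional: quote generation should complete within a few seconds; payment must use the payment mechanism common to the Product Family.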

Integration Scenarios describe the dynamics between stories and how they collectively contribute to delivering business value. To that end we recommend highlighting and directly linking to the stories that make up the Integration Scenario. In this way Integration Scenarios add context to the mass of user stories and provide a sound basis for scoping a system, describing a release, identifying functional architectural mechanisms, integration testing, and product acceptance.

Regardless of format, we recommend that Integration Scenarios are described from the user perspective and focus on achieving business value. The choice of format should be driven by the type of Product being developed (e.g. graphical/user facing vs. back end service) and further influenced by team experience and available tooling. We strongly advise not being concerned with the constraints imposed by formal modeling languages; the communication is important, not the notation.

[Figure: simple graphical list view of an Integration Scenario]

Sometimes we wish to use Integration Scenarios as part of diagrams, even workflow/activity diagrams (e.g. for Value Stream Analysis), in which case the full text version might be a bit big. In these situations we recommend using a "zoomed out" view which just lists a title and the related stories. In fact this "elided" form of an Integration Scenario is sometimes useful simply for gathering up a collection of stories into a larger chunk for planning purposes. Story lists in these diagrams don't have to be exhaustive, just indicative.

Practicality

Textual Integration Scenarios can be crafted in any decent rich text editor that allows hyperlinks. We recommend keeping them short, since the detail of the requirements should be in lower level stories or UX designs/storyboards; the Integration Scenario focuses on the dynamics between requirements. A good rule of thumb is that they should be only a couple of paragraphs and, at their biggest, if alternates/exceptions and non-functionals are included, only 2 or 3 pages. Creating a large number of fully documented Integration Scenarios prior to any development effort is a very bad idea, as it will reintroduce the problems inherent in waterfall development. However, elaborating the Integration Scenarios for a Release prior to development iterations/sprints or continuous flow development (with customer and team representation) is an excellent way to align collective understanding of scope and requirements.

Each Release should have no more than a couple of Integration Scenarios in it; if there are more, and they are not too interdependent, then they can be delivered in separate Releases, enabling earlier feedback cycles (even if they are internal Releases rather than operationally deployed customer releases).

Links to other elements

Integration Scenarios provide a mid-level requirement type: higher than User Stories but lower than Portfolio Requests, containing a lot more information than a Feature. As a result Integration Scenarios provide a unit of incremental business value which can be used for Release Planning, tying in milestones for programme/portfolio tracking and governance.

Because Integration Scenarios are constructed in a narrative form that describes a scenario, tried and tested scenario based testing methods can be applied for Integration testing.

Integration Scenarios also provide a useful starting point for understanding Solution Architecture, in the case of Product Families or systems of systems, and even System Architecture, in terms of indicating structure and identifying mechanisms. It is possible to align architectural structure with Integration Scenario structure, although there is a risk of inappropriately introducing a homomorphic force to the architecture.

By aligning Integration Scenarios with Integration Streams, Releases and Milestones we can focus the various views of Portfolio Management, Programme Management, Project Management, Architecture, Delivery, Quality, Deployment, Acceptance and Business Change onto Releases at the center of the H-Model, de-conflicting business management and technical delivery. By providing a context for requirements, planning and delivery, Integration Scenarios are a common currency for the entire IT organization to work with.

Integration Modeling

[Figure: example of an Integration Model]

Integration Scenarios can also be used as part of Integration Modeling (creating scope diagrams) to indicate which actors/systems a set of requirements interacts with, helping to clarify overall scope and identify potential gaps in scope.

If using Integration Modeling we recommend color coding Integration Scenarios based on priority (e.g. Musts, Shoulds etc.).

A Scoping Diagram is used to draw a picture of the primary requirements and users/interacting systems.

Read more: Scoping Diagram


What are your experiences of “Epics”, “Integration Scenarios”, “Scoping Use Cases” or other alternatives?

Scaling agile is the wrong approach

The heart of agile software development is feedback loops. Doing a bit of software, looking at it, at how we did it and then improving things as we do another bit of software. The “things” that can be improved can be quality, scope, usability, performance etc. etc. Perhaps most importantly, relationships and ways of working can also be improved.

This rapid feedback cycle ensures that product development is aligned to customer needs, that progress against guestimates is understood (so that plans can be refined) and that teams which include customers find problems early, and so can fix them, as quickly as possible. It’s a quicker, better, cheaper and happier way of working.

What about that needs to scale? Empirical feedback is just fine at any scale. Across 100s of teams or teams-of-teams and entire organisations short term feedback cycles work perfectly well.

The problem – Portfolio Management?

The problem is that there's other stuff that goes on too. Product Development in big or complex organisations is part of an investment portfolio and costs a lot of money. So people want to track it and check that the work they're paying for is delivering, and is aligned to strategy.

The answer… short feedback cycles. Feedback cycles let you measure what you’re spending and you can go and have a look at the output to check it’s delivering business value.

The problem – Programmes?

Sometimes software solutions aren’t simple products or mashups. Sometimes they’re big/complex combinations of things developed internally, externally or collaboratively with partners. In these cases we sometimes need to align teams towards common scope and architecture so that products can work together to deliver business value. Collections of Projects working together  = Programmes.

This means we need an understanding of the stories across products (I call these Integration Scenarios) and the common product family architecture (Solution Architecture).

The answer… short feedback cycles. Feedback cycles at product level work as described above. At Product Family level, contributing product releases can be brought together to form a Product Family Release – as often as possible to get feedback working. (Hint: 10 weeks is too long, especially if integration fails, because then 20 weeks is definitely too long.) Continuous, event-driven integration streams enable flexible and recursive short feedback cycles.

The problem – Support Costs?

If teams are chasing the latest shiny technology all the time then a large organisation can quickly end up in a support hell of mixed platforms, technologies, middlewares, deployment stacks, licensing and support costs, needing to recruit specialists in everything and duplicating operational environments.

This one can be partly solved by moving the external complexity to external clouds (where the diversity is embraced and normal) and also by (lightweight!) Enterprise Architecture. Lightweight Enterprise Architecture makes technology choices for the organisation's mainstream development (it shouldn't constrain research & innovation) to prevent this fragmentation in the first place.

This can also be partly solved by getting IT Operations stakeholders involved early and often as part of the holistic team (DevOps) along with the other stakeholder sets.

Of course it needs checking to make sure it’s still a good idea and that it’s aligned to strategy. Probably should do that frequently and empirically… short feedback cycles.

Don’t scale the practices, just be agile!

Often proponents of "Agile at scale" or "scaled Agile" will talk about scaling Scrum (or similar): using practices that work really well at team iteration level to run a business strategy, or a team of teams of teams at programme level. In my experience that's not a great idea; just because you've got a hammer doesn't mean every problem in an organisation is a nail.

Staged daily standups or scrum-of-scrums are just painful to behold. Execs don’t want a strategy backlog, nor do organisations want architecture to emerge with no regard for support costs, recruitment or cross-project dependencies.

If I was invited to a 2 day planning session I’d probably walk out of the door never to return. I’d rather watch animated musicals on repeat.

We don't need to scale agile, we just need to be agile… with a little "a". Short feedback cycles, a focus on people and relationships, with tight customer collaboration. Open, transparent and empirical collaboration between people – that's the right approach. One that works at every level of an organisation.

Recursive feedback cycles enable scaled agile.

[Figure: feedback cycles scale to all levels of an enterprise]

#Devops – A sticking plaster for a bigger problem

Why not StratDev, BusOps, etc.?

Holistic Communication

The term "DevOps" refers to tight integration between the Software Development and Operations parts of an organization. As I mentioned in my article on Conway's Law, the reinforcement of there being two separate parts, "Dev" and "Ops", causes a separation in the way we think about these problems and then in how we interact within an organisation. If the industry truly believed in DevOps then "dev" and "ops" would no longer exist. I realise that some people intend "DevOps" to be a verb, but it sounds like a noun.

Of course the idea behind DevOps, bringing together the stakeholders involved in creating, hosting, running and maintaining a product, is a great one – focussing on these things can make it easier to deploy products and get user feedback. However, we might just end up building the wrong products quickly. We must make sure that the things we build are aligned to strategy and deliver business value. To do that we need a holistic approach that involves all stakeholders (not just the dev and ops folks).

Operations stakeholders should be involved in the Portfolio Build/Selection practice to ensure that the impact of new product development or maintenance decisions on current Operational positions are understood. Significant changes to current Operational positions are raised as Portfolio Requests so they can be properly costed and scoped.

Work that is approached holistically must, by definition, address the needs of all stakeholders from every part of the business. If a divide exists between parts of the business then the dysfunctional relationship needs addressing, rather than introducing process and tooling. Processes and tools are helpful for agreeing a common approach and automating ways of working, but not for solving dysfunctional relationships.

[Figure: HSE promotes regular builds and releases in either iterative/agile or continuous flow models]

Operations stakeholders must be involved early and often in software product delivery, as a product is likely to be delivered frequently, or even continuously, and so will need transitioning activities performed repeatedly.

So, I like the idea of joining up dev and ops stakeholders and work. But I only like it in the context of also joining up business strategy, requirements, architecture and quality with dev and ops – all flowing business value to customers. On its own, DevOps isn't enough and, worse, could just be a management fad.

Launch: Holistic Software Engineering

How do we join up business strategy to agile development? Is program management relevant? Where do project managers fit in? What about architecture?

Holistic Software Engineering (HSE) answers all of these questions – for free.

Agile and continuous flow are great for small teams, or a small number of inter-related small teams, working on exploratory or maintenance work. But what if we're spending 100s of millions on an IT strategy of inter-related products that need to work together to deliver business value? What is business value anyway?

To answer these questions (and more) my friend Steve Handy and I have distilled our collective 30+ years of software experience into a single, cohesive model of software development. We've developed the H-Model, which moves on from the V-Model and its siblings by combining:

…all elegantly combined and de-conflicted by release planning.

We’ve not invented very much, we’ve simply put a lot of good ideas from others together into a cohesive framework. We’ve drawn it all as a big picture and developed a website that shows how to get value from all of this stuff. Everything is clickable, everything has content.

The best bit: it’s free! There’s no paywall, there’s no private “full” version, you can just use it or not as you like.

We don't believe in process zealotry, or in putting academic concerns above clarity and usefulness. HSE is indicative, not prescriptive. You're already doing it; if you use the big picture to draw red blobs on the bits that aren't working well, or are missing, in your organisation then you can use the model to make tangible improvements immediately.

Using HSE doesn’t replace any of your existing processes, it provides the glue that joins them all up together in one simple, elegant and cohesive model.

Holistic Software Engineering
is for large/complex software organizations
who need to understand modern software engineering in a business context;
our solution is to present a big picture of the software business, backed up with practical detail, avoiding academic or heavyweight process documentation,
that covers people issues, business strategy, portfolio, programme and project management as well as architecture, technical delivery and integration,
unlike simple small-team processes such as RUP, Scrum etc.
[Figure: the big picture of software engineering]

Holistic Software Engineering

And if it’s too big, we’ve got a small picture, which is essentially “normal” agile development.

Please share HSE with your friends and colleagues.

No more Project Managers, bring in the Movie Producers

I was reading some course material recently that was trying to teach people something about software development, and it was using the same old tired "ATM machine" example. I've worked with hundreds of projects, many in the finance sector, and none of them are anything like an ATM machine. One of the reasons that example is soooo tired is that it's describing commoditised software development; it's something I'd expect to buy off a shelf or go to a specialised vendor for. It's not something I'd put my team of uber |33t haxorz on.

Developing software products is a balance between a creative and scientific pursuit, so it’s a little hard to describe, especially when that software can range from a tiny smartphone app, to a website, to router firmware, to an enterprise hr system (urgh!), to a big data processing system of systems etc. You get the gist.

The things these types of system have in common with each other is how different they are. And yet the traditional approach to managing these diverse kinds of work has been classical project management with a one-size-fits-all process (I'm looking at Prince2, RUP, Iterative, Agile etc.) or, worse, hiring a large company full of body-shop project managers. For me this is one of the root causes behind large scale IT disasters.

I once had a discussion with the leader of a PMO in a large organisation of thousands of people about the nature of Project Management for software, and he assured me that "someone who knows how to manage a bit of work can manage software, it's just a different type of work" and that indeed I should "stop thinking of software as special". I've seen this attitude resonate in many organisations, and to me it is best summed up by the same Head of PMO's statement: "from a project management point of view building software is the same as building a bridge".

Now then. I know a little about software development, but not that much about bridge building (if you exclude my illustrious Lego engineer career when I was 7). If I managed the building of a bridge next door to one built by someone with a track record of bridge building, whose bridge would you drive your family on? Not mine if you've got any sense.

There are so many problems in the software development industry caused by people not understanding it and applying bad metaphors. When people who do understand it start to get the point across (e.g. the early agile movement) they often get misunderstood by the old guard who dress up their normal ways of working in new language.

Many of our “best” metaphors come from manufacturing lines where the same thing is made over and over again. That’s nothing like software.

To me a software project is more similar to making a movie than anything else:

  • It's a unique one-off product (or a remake of a previous version) with new parts working together in new ways
  • Once made we’ll often duplicate and digitally distribute
  • It can cost a lot of money because it can be very complex and needs lots of specialist skills
  • You need a high level plan for how you’re going to make it
  • There’s no guarantee on return
  • What's written down originally bears little resemblance to the finished product
  • We know when it’s good or bad but it’s hard to objectively quantify
  • There’s lots of different types that need different collections of specialist skills
  • Both involve people and time and so are subject to change, they have to be adaptive
  • You can tell when the people making it had fun
  • It’s not feasible that any team member can fit in any role
  • There’s almost always going to be some quality problems
  • You wouldn’t get a movie maker to build your bridge
  • You wouldn’t get a bridge builder to make your movie
  • You don’t make a movie faster by telling the actors to act faster

So, I don't want Project Managers who know how to work a Gantt chart. I want movie producers who know how to work with a team holistically to get the best out of them, both technically and artistically.

 

Agile contracts? What about Weighted Fixed Price?

Agile contracts are tricky. In fact software development contracts are tricky. Software development is a non-deterministic creative endeavour; you'd be better off trying to nail down the acceptance criteria for one of Michelangelo's sculptures than for a typical software project. Software is complex.

A few years ago I was involved in putting together a contract model called “Weighted Fixed Price” and recently it occurred to me that I’ve never blogged about it, so I’ve dragged myself away from the source to do so.

What’s wrong with the normal models?

So, Fixed Price is bad because it only works when you can really accurately define the requirements and acceptance criteria and the potential risk exposure is tiny. Otherwise you get schedule and cost over-runs, formal change control etc. etc. Of course suppliers always build their risk buffer into the fixed price, so it tends to cost a little more than honest T&M would. Software work is rarely this predictable, which is why there are so many high profile failures.

Time and Materials is bad because it's hard to manage across commercial boundaries and customers can feel that they're paying for nothing, endlessly. Even defining waypoints is difficult due to changes in requirements over time. Unscrupulous suppliers could just sit back taking the money, delivering as little as possible to avoid gross negligence but nothing more. Scrupulous suppliers could be working hard but be hit by the many and varied risks arising from such complexity, and yet be perceived by their customers as unscrupulous.

Of course there’s variations like Capped T&M, Phased Fixed, Phased Capped T&M etc. but they all start getting a bit complicated and hard to manage, not to mention open to gaming.

Then there’s models like pay-per-point and other “incentive” schemes that raise over-complication, missing the point and gamability to an art form. Just say “no”.

Competition

The real incentive to suppliers to do a good job is follow on work. That means that as a customer you need your suppliers to be in a competitive market, being locked in to individual suppliers leads to captive market behaviours.

Weighted Fixed Price?

In an attempt to put something in the middle between Fixed Price for low risk exposure and T&M/internal development for high exposure, I put together "Weighted Fixed Price". The idea is to basically use a Fixed Price model, but with suppliers weighted by previous performance, affecting whether they get the business or not.

So if supplier 1 delivered 10% late on Project A, then when we're looking to place Project B for 500k and comparing each supplier's estimates, we'll add 10% to supplier 1's estimate. That means, all other things being equal, we'll choose the suppliers who typically come in closest to, or even under, their previous estimates. Weighting is transparent to each supplier but not shared across suppliers.
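As a minimal sketch of the arithmetic (the supplier names, history and bids below are hypothetical, and a real implementation would accumulate weighting per supplier across many projects):

```python
# Hypothetical Weighted Fixed Price comparison. A supplier's historical
# overrun (% over/under their past estimates) inflates or deflates
# their headline bid for the purpose of choosing a winner.

def weighted_estimate(bid: float, past_overrun_pct: float) -> float:
    """Adjust a bid by the supplier's historical overrun percentage."""
    return bid * (1 + past_overrun_pct / 100)

# Average % by which each supplier's past deliveries exceeded estimates
# (negative = typically came in under estimate).
history = {"supplier_1": 10.0, "supplier_2": -2.0}

# Headline bids for Project B.
bids = {"supplier_1": 480_000, "supplier_2": 495_000}

for supplier, bid in bids.items():
    print(f"{supplier}: weighted estimate {weighted_estimate(bid, history[supplier]):,.0f}")

# supplier_1: 480,000 * 1.10 = 528,000
# supplier_2: 495,000 * 0.98 = 485,100 -> wins despite the higher headline bid
```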

This provides the following pressures:

  • Suppliers don’t want to underestimate, they’ll come in over their initial estimates and adversely affect their weighting and therefore their chance for follow on work
  • Suppliers don’t want to overestimate to artificially lower their weighting or build in risk buffers, they won’t get the work because they’ll cost too much
  • Suppliers don’t want to take on complex, high risk work because they can’t predict the effect on their weighting (more below)

Out-sourcing doesn’t transfer risks to suppliers, because the customer is still the one needing the business benefit and paying for it, especially when things go wrong. The Weighted Fixed Price model is intended to genuinely transfer the complexity risk to suppliers, if they’re willing to take it.

What happens when the supplier finishes early?

They deliver early, watch their weighting (and with it their chance of winning future work) improve massively, and enjoy their fixed price money, meaning they've increased their profit margin on this job.

What happens when they deliver late?

They deliver late, their weighting increases and they're less likely to win against other suppliers next time. However, if they can predict early in the project that they're going to deliver late, they can renegotiate with the customer in light of what everyone's learned about the work.

What happens when suppliers don’t want to bid for work because it’s too risky/complex?

Excellent, this means that they're honestly telling us that the work is too risky to predict. This means we shouldn't be out-sourcing it in the first place; time to stand up an internal team, or supplement with some time-hire contracts if we need extra capacity.

Conclusion

The Weighted Fixed Price model is an evidence-based contract model that can be used in competitive supplier markets for reasonably well understood work with low risk exposure. It incentivises suppliers to estimate accurately and deliver to their estimates, as that affects their opportunities for follow on work. It incentivises customers not to out-source overly complex, risky work.

It’s not perfect because we’re still trying to contractually define how a sculpture should make us feel but it’s something you can add to the mix of contract models you use.

How to make real progress in a software project

I’ve not been blogging much this year because I’ve been crazy busy writing software (more on that soon I hope). As you might expect I’ve changed my ways of working more often than process improvement consultants change the name of the process they’re selling but one key thing always seems to drop out whether I’m working on my own, with a team, a team of teams or a globally distributed team.

Do the top thing on the list, whatever it is.

Don’t do anything else until it’s done.

Or in process speak: "Reduce the WIP and clarify the Definition of Done".

Too often I fall into the trap of doing the most interesting thing from the list, which normally means the most complex. And far too often I get carried away with the next interesting thing before finishing something. Both of these lead me to have far too many things in various states of progress all interfering with each other and me with no idea of how well I’m doing. And that’s assuming I actually know what “finished” means.

Software is complex enough without us making it harder with processes, remember the KISS principle?

Reduce waste: Visualise the value stream

A big thing in software process improvement (urgh) is reducing waste.  A great idea, but how do you identify waste?

One powerful method that I use is value stream visualisation. There’s a number of ways of getting information about the value stream ranging from physically walking the chain to simply asking people.

Sounds simple, doesn't it? But one of the best sources of information about wasteful processes and ways of working in a team is simply asking the team what they feel they shouldn't be spending time on. By freeing up people to actually do their job they can reduce waste and improve productivity. One method I use is to ask each team member to identify a list of n items that they have to spend time on that they think are either entirely or partially wasteful: basically a list of their impediments. I then anonymize the list, look for commonality, and the top few common items are a reasonable target for improvement.

Alternatively actually walking the path of a piece of work through an organisation can really open the eyes to the sheer number of people and unnecessary waiting involved in some wasteful processes…

Anyway, having got some data I find it useful to visualise it in terms of waiting time and doing time. In Lean-speak, cycle time is the time it takes to actually do something; lead time is the time from the request being made to it finally being delivered. In many tasks the lead time far outstrips the cycle time, and visualising this can make it very clear.

[Figure: low lead time]

For example, a task with a lead time of 5 days might actually be made up of 2 days of waiting time before it gets picked up and the 3-day cycle time starts.

[Figure: large lead time]

If you look at all of the tasks in your value chain you might find that many of them look more like this, with very large, wasteful waiting times. There are many reasons why there might be a large waiting time: typically the team doing the tasks is over-subscribed, or a team might be protecting its SLAs, KPIs or other TLAs. These problems can have significant effects when looking at the system as a whole.

Even more interesting is looking at how these tasks might fit together in a value stream. If we imagine a business function that involves a request moving through a bunch of tasks across a number of teams with different waiting and doing times, then a simple sequence of tasks, with a little buffering between them from the Project Manager to allow for schedule variance, might naturally end up looking like this:

[Figure: sequential value chain]

Here we have 26 days of actual work spread over almost 60 days (12 weeks) of elapsed time, so well over half of the elapsed time is waste. Even if the person planning the work is willing to accept some risk and try to optimise their workflow a bit, without improving the waste inherent in the tasks involved there's still a lot of waste in the system.

[Figure: overlapped value chain]

By visualising the value stream in this way we can see straight away (apologies to red/green colour blind folks) that there's a lot of red, a lot of waste. In many cases planners aren't willing to accept the risks inherent in overlapping activities as shown here, or aren't even aware that they can, leading to the more sequential path shown above. The result is a minimum time for a request to get done, based on the impedance of the current value chain: in this case 38 days before we even start thinking about actually doing any work.
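To make the arithmetic concrete, here's a minimal sketch of the sequential case; the task durations are illustrative, chosen only to roughly match the numbers above:

```python
# Each task in the chain is (waiting_days, doing_days).
tasks = [
    (2, 3),
    (5, 4),
    (10, 6),
    (8, 5),
    (9, 8),
]

waiting = sum(w for w, _ in tasks)   # total time requests sit in queues
doing = sum(d for _, d in tasks)     # total cycle time (actual work)
lead_time = waiting + doing          # fully sequential, no overlapping

print(f"Doing: {doing} days, waiting: {waiting} days")
print(f"Sequential lead time: {lead_time} days")
print(f"Waste: {waiting / lead_time:.0%}")
# Doing: 26 days, waiting: 34 days
# Sequential lead time: 60 days
# Waste: 57%
```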

Surely that’s not right.

Intentional vs. Emergent Architecture

I've been thinking about architecture a lot recently, but one thing that I often discuss and have never blogged about for some odd reason is intentional vs. emergent software architecture. Some old fashioned software methods such as waterfall led people into doing a lot of up-front architecture work: they analysed and designed away for ages, producing huge reams of UML and documentation that no one could ever squeeze into their heads even if they had the patience to read it. This is an example of an intentional architecture: the architecture was intended, planned and deliberate.

Lots of folks hated that way of doing things, as it meant people were writing docs and drawing diagrams instead of making working software, not to mention the all too frequent tendency to over-engineer architecture past the point of usefulness. This led some people to say that we're better off not trying to do any architecture up front and just letting it emerge from the work we do developing small, customer-focused requirements (like user stories or similar). Ok, so there'd be some rework along the way as we reach a deeper understanding of the system and refactor our emergent architecture, but it'd still be better than the old way of doing large upfront architecture.

So, there seem to be two opposed viewpoints: intentional architecture is best, emergent architecture is best.

For me, neither is true. I've seen some really terrible examples of badly over-engineered architectures that crippled a project, and projects that never got past their over-reaching architectural analysis. Equally I've seen products with emergent architecture that had to be entirely re-architected as time went by because their emergent architecture was so wrong it was comical (imagine a software management tool that only supports a single project, with that concept deeply embedded in the architecture).

There’s a scale with intentional architecture on one side and emergent architecture on the other.

[Figure: factors pushing towards intentional or emergent architecture]

Various factors might push us one way or another… The second factor I listed on the right is interesting: if you've got a well known technology and problem domain you can get away with emergent architecture, but similarly if you have a totally unknown technology and problem domain it can be very effective to evolve towards a solution and architecture rather than crystal-ball gaze by creating a (probably wrong) intentional architecture.

Which rather sums up the point I'm trying to make. The purpose of architecture is to shape the solution and address technical risks. Solving the big problems and creating common ways of doing something (lightweight architectural mechanisms) are all good architectural goals, but only if we're sure of the solution. If we're not, we're better off evolving an emergent architecture, at least initially.

I think that the extremes at either end of the scale, as with most extremes, are a bad idea at best and impossible at worst. If you gather a group of people together, give them a backlog and tell them to create a web app, but don't allow them to think or communicate about the architecture up front, then you'll find they all start dividing the problem in different ways and writing in different languages for different server and client frameworks. Hardly a good idea. At the other end of the scale, believing that we can foresee all of the technical issues and all of the technology and requirements changes that might happen is about as likely as a 12-month project plan Gantt chart being correct after a few levels of cumulative error margin have been combined.

For more on architecture see:

Quality Confidence: a lead measure for software quality

For a long time I’ve wanted to be able to express the quality of my current software release in a simple intuitive way. I don’t want a page full of graphs and charts I just want a simple visualisation that works at every level of requirements to verification (up and down a decomposition/recomposition stack if you’ve got such a beasty). My answer to that is Quality Confidence.

What it is

[Figure: RAG gauge]

QC combines a number of standard questions in a single simple measure of the current quality of the release. So instead of going to each team/test manager/project manager, asking them the same questions and trying to balance the answers in my head, I can get a simple measure that I can use to quantitatively determine whether my teams are meeting our required "level of done". The QC measure combines:

  • how much test coverage have we got?
  • what’s the current pass rate like?
  • how stable are the test results?

We can represent QC as a single value or show it changing over time as shown below.

How to calculate it

Quality Confidence is 100% if all of the in scope requirements have got test coverage and all of those tests have passed for the last few test runs.

To calculate QC we track the requirements in a project and when they're planned for/delivered. This limits QC to take into account only delivered requirements; there's no point in saying the current release only has 10% quality because it only passes some of the tests, when the rest of the requirements haven't been delivered yet.

We also track all of the tests related to each requirement, and their results for each test run. We need to assert when a requirement has "enough coverage" so we know whether to include it or not. The reason is that if a requirement has been delivered but doesn't yet have enough test coverage, then even if all of its testing has passed and been stable I don't want it adding to the 100% of potential quality confidence: the assertion that coverage isn't enough means we aren't confident in the quality of that requirement.

So 100% quality for a single requirement that's in scope for the current release is when all the tests for that requirement have been run and passed (not just this time, but for the last few runs) and the requirement has enough coverage. For multiple requirements we simply average (or maybe weighted-average) the results across the requirements set.

If we don't run all the tests during each test run then we can interpolate the quality for each requirement, but I suggest decreasing the confidence for a requirement (maybe by 0.8) for each missing run; after all, just because a test passed previously doesn't mean it's going to pass now. We also decrease the influence of each test run on the QC of a requirement based on its age, so that if a test failed 5 runs ago it has less impact on the QC than the most recent test run. Past 5 or so test runs (depending on test cycle frequency) we decrease the influence to zero. More info on calculation here.
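As a minimal sketch of the calculation described above (the 5-run window, fading weights and 0.8 missing-run penalty follow the text; the data structures and example numbers are my own assumptions, not a definitive implementation):

```python
from dataclasses import dataclass

# Influence of each of the last 5 test runs on a requirement's QC,
# most recent first, fading to zero past 5 runs as suggested above.
RUN_WEIGHTS = [1.0, 0.8, 0.6, 0.4, 0.2]
MISSING_RUN_PENALTY = 0.8  # confidence multiplier per skipped run

@dataclass
class Requirement:
    delivered: bool        # planned/delivered in the current release?
    enough_coverage: bool  # asserted by the team, not computed
    results: list          # last few runs, most recent first:
                           # True = pass, False = fail, None = not run

def requirement_qc(req: Requirement) -> float:
    if not req.enough_coverage:
        return 0.0  # no confidence without enough coverage
    confidence, score, total_weight = 1.0, 0.0, 0.0
    for weight, result in zip(RUN_WEIGHTS, req.results):
        if result is None:
            confidence *= MISSING_RUN_PENALTY  # interpolating, so less sure
        else:
            score += weight * (1.0 if result else 0.0)
            total_weight += weight
    if total_weight == 0.0:
        return 0.0
    return confidence * score / total_weight

def release_qc(requirements: list) -> float:
    """Average QC across the requirements delivered in the current release."""
    in_scope = [r for r in requirements if r.delivered]
    if not in_scope:
        return 0.0
    return sum(requirement_qc(r) for r in in_scope) / len(in_scope)

reqs = [
    Requirement(True, True, [True, True, True, True, True]),   # stable pass
    Requirement(True, True, [True, None, False, True, True]),  # flaky, one gap
    Requirement(True, False, [True, True]),                    # not enough coverage
    Requirement(False, False, []),                             # not delivered yet
]
print(f"Quality Confidence: {release_qc(reqs):.0%}")  # ~53%
```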

So… how much coverage is enough?

Enough coverage for a requirement is an interesting question… For some it's when they've covered enough lines of code; for others the cyclomatic complexity has an impact, or the number of paths through the requirements/scenarios/stories/use cases etc. For me, a requirement has enough test coverage when we feel we've covered the quality risks. I focus my automated testing on basic and normal flows and my human testing on the fringe cases. Either way, you need to decide when enough is enough for your project.

To help calibrate this you can correlate the QC with the lag measure of escaped defects.

Measurement driven behaviour

[Figure: Quality Confidence over time]

The QC measure is quite intuitive and tends to be congruent with people's gut feel of how the project/release is going, especially when shown over time. However, there's simply no substitute for going and talking to people; QC is a useful indicator, not something to use in place of real communication and interaction.

The measurement driven behaviour for QC is interesting as you can only calculate it if you track requirements (of some form) related to tests and their results. You can push it up by adding more tests and testing more often :) Or by asserting you have enough coverage when you don’t :( However correlation to escaped defects would highlight that false assertion.

If you’ve got a requirements stack ranging from some form of high level requirements with acceptance tests to low level stories and system tests you can implement the QC measure at each level and even use it as a quality gate prior to allowing entry to a team of teams release train.

Unfortunately, because me and my mate Ray came up with this in a pub, there aren't any tools (other than an Excel spreadsheet) that automatically calculate QC yet. If you're a tool writer and would like to, then please do: I'll send you the maths!
