Mike MacDonagh's Blog

Somewhere in the overlap between software development, process improvement and psychology

Tag Archives: scrum

Collective nouns for software development roles

An embarrassment of Project Managers

An impasse of Architects

A confusion of Business Analysts

A mob of Developers

A silo of Testers

A brethren of Scrum Coaches

A waste of Lean Consultants

A conspiracy of Process Improvement Consultants

People are important, not roles! But recently I saw a weekly team meeting for a project that had 12 project managers! I thought it was a joke at first.

Can you think of any others?


Launch: Holistic Software Engineering

How do we join up business strategy to agile development? Is program management relevant? Where do project managers fit in? What about architecture?

Holistic Software Engineering (HSE) answers all of these questions – for free.

Agile and continuous flow are great for small teams, or a small number of inter-related small teams, working on exploratory or maintenance work. But what if we’re spending 100s of millions on an IT strategy of inter-related products that need to work together to deliver business value? What is business value anyway?

To answer these questions (and more) my friend Steve Handy and I have distilled our collective 30+ years of software experience into a single, cohesive model of software development. We’ve developed the H-model, which moves on from the V-model and its siblings by combining:

…all elegantly combined and de-conflicted by release planning.

We’ve not invented very much, we’ve simply put a lot of good ideas from others together into a cohesive framework. We’ve drawn it all as a big picture and developed a website that shows how to get value from all of this stuff. Everything is clickable, everything has content.

The best bit: it’s free! There’s no paywall, there’s no private “full” version, you can just use it or not as you like.

We don’t believe in process zealotry, or putting academic concerns above clarity and usefulness. HSE is indicative, not prescriptive. You’re already doing it and if you use the big picture to draw red blobs on the bits that aren’t working well, or missing, in your organisation then you can use the model to make tangible improvements – immediately.

Using HSE doesn’t replace any of your existing processes, it provides the glue that joins them all up together in one simple, elegant and cohesive model.

Holistic Software Engineering:

  • is for large/complex software organisations
  • who need to understand modern software engineering in a business context
  • our solution is to present a big picture of the software business, backed up with practical detail, avoiding academic or heavyweight process documentation
  • that covers people issues, business strategy, portfolio, programme and project management as well as architecture, technical delivery and integration
  • unlike simple, small-team processes such as RUP, Scrum etc.
The big picture of software engineering

Holistic Software Engineering

And if it’s too big, we’ve got a small picture, which is essentially “normal” agile development.

Please share HSE with your friends and colleagues.

Intentional vs. Emergent Architecture

I’ve been thinking about architecture a lot recently, but one thing that I often discuss yet have never blogged about, for some odd reason, is intentional vs. emergent software architecture. Some old-fashioned software methods, such as waterfall, led people into doing a lot of up-front architecture work: they analysed and designed away for ages, producing huge reams of UML and documentation that no one could ever squeeze into their heads even if they had the patience to read it. This is an example of an intentional architecture – the architecture was intended, planned and deliberate.

Lots of folks hated that way of doing things, as it meant people were writing docs and drawing diagrams instead of making working software – not to mention an all too frequent tendency to over-engineer architecture past the point of usefulness. This led some people to say that we’re better off not trying to do any architecture at all and just letting it emerge from the work we do developing small, customer-focused requirements (like user stories or similar). Ok, so there’d be some rework along the way as we reach a deeper understanding of the system and refactor our emergent architecture, but it’d still be better than the old way of doing large upfront architecture.

So, there seem to be two opposed viewpoints: intentional architecture is best, emergent architecture is best.

For me, neither is true. I’ve seen some really terrible examples of badly over-engineered architectures that crippled a project, and projects that never got past their over-reaching architectural analysis. Equally, I’ve seen products with emergent architecture that had to be entirely re-architected as time went by because their emergent architecture was so wrong it was comical (imagine a software management tool that only supports a single project, with that concept deeply embedded in the architecture).

There’s a scale with intentional architecture on one side and emergent architecture on the other.

Various factors might push us one way or another. The second factor I listed on the right is interesting: if you’ve got a well-known technology and problem domain you can get away with emergent architecture, but similarly, if you have a totally unknown technology and problem domain it can be very effective to evolve towards a solution and architecture rather than crystal-ball gaze by creating a (probably wrong) intentional architecture.

Which rather sums up the point I’m trying to make. The purpose of architecture is to shape the solution and address technical risks. Solving the big problems, creating common ways of doing something (lightweight architectural mechanisms) are all good architectural goals but only if we’re sure of the solution. If we’re not we’re better off evolving an emergent architecture, at least initially.

I think that the extremes at either end of the scale, as with most extremes, are a bad idea at best and impossible at worst. If you gather a group of people together and tell them to create a web app given a backlog but they’re not allowed to think or communicate about the architecture up front then you’ll find they all start dividing the problem in different ways and writing in different languages for different server and client frameworks. Hardly a good idea. Of course on the other end of the scale, believing that we can foresee all of the technical issues, all of the technology and requirements changes that might happen is as likely as a 12 month project plan Gantt chart being correct after a few levels of cumulative error margin have been combined.


A cross-project Release Burnup?

Huh?

Points are abstract and relative, comparing them across projects is like comparing apples and elephants.

Releases only make sense within their own project, let alone across project timelines, so why would I want to look at a cross-project burnup? Such a thing is foolishness, surely…

Well maybe not, it might be useful as an agile at scale measure, as a way of looking at the work churn in a department or other high level unit in an organisation. I’ve been pondering if there’s a way of aggregating burnup information and point burnage across teams (with distinct disjoint timelines) and thought that there might be a way.

Ideally I want to be able to show the amount of work planned within an organisational group and progress towards that scope, showing when scope goes up and down (does that ever happen?). Then I want to show progress towards that scope overall; the angle of the progress line could give me an organisational velocity – perhaps I could even add an ideal velocity that would indicate what perfect robots would do if real life never intervened (although that could be dangerous).

Aggregate burnup

First I need a way of normalising points and understanding what 100% of scope means when it can incorporate many projects at different points in their lifecycles. Perhaps a way of doing it is to consider everything in terms of percentages; after all, that’s an easy thing for people to consume. Defining 100% of a scope across various contributing projects is tricky, since it’ll change and be dependent on their releases, continuous flows and changing scope.

A simplistic approach is to use a moving baseline: perhaps we can define 100% of a project’s scope as whatever it thinks it will deliver within the time window being considered (the scope of the x-axis) at 15% of its timeline (or whatever).

In the example above this tells me that work is consistently overplanned, not just in terms of actual velocity but in terms of idealised capacity as well – the demand is higher than the supply. I think this could be useful for “agiley” portfolio management.

Perhaps I could start establishing a budget cycle velocity, and start limiting work planned based on empirical evidence. Ok, so no project is the same and points aren’t comparable but the Law of Large Numbers is on my side.
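To make the moving-baseline idea concrete, here’s a minimal sketch in Python. The data shapes (one (scope, done) point-count pair per reporting period, per project) and the simple averaging across projects are my assumptions; only the “baseline at 15% of the timeline” rule comes from the text above.

```python
# Hedged sketch of an aggregate, cross-project burnup.
# Structures and names here are illustrative, not from any real tool.

BASELINE_AT = 0.15  # snapshot "100% scope" roughly 15% into a project's timeline


def normalise(timeline):
    """timeline: list of (scope_points, done_points), one per reporting period.

    Returns (scope%, done%) pairs relative to the baseline scope, so that
    points from different projects become comparable percentages."""
    baseline = timeline[int(len(timeline) * BASELINE_AT)][0] or 1
    return [(scope * 100.0 / baseline, done * 100.0 / baseline)
            for scope, done in timeline]


def aggregate_burnup(projects):
    """Average normalised scope and progress across projects per period,
    tolerating projects whose timelines have different lengths."""
    series = [normalise(timeline) for timeline in projects]
    longest = max(len(s) for s in series)
    out = []
    for period in range(longest):
        points = [s[period] for s in series if period < len(s)]
        out.append((sum(p[0] for p in points) / len(points),   # scope %
                    sum(p[1] for p in points) / len(points)))  # done %
    return out
```

A scope line climbing above 100% over time would be the over-planning signal: demand rising past the baselined supply.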

What do you think? Is this bonkers?

What is enough agile architecture?

I wasn’t planning on writing a “how long is a piece of string?” post but it’s a question I often get, and something that I’ve played with a bit. The point of architecture is to address the aesthetics of a system, to consider its reusable bits or common forms, the overall shape and nature, the technology it’ll use, the distribution pattern and how it will meet its functional and non-functional requirements.

Of course, in an agile, or indeed post-agile, world we don’t want to spend forever documenting and designing stuff in analysis paralysis. I’ve worked on projects where I had to draw every class in detail in a formal UML tool before I could go and code it. I’m pretty sure this halved my development speed without adding any real value. But I’ve also worked on projects where we’ve drawn some UML on a whiteboard while discussing what we were going to do and how we were going to do it – and that was really valuable.

This makes an architect’s job difficult. Of course, it’s always been hard:

The ideal architect should be a man of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.

Vitruvius ~ 25 BCE

But as well as being a bit of a Renaissance man, an architect also needs to know when enough is enough. I’ve found that I’ve done enough architecture with the team (not to the team) when we collectively feel that we understand the proposed solution: how it’s going to hang together, how it will address the risky bits and meet its requirements.
To do that, we tend to draw a few diagrams and write some words.

First, an architectural profile that gives us an idea of where the complexity is and therefore where the technical and quality risks are.

Second an overview sketch that shows the overall structure, maybe technology, target deployment platforms and major bits.

Third a set of lightweight mechanisms that cover the common ways of doing things or address particularly knotty problems and address some of those risks. These tend to describe the architecture (or mechanism flows) by example rather than aiming for total coverage.

I might add some other stuff to this if the project calls for it, like maybe a data model, a GUI mockup but generally that’s it 🙂

This post is an extract from the Agile Architecture content from Holistic Software Engineering

Scaled Agility: Architectural profiling

Architectural Profiling is borrowed from Holistic Software Engineering

When it might be appropriate

  • In situations where a lightweight approach to intentional architecture is required
  • In situations where high design formality isn’t required
  • When a simple approach to architecture analysis is required at a team of teams level before more analysis in contributing teams
  • Where a team wants to cut wasteful requirements and architectural “analysis paralysis” without ignoring technical risks
  • System of systems development

What is it?

When I look at a potential (or existing) system I think of its complexity in terms of a few dimensions. They’re not set in stone, and I might add or remove dimensions as the mood, and context, takes me. Doing this early on gives me a feel for the shape of a project’s requirements, architecture and solution. In fact it also means I can shortcut writing a whole bunch of requirements, acceptance tests, designs and even code and tests.

Here’s an example of thinking about a simple-ish app that does some fairly hefty data processing, needs to do it reasonably quickly (but not excessively so) and has got to do some pretty visualisation stuff. Other than that it’s reasonably straightforward.

You might notice that the x-axis is pretty much FURPS with a couple of extras bolted on (I’ll come back to the carefully avoided first dimension in a minute).

The y-axis ranges from no complexity to lots of complexity but is deliberately not labelled (normally one of my pet hates) so we can focus on the relative complexity of these dimensions of the requirements, quality,  architecture and therefore solution.

The height of one of these bars helps me shape the architectural approach I’ll take to the project, and which bits and bobs I can reuse from my super bag of reuse.
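As a toy sketch of what a profile can boil down to in practice: a set of relative complexity scores per dimension, rendered as bars. The dimensions below are roughly FURPS plus extras, and every score is invented for illustration (matching the deliberately unlabelled y-axis, they’re unitless and only meaningful relative to each other).

```python
# Toy architectural profile: unitless relative complexity per dimension.
# Dimensions and scores are made up for illustration only.
profile = {
    "Functionality":  3,
    "Usability":      4,  # "pretty visualisation stuff"
    "Reliability":    2,
    "Performance":    3,  # reasonably quick, not excessive
    "Supportability": 1,
    "Data":           5,  # hefty data processing
}


def render_profile(profile, width=20):
    """Render each dimension as a bar scaled against the tallest one."""
    top = max(profile.values())
    return [f"{dim:<15}{'#' * int(score * width / top)}"
            for dim, score in profile.items()]


print("\n".join(render_profile(profile)))
```

The tallest bars are where the architectural effort, and reuse from the “super bag of reuse”, should concentrate.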


Lightweight architectural mechanisms – specification by example

In my previous post I talked about using a sketch to describe architectural structure, but the other part of a useful architectural description is its dynamics, best expressed as architectural mechanisms.

Mechanisms are little snippets of the architecture that address an important problem, provide a common way of doing something or are good examples of how the architecture hangs together.

Mechanisms exist within the context of an architecture, which provides overall structure for the mechanisms. I tend to use a simple architectural overview sketch to do that and then further refine the architecture, if necessary, in terms of mechanisms according to the architectural profile and (during development) the needs of my team.

Sometimes during a project the team will comment on the need to have a common way of doing something, or that they’ve uncovered something tricky that we need to consider as a more significant part of the system than our early analysis showed. In these cases it’s time to create a mechanism.

Mechanisms are great, but you don’t want too many of them, or to document and detail them too much – just enough to communicate the architecture and support maintenance efforts. Indeed, writing too much actually makes it harder to communicate.

Mechanisms are best expressed in terms of their structure and behaviour. I tend to use a simple class diagram for the first and whatever seems appropriate for the second. This might be a UML sequence diagram, but I don’t really like those; instead I might use a good old-fashioned activity diagram, or a flowchart with GUI mockups in the nodes. Either way, I recommend limiting the documentation and description: just because one flow is worth writing down to explain it, the others might not be. In this way I do architectural specification by example. Once I’ve written enough about a mechanism that the rest can be inferred, I stop.

The words aren’t important in this example but you can see that I try to fit the description into a fairly small concise area – that helps me focus on just the really important stuff. In the top left there’s a list titled “Appropriate for stories like:” which is an indicative list of a few things to which the mechanism is appropriate.  Next to it is some blurb that says what it’s for and the main scenarios it covers, so in the case of persistency it’s the normal query, create, edit, save & delete. There might be some notes around important constraints or whatever else is important.

I’ll then describe each important scenario in terms of its behaviour in whatever language or visual form makes sense. Sometimes this is a photo of a whiteboard 🙂 Sometimes it’s text, sometimes it’s a combination of those things.

The flip side

Just as stories have a flip side containing their acceptance tests, I like to put acceptance tests on the flip side of my mechanisms. Although many are easy to frame in terms of customer acceptance tests (e.g. a Search mechanism will have performance, consistency and accuracy acceptance criteria), some are a little harder to frame. Technical mechanisms formed to provide a common way of doing something in an architecture, or to express its shape and aesthetics, may feel like they only make sense in terms of the development team’s acceptance criteria; however, I always make sure they relate back to a story in that case, otherwise I could be needlessly gold plating.

Mechanisms are best found by understanding the architectural profile initially and then by actually building the system. If the customer doesn’t have a story that will be satisfied in part by a mechanism then it probably shouldn’t be there. Even if it is shiny.

Lightweight architecture sketch in a single diagram

This blog is based on Architecture in Holistic Software Engineering.

I’ve been doing architecture for a while; in fact it’s what I used to do as my main job. I’ve taught UML, object-oriented design and coding, and various bits of various processes for years. One thing that’s struck me over the years is that most of the descriptions of how to capture and communicate architecture aren’t simple enough.

I quite like UML: it’s useful to be able to draw a symbol and have others know what it is without me having to explain what I mean. But I don’t like the way it has so many restrictive rules that stop me from making a nice sketch, and in any case not everyone knows the language to the same degree. I need something a bit lighter.

I don’t want:

  • to be limited to the symbology of UML
  • a lot of model structure with interconnected diagrams
  • endless detail
  • every class on the diagram
  • to follow all of the rules

I do want:

  • the symbology of UML
  • the important elements
  • to give a feel of the important structural, logical and physical stuff
  • just one diagram

I’ve always done a high level diagram that shows the overall pattern for my architecture, something like layers or pipes and filters or whatever. I’ve also always done a breakdown of the important stuff within each layer but I’ve had the best success (in terms of communicating with others) when I’ve mixed both, with elements of the target deployment and actor interaction.

Being terrible at naming things I call this marvellous diagram the “Architectural Overview Sketch”. Here’s an example:

It expresses all of the structural things that I care about:

  • the shape and feel of the system
  • high level layers
  • primary interfaces between subsystems
  • target client platforms
  • User – GUI interaction paradigm
  • important classes in each layer and major layer package structure
  • critical data schema
  • interaction with external APIs
  • the middleware and database hosting and distribution

I might have more diagrams to explain more structure in part of this if it’s really important, but I don’t want to have every class in my system on a diagram somewhere. I’m using my diagrams to communicate, not specify. I’ve broken a bunch of UML rules, of course, and there’s a lot of implied stuff, but adding those details would make it harder to explain what I really want. One thing I really like about it is how it shows the important detailed parts of a design in the context of the bigger architecture.

Architecture, like any design, is best expressed in terms of both structure and behaviour. So far this is just structure; there are some hints at behaviour but nothing terribly useful. My next post will be about how I minimally specify the important bits of an architecture’s behaviour – the mechanisms.

Quality Confidence: a lead measure for software quality

For a long time I’ve wanted to be able to express the quality of my current software release in a simple intuitive way. I don’t want a page full of graphs and charts I just want a simple visualisation that works at every level of requirements to verification (up and down a decomposition/recomposition stack if you’ve got such a beasty). My answer to that is Quality Confidence.

What it is

QC combines a number of standard questions into a single simple measure of the current quality of the release. Instead of going to each team/test manager/project manager, asking them the same questions and trying to balance the answers in my head, I can get a simple measure that I can use to quantitatively determine whether my teams are meeting our required “level of done“. The QC measure combines:

  • how much test coverage have we got?
  • what’s the current pass rate like?
  • how stable are the test results?

We can represent QC as a single value or show it changing over time as shown below.

How to calculate it

Quality Confidence is 100% if all of the in scope requirements have got test coverage and all of those tests have passed for the last few test runs.

To calculate QC we track the requirements in a project and when they’re planned for/delivered. This limits QC to only take delivered requirements into account: there’s no point in saying we’ve only got 10% quality in the current release because it only passes some of the tests, when the rest haven’t been delivered yet.

We also track all of the tests related to each requirement, and their results for each test run. We need to assert when a requirement has “enough coverage” so we know whether to include it or not. The reason is that if I say a requirement has been delivered but it doesn’t yet have enough test coverage, then even if all of its testing has passed and been stable I don’t want it adding to the 100% of potential quality confidence. The assertion that coverage isn’t enough means that we aren’t confident in the quality of that requirement.

So 100% quality for a single requirement that’s in scope for the current release is when all the tests for that requirement have been run and passed (not just this time but for the last few runs) and that the requirement has enough coverage. For multiple requirements we simply average (or maybe weighted average) the results across the requirements set.

If we don’t run all the tests during each test run then we can interpolate the quality for each requirement, but I suggest decreasing the confidence for a requirement (maybe by 0.8) for each missing run – after all, just because a test passed previously doesn’t mean it’s going to still pass now. We also decrease the influence of each test run on the QC of a requirement based on its age, so that if a test failed 5 runs ago it has less impact on the QC than the most recent test run. Past 5 or so test runs (depending on test cycle frequency) we decrease the influence to zero. More info on calculation here.
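Here’s a minimal sketch of that calculation in Python. The 5-run window and the 0.8 penalty per missed run come from the text; the linear recency weighting and all the data shapes and names are my own assumptions, not the spreadsheet maths.

```python
def requirement_qc(runs, enough_coverage, window=5, miss_penalty=0.8):
    """Quality Confidence for one delivered requirement.

    runs: pass rates (0.0-1.0) per test cycle, oldest first; None means
    the requirement's tests weren't run that cycle."""
    if not enough_coverage:
        return 0.0                       # no confidence without coverage
    recent = runs[-window:]              # runs older than the window count zero
    if not recent:
        return 0.0
    # Interpolate missed runs by carrying the last known rate forward,
    # decayed by the penalty for each consecutive miss.
    filled, carried, misses = [], 0.0, 0
    for rate in recent:
        if rate is None:
            misses += 1
            filled.append(carried * miss_penalty ** misses)
        else:
            carried, misses = rate, 0
            filled.append(rate)
    weights = range(1, len(filled) + 1)  # newer runs weigh more
    return sum(r * w for r, w in zip(filled, weights)) / sum(weights)


def release_qc(requirements):
    """Simple (unweighted) average of QC across the in-scope requirements."""
    scores = [requirement_qc(r["runs"], r["enough_coverage"])
              for r in requirements]
    return sum(scores) / len(scores) if scores else 0.0
```

A requirement with full coverage and a clean recent run history scores 1.0; a missed or failed run, or a coverage assertion of “not enough”, pulls it down.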

So… how much coverage is enough?

Enough coverage for a requirement is an interesting question… For some it’s when they’ve covered enough lines of code, for others the cyclomatic complexity has an impact, or the number of paths through the requirements/scenarios/stories/use cases etc. For me, a requirement has enough test coverage when we feel we’ve covered the quality risks. I focus my automated testing on basic and normal flows and my human testing on the fringe cases. Either way, you need to make the decision of when enough is enough for your project.

To help calibrate this you can correlate the QC with the lag measure of escaped defects.

Measurement driven behaviour

The QC measure is quite intuitive and tends to be congruent with people’s gut feel of how the project/release is going, especially when shown over time. However, there’s simply no substitute for going and talking to people: QC is a useful indicator, but not something you can use in place of real communication and interaction.

The measurement-driven behaviour for QC is interesting, as you can only calculate it if you track requirements (of some form) related to tests and their results. You can push it up by adding more tests and testing more often 🙂 Or by asserting you have enough coverage when you don’t 😦 However, correlation with escaped defects would highlight that false assertion.

If you’ve got a requirements stack ranging from some form of high level requirements with acceptance tests to low level stories and system tests you can implement the QC measure at each level and even use it as a quality gate prior to allowing entry to a team of teams release train.

Unfortunately, because my mate Ray and I came up with this in a pub, there aren’t any tools (other than an Excel spreadsheet) that automatically calculate QC yet. If you’re a tool writer and would like to, please do – I’ll send you the maths!

I’m coming out as not-Agile and not post-Agile

Big A vs. Little a

Huh? What? I’ve written a fair bit on this blog about agile topics, but I always try to write about agility with a small “a”. I’m not really into Agile with a big “A” though – I’m not into doing things according to a set of rules and having arguments about whether I’m doing it right or not. I’m not anti-agile, but I’m increasingly anti-Agile.

To me, the ideological arguments, definitions of everything, frameworks, money-spinning certifications and money-spinning tooling are what’s wrong with doing “Agile”. Being empirical, reflecting and adapting, honestly communicating and putting people first as an approach is being “agile”.

I don’t really like the term “post-Agile” either, though, as it comes with a bunch of baggage and is easily misinterpreted – and I still see benefits in adopting agile practices. I don’t want to see another specific set of rules or a statement of beliefs with elitist signatures. For me, what’s next after agile is about dropping the ideology in software process: destroying the ivory tower of trademarks, formal definitions, money-spinning tools and money-spinning certification programmes.

We need to get rid of the League of Agile Methodology Experts and if anyone says “Ah, but this is different” when showing a website of process content then you have my permission to hit them with a stick.

So what does the future look like?

Software development is a complex social activity involving teams of people forming and self-organising to work together, sometimes in teams of teams, which is even harder. As technology increasingly abstracts upwards and rises in maturity, so does the way that developers, managers and organisations think about software and doing software. I think the problem is becoming increasingly social, and the solutions will start looking increasingly social, using more “soft practices”.

Software process improvement agents/consultants/coaches/mentors (including myself) need to take a long hard look at themselves. Are they telling people how to do it right when they can’t even write HelloWorld in a modern language? I’ve said that to some acquaintances in the industry, generously qualifying “modern language” as something significantly commercially used in the last 10 years and seen them look quite offended at my insulting affront to their professional integrity. I’ll go out on a limb and say you can’t coach a software development team if you don’t know how to write software.

So… software process improvement?

The world runs on software, it’s everywhere and it’s critical. Getting better at doing software, improving the software value chain, is a noble aim and will continue to be as it means getting better at doing our businesses and our lives.

For me, process improvement (I dislike that phrase as well) is going to be more about combining psychology-based business change practices with the best bits of a wide variety of ways of working (agile, lean, Scrum, what you currently do, RUP, Kanban, various workflow patterns etc.) and with technical software development practices like continuous integration and continuous delivery.

We need to work together, not as “leaders and followers” or “consultants and clients” but as collaborative peers working together to apply specialist skills and experience so that we can all improve. Smart people should be treated as smart people, we have much to learn from them and should be thinking in terms of fellowship rather than leadership.

I’m calling this overlap “soft practices”* because the term is evocative of:

  • The practice of doing software
  • The practices involved in doing software
  • Being able to deal with people is sometimes called having “soft skills”
  • Soft power

What do you think about “post-Agile”?

* I’ve even set up a company called Soft Practice to do this stuff, that’s why I’ve not been blogging much recently, who knew there’d be so many forms to fill in!

Edit 14/8/13: Seems others are now talking about the same things: Ken Schwaber, Dave Anderson, Aterny
