Mike MacDonagh's Blog

Somewhere in the overlap between software development, process improvement and psychology

Scaled Agility: Architectural profiling

Name: Architectural Profiling (shortcut architectural analysis and lightweight capture) – an Agile at Scale practice

When it might be appropriate

  • In situations where a lightweight approach to intentional architecture is required
  • In situations where high design formality isn’t required
  • When a simple approach to architecture analysis is required at a team-of-teams level, before more detailed analysis in the contributing teams
  • Where a team wants to cut wasteful requirements and architectural “analysis paralysis” without ignoring technical risks
  • In system of systems development

What is it?

When I look at a potential (or existing) system I think about its complexity along a few dimensions. They’re not set in stone and I might add or remove dimensions as the mood, and context, takes me. Doing this early on gives me a feel for the shape of a project’s requirements, architecture and solution. In fact it also means I can shortcut writing a whole bunch of requirements, acceptance tests, designs and even code and tests.

Here’s an example of thinking about a simple-ish app that does some fairly hefty data processing, needs to do it reasonably quickly (but not excessively so) and has got to do some pretty visualisation stuff. Other than that it’s reasonably straightforward.

You might notice that the x-axis is pretty much FURPS with a couple of extras bolted on (I’ll come back to the carefully avoided first dimension in a minute).

The y-axis ranges from no complexity to lots of complexity but is deliberately not labelled (normally one of my pet hates) so we can focus on the relative complexity of these dimensions of the requirements, quality and architecture – and therefore the solution.

The height of one of these bars helps me shape the architectural approach I’ll take to the project, and which bits and bobs I can reuse from my super bag of reuse.
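To make this concrete, here’s a minimal sketch of how such a profile might be captured and used – the dimension names, scores and threshold below are invented for illustration, not part of the practice:

profile = {
    "Functionality": 2,    # simple-ish feature set
    "Usability": 4,        # pretty visualisation stuff
    "Reliability": 2,
    "Performance": 3,      # hefty data processing, reasonably quick
    "Supportability": 1,
    "Data": 4,             # an extra dimension bolted onto FURPS
}

# Dimensions above the threshold get intentional architecture (a mechanism,
# a spike, something from the super bag of reuse); the rest can emerge.
ATTENTION_THRESHOLD = 3

for dimension, complexity in profile.items():
    if complexity >= ATTENTION_THRESHOLD:
        print(f"{dimension}: needs up-front architectural attention")
    else:
        print(f"{dimension}: let it emerge")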


Lightweight architectural mechanisms – specification by example

In my previous post I talked about using a sketch to describe architectural structure, but the other part of a useful architectural description is its dynamics, best expressed as architectural mechanisms.

Mechanisms are little snippets of the architecture that address an important problem, provide a common way of doing something or are good examples of how the architecture hangs together.

Mechanisms exist within the context of an architecture, which provides overall structure for the mechanisms. I tend to use a simple architectural overview sketch to do that and then further refine the architecture, if necessary, in terms of mechanisms according to the architectural profile and (during development) the needs of my team.

Sometimes during a project the team will comment on the need to have a common way of doing something, or that they’ve uncovered something tricky that we need to consider as a more significant part of the system than our early analysis showed. In these cases it’s time to create a mechanism.

Mechanisms are great, but you don’t want too many of them, or to document and detail them too much – just enough to communicate the architecture and support maintenance efforts. Indeed writing too much actually makes it harder to communicate.

Mechanisms are best expressed in terms of their structure and behaviour. I tend to use a simple class diagram for the first and whatever seems appropriate for the second. This might be a UML sequence diagram, but I don’t really like those; instead I might use a good old fashioned activity diagram, or a flowchart with GUI mockups in the nodes. Either way I recommend limiting the documentation and description: just because one flow is worth writing down to explain it, the others might not be. In this way I do architectural specification by example. Once I’ve written enough about a mechanism that the rest can be inferred, I stop.

The words aren’t important in this example but you can see that I try to fit the description into a fairly small, concise area – that helps me focus on just the really important stuff. In the top left there’s a list titled “Appropriate for stories like:” which is an indicative list of a few things to which the mechanism is appropriate. Next to it is some blurb that says what it’s for and the main scenarios it covers, so in the case of persistency it’s the normal query, create, edit, save & delete. There might be some notes around important constraints or whatever else is important.

I’ll then describe each important scenario in terms of its behaviour in whatever language or visual form makes sense. Sometimes this is a photo of a whiteboard :) Sometimes it’s text, sometimes it’s a combination of those things.
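To illustrate the shape of one of these mechanism one-pagers, here’s a rough sketch of it as data – the field names and the persistence details are my own assumptions, not a fixed template:

persistence_mechanism = {
    "name": "Persistency",
    "appropriate_for_stories_like": [
        "Save a draft order",
        "Edit customer details",
        "Delete an old report",
    ],
    "scenarios": ["query", "create", "edit", "save", "delete"],
    "constraints": ["must work offline", "no blocking saves in the GUI thread"],
    # Specification by example: only the flows worth writing down get
    # described in full; the rest are inferred from them.
    "described_flows": {
        "create": "photo of the whiteboard activity diagram",
    },
}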

The flip side

Just like stories, which have a flip side containing their acceptance tests, I like to put acceptance tests on the flip side of my mechanisms. Although many are easy to frame in terms of customer acceptance tests (e.g. a Search Mechanism will have performance, consistency and accuracy acceptance criteria) some are a little harder to frame. Technical mechanisms formed to provide a common way of doing something in an architecture, or to express the shape and aesthetics of an architecture, may feel like they only make sense in terms of the development team’s acceptance criteria; however I always make sure they relate back to a story if this is the case, otherwise I could be needlessly gold plating.
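As a sketch of what that flip side might hold for the Search Mechanism example above – the thresholds and the search interface are invented placeholders; the real criteria come from the customer’s stories:

# Flip side of the Search Mechanism: acceptance criteria as executable checks.
def test_search_performance(search):
    assert search.query("common term").elapsed_seconds < 2.0  # placeholder limit

def test_search_consistency(search):
    # The same query should return the same results
    assert search.query("same term").results == search.query("same term").results

def test_search_accuracy(search, corpus):
    # Every document known to contain the term should be found
    found = set(search.query("needle").results)
    assert corpus.documents_containing("needle") <= found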

Mechanisms are best found by understanding the architectural profile initially and then by actually building the system. If the customer doesn’t have a story that will be satisfied in part by a mechanism then it probably shouldn’t be there. Even if it is shiny.

Lightweight architecture sketch in a single diagram

I’ve been doing architecture for a while, in fact it’s what I used to do as my main job. I’ve taught UML, Object Oriented design and coding, and various bits of various processes for years. One thing that’s struck me over the years is that most of the descriptions of how to capture and communicate architecture aren’t simple enough.

I quite like UML: it’s useful to be able to draw a symbol and have others know what it is without me having to explain what I mean. But I don’t like the way it has so many restrictive rules that stop me from making a nice sketch, and everyone else doesn’t seem to know the language to the same degree. I need something a bit lighter.

I don’t want:

  • to be limited to the symbology of UML
  • a lot of model structure with interconnected diagrams
  • endless detail
  • every class on the diagram
  • to follow all of the rules

I do want:

  • the symbology of UML
  • the important elements
  • to give a feel of the important structural, logical and physical stuff
  • just one diagram

I’ve always done a high level diagram that shows the overall pattern for my architecture, something like layers or pipes and filters or whatever. I’ve also always done a breakdown of the important stuff within each layer but I’ve had the best success (in terms of communicating with others) when I’ve mixed both, with elements of the target deployment and actor interaction.

Being terrible at naming things I call this marvellous diagram the “Architectural Overview Sketch”. Here’s an example:

It expresses all of the structural things that I care about:

  • the shape and feel of the system
  • high level layers
  • primary interfaces between subsystems
  • target client platforms
  • User – GUI interaction paradigm
  • important classes in each layer and major layer package structure
  • critical data schema
  • interaction with external APIs
  • the middleware and database hosting and distribution

I might have more diagrams to explain more structure in part of this if it’s really important, but I don’t want to have every class in my system on a diagram somewhere. I’m using my diagrams to communicate, not specify. I’ve broken a bunch of UML rules of course, and there’s a lot of implied stuff, but adding those details makes it harder to explain what I really want. One thing I really like about it is how it shows the important detailed parts of a design in the context of the bigger architecture.

Architecture, like any design, is best expressed in terms of both structure and behaviour. So far this is just structure; there are some hints at behaviours but nothing terribly useful. My next post will be about how I minimally specify the important bits of an architecture’s behaviour – the mechanisms.

Testing is dead

Long live testing!

There’s an apparent conflict between the idea of a cross-functional team collaborating on their work and specialist testers who only do testing. There’s an awful lot wrong with the “them and us” attitude between developers and testers (in both directions) in some organisations. One of the biggest problems with isolating testing from development is that it takes responsibility for quality away from the developers; it creates a poacher-and-gamekeeper culture, driving divisions between people. There’s a well established cliché of development chucking rubbish over the wall; there’s also a well established cliché of testers being empire-building bureaucrats who only ever say no.

The biggest problem from my point of view in having separate developers and testers is that it creates two teams – a team of teams – and that causes all sorts of problems with communication, identity, culture, decision making, definition of done etc. If you’ve got a small team and you split it into two by having devs and testers, you’re taking something that is potentially high performing and deliberately interfering to make it less likely to succeed. A typical management activity.

Testing as part of the team

In my teams we’ve always considered that we have to build quality into every activity if we have a hope of achieving quality in the product. That means our lowest levels of done aren’t that we wrote some code, or did a build, but that we have done some testing. We have tested the basic and alternate flows, or the obvious paths through the stories. We don’t consider it stable/promoted/out of development until this has happened.

Ideally I like to do this in two ways. The first is using automated tests as part of my continuous integration to test the obvious paths and the various parameters that impact the system (checking different data sets against expected functionality etc.). The other is for the team to test each other’s stuff – not with a test script but with the requirement/acceptance criteria. This allows us to check what isn’t “normal”, some of the fringe cases, but mostly to check that our view of “normal” is consistent across the team and improve the common understanding of the product.
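As a sketch of the first of those – checking different data sets against expected functionality in CI – something like pytest’s parametrize works well (the pricing function and the data here are invented for illustration):

import pytest
from myapp.pricing import discounted_price  # hypothetical module under test

# Each row is one obvious path: a data set and the functionality we expect.
@pytest.mark.parametrize("price, customer_type, expected", [
    (100.0, "standard", 100.0),
    (100.0, "loyalty", 90.0),
    (100.0, "staff", 80.0),
])
def test_basic_discount_flows(price, customer_type, expected):
    assert discounted_price(price, customer_type) == expected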

Testing isn’t done by a separate team using separate tools, it’s done by the team for the team.

So what about Testers?

So where do professional testers live in this world? There is still a role for professional testers, but it’s not at the lowest level of development-testing; it’s a little higher up, where having creative, intelligent humans with a quality focus can provide the most value to the software value chain. Human testing isn’t especially useful for testing the normal flows; it’s useful for testing the “fringe cases”.

Rather than focus independent human testing at the lowest levels of done, which are already handled by the team, focus humans on working on the more unusual cases, doing the end-to-end integration scenarios where many of the real problems tend to lie. Testers have a wealth of experience and knowledge of different test techniques that can be applied to address different quality risks, wasting that on simple stuff means they don’t have time to do the important stuff.

Specialist professional testers do have a place in cross-functional agile software development, but it’s doing fringe testing, not basic flow testing.

What is Fringe Testing?

Fringe testing is testing the unusual cases, the things that weren’t written down, weren’t predictable and weren’t obvious. In short it’s the kind of thing inventive, creative humans are good at.
Use Case basic flow

When we’ve got requirements they’re often written in a form that states the normal way something will go, like a basic flow in a use case (or similar). Testing this might be useful, but if it doesn’t work then the developer who wrote it clearly owes everyone a doughnut, because the very first thing that should be tried after writing it is going through the basic flow. In fact it’s a good candidate for automation, and testing it should be considered part of the development team’s lowest levels of their definition of done.

Use Case Alternate Flows

Use Cases are normally elaborated in terms of more than just a basic flow, with a set of alternative or exceptional cases or scenarios. You might consider each of these scenarios as User Stories – I quite like this way of working, with Use Cases to make a high level scope diagram and User Stories for actually elaborating the requirements and acceptance criteria (see Use Case vs. User Story). The thing is, if it’s been written down in the requirements then the team should be testing it as a matter of course; the probability of finding defects in these flows is again pretty low. If they’re written as stories and implemented incrementally with acceptance criteria then it’s even more unlikely, as they’re not much different from the basic flow. Just another story that should be developed and tested as part of the lowest levels of getting things done.

Use Case Fringes

The fringe cases are the weird things that weren’t written down, that are odd ways through the requirements, and are the places where the quality risks probably lie. Coming up with these is what real professional testers are good at and what proper test techniques are good for. Testing the basic paths of everything is less useful.

I’ve often thought that one way to simulate a user using my app is to take a scattergun approach to clicking and pressing buttons, because while I think algorithmically, users often don’t. Many “normal” usages of your software may actually be fringe cases, because things aren’t necessarily always used as designed. Of course that makes for a good argument for simpler interfaces (both GUIs and APIs).

So “Fringe Testing” is simply testing the unusual paths through your software, the places that most likely are the highest quality risks. Of course the most “fringey” cases are often the cross requirement paths, the end to end scenarios, that users take through your software or set of integrating components. As for traditional testing… I think it’s dead.

I’ve previously blogged on the Quality Confidence metric being a useful lead indicator for software quality. One common question I’ve had on that is “Are you expecting us to fully test every requirement?”. The simple answer is “no”: testing should be focussed on the fringe cases, to put an expensive resource where it’s most valuable. The Quality Confidence measure requires you to assert when you have enough coverage; for me that’s when you feel you’ve got a handle on the quality risks by testing a mix of important flows, risky stories and weird fringe cases – the rest I cover with simple professional development, which always covers verification as part of development.

Quality Confidence: a lead measure for software quality

For a long time I’ve wanted to be able to express the quality of my current software release in a simple, intuitive way. I don’t want a page full of graphs and charts; I just want a simple visualisation that works at every level, from requirements to verification (up and down a decomposition/recomposition stack if you’ve got such a beasty). My answer to that is Quality Confidence.

What it is

QC combines a number of standard questions together in a single simple measure of the current quality of the release, so instead of going to each team/test manager/project manager, asking them the same questions and trying to balance the answers in my head, I can get a simple measure that I can use to quantitatively determine whether my teams are meeting our required “level of done”. The QC measure combines:

  • how much test coverage have we got?
  • what’s the current pass rate like?
  • how stable are the test results?

We can represent QC as a single value or show it changing over time as shown below.

How to calculate it

Quality Confidence is 100% if all of the in scope requirements have got test coverage and all of those tests have passed for the last few test runs.

To calculate QC we track the requirements in a project and when they’re planned for/delivered. This is to limit the QC to only take into account delivered requirements. There’s no point in saying we’ve only got 10% quality confidence in the current release because it only passes some of the tests, when the rest of the requirements haven’t been delivered yet.

We also track all of the tests related to each requirement, and their results for each test run. We need to assert when a requirement has “enough coverage” so we know whether to include it or not. The reason for this is that if a requirement has been delivered but doesn’t yet have enough test coverage, then even if all of its testing has passed and been stable, I don’t want it adding to the 100% of potential quality confidence. The assertion that coverage isn’t enough means that we aren’t confident in the quality of that requirement.

So 100% quality for a single requirement that’s in scope for the current release is when all the tests for that requirement have been run and passed (not just this time but for the last few runs) and the requirement has enough coverage. For multiple requirements we simply average (or maybe weighted-average) the results across the requirements set.

If we don’t run all the tests during each test run then we can interpolate the quality for each requirement, but I suggest decreasing the confidence for a requirement (maybe by 0.8) for each missing run. After all, just because a test passed previously doesn’t mean it’s going to still pass now. We also decrease the influence of each test run on the QC of a requirement based on its age, so that if a test failed 5 runs ago it has less impact on the QC than the most recent test run. Past 5 or so test runs (depending on test cycle frequency) we decrease the influence to zero. More info on calculation here.
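Putting the whole calculation together in a minimal sketch – the 0.8 penalty and the 5-run window come from the description above, while the linear age weighting and the data shapes are my assumptions (any decreasing weighting would do):

WINDOW = 5             # only the last few runs influence QC
MISSING_PENALTY = 0.8  # confidence decay per run a test wasn't executed

def requirement_qc(covered, pass_rates):
    """QC for one delivered requirement, in the range 0.0 to 1.0.

    pass_rates holds the requirement's test pass rate per run, most
    recent last; None means the tests weren't run that cycle.
    """
    if not covered:
        return 0.0  # no asserted coverage means no confidence
    recent = pass_rates[-WINDOW:]
    if not recent:
        return 0.0
    # Interpolate missing runs from the last known result, decaying
    # confidence by the penalty for each run that was skipped.
    filled, last = [], 0.0
    for rate in recent:
        last = last * MISSING_PENALTY if rate is None else rate
        filled.append(last)
    # Newer runs influence the result more; past the window, not at all.
    weights = range(1, len(filled) + 1)  # assumption: linear age decay
    return sum(w * r for w, r in zip(weights, filled)) / sum(weights)

def release_qc(requirements):
    """Average QC across the delivered, in-scope requirements."""
    scores = [requirement_qc(covered, rates) for covered, rates in requirements]
    return sum(scores) / len(scores) if scores else 0.0

# Two delivered requirements over five test runs (None = tests not run)
print(release_qc([
    (True,  [1.0, 1.0, None, 1.0, 1.0]),  # covered, stable passes
    (False, [1.0, 1.0, 1.0, 1.0, 1.0]),   # passing but coverage not asserted
]))

A weighted average across requirements (by business value, say) drops in easily where the scores are summed.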

So… how much coverage is enough?

Enough coverage for a requirement is an interesting question… For some it’s when they’ve covered enough lines of code, for others the cyclomatic complexity has an impact, or the number of paths through the requirements/scenarios/stories/use cases etc. For me, a requirement has enough test coverage when we feel we’ve covered the quality risks. I focus my automated testing on basic and normal flows and my human testing on the fringe cases. Either way, you need to make the decision of when enough is enough for your project.

To help calibrate this you can correlate the QC with the lag measure of escaped defects.
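Assuming you log QC per release alongside the escaped defects found afterwards (the history below is invented), that calibration can be as simple as a correlation check:

import statistics

# Per-release history: (QC at release time, escaped defects found later)
history = [(0.95, 2), (0.80, 7), (0.90, 3), (0.70, 12)]
qc, defects = zip(*history)

r = statistics.correlation(qc, defects)  # Pearson's r, Python 3.10+
print(f"QC vs escaped defects: r = {r:.2f}")  # strongly negative is what we want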

Measurement driven behaviour

The QC measure is quite intuitive and tends to be congruent with people’s gut feel of how the project/release is going, especially when shown over time. However, there’s simply no substitute for going and talking to people. QC is a useful indicator, not something that you can use in place of real communication and interaction.

The measurement driven behaviour for QC is interesting, as you can only calculate it if you track requirements (of some form) related to tests and their results. You can push it up by adding more tests and testing more often :) Or by asserting you have enough coverage when you don’t :( However, correlation to escaped defects would highlight that false assertion.

If you’ve got a requirements stack ranging from some form of high level requirements with acceptance tests to low level stories and system tests you can implement the QC measure at each level and even use it as a quality gate prior to allowing entry to a team of teams release train.

Unfortunately, because me and my mate Ray came up with this in a pub, there aren’t any tools (other than an Excel spreadsheet) that automatically calculate QC yet. If you’re a tool writer and would like to, then please do – I’ll send you the maths!

scm checkin and deliver script for IBM Jazz RTC SCM

Following on from a previous post where I covered how to load some content from a Rational Team Concert server in a single command, here’s a script I use to check in and deliver in one command back to the stream. As before this is just a wrapper around the standard command line, and maybe more appropriate for the occasional user than a CLI-loving geek – but I’m both, so here it is!

Also available on github for easy cloning with SimpleGit.

#!/bin/bash

echo ""
echo "Checks in and delivers current workspace changes against a workitem"
echo "Usage: scmdeliver <workitem_number>"
echo ""

workitem=$1
error=0

# A work item id is required so the delivery can be traced back to it
if [ -z "$1" ]
then
  echo "Error: Please specify a workitem id"
  error=1
fi

if [ $error -eq 0 ]
then
  echo "Checking in..."
  # A fresh changeset has no comment yet, so the "<No comment>" line
  # identifies the changeset we've just created
  checkin_str=$(scm checkin * | grep "<No comment>")
  # The changeset id is the value in parentheses on that line
  change_id=$(echo "$checkin_str" | cut -d "(" -f2 | cut -d ")" -f1)
  echo "Checked local changes into changeset $change_id"
  echo "Associating with work item $workitem"
  scm changeset associate "$change_id" "$workitem"
  echo "Delivering..."
  scm deliver
fi

So using this script and the scmload script in the previous post you can do this to hack some code:

scmload "Stream A" "Component A"

# Make a bunch of changes

scmdeliver 1234

Simplez!

Note this script assumes a clean workspace before you start making edits, if you want it to work with a previously created changeset you’ll need to tweak it.

Unfortunately you can’t find out your workitems from the command line but that’s a problem for IBM to fix!

Check out my RTC Masterclasses and my advanced streaming guide:

  1. Development, Integration and Release Streams
  2. Streams for parallel branched development
  3. Streams for component reuse

Can’t open links in Firefox on Ubuntu when it’s open – Fixed!

This is a little obscure, but there are a number of people with this problem and too many “simple” answers that just don’t work. There are a number of situations that cause Firefox to respond with an error message when you click on a link from outside of the browser (like in Thunderbird, or any other application that tries to launch a URL). You get an error message saying:

Firefox is already running, but is not responding. To open a new window, you must first close the existing Firefox process, or restart your system

The normal reasons for this are things like locked profiles etc., which are well covered here. But some users get this error message even when Firefox is open and there aren’t locked profiles :( The solution is to edit the way that Ubuntu invokes Firefox.

Grab a terminal and go to ~/.local/share/applications – this is where Unity stores its information for launchers in the Unity dock thingy. You should have a Firefox.desktop in here; you might have several, in which case the one called “Firefox Web Browser.desktop” is probably the one that Ubuntu is using by default.

Edit the file and have a look at the “Exec=” line about 4 lines down. In my case the problem was caused because this line was referring to an old profile that I’d previously removed trying to fix the problem. If you’re not sure what it should look like, set it to “Exec=firefox %u”, save, exit and click links once more :)
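For reference, a healthy launcher file looks roughly like this (trimmed down; yours will have more lines, and the broken version of mine had the Exec line pointing at the removed profile, something like “Exec=firefox -P oldprofile %u”):

[Desktop Entry]
Name=Firefox Web Browser
Comment=Browse the World Wide Web
Exec=firefox %u
Terminal=false
Type=Application
Icon=firefox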

I’m coming out as not-Agile and not post-Agile

Big A vs. Little a

Huh? What? I’ve written a fair bit on this blog about agile topics, but I always try to write about agility with a small “a”. I’m not really into Agile with a big “A” though – I’m not into doing things according to a set of rules and having arguments about whether I’m doing it right or not. I’m not anti-agile, but I’m increasingly anti-Agile.

To me, the ideological arguments, definitions of everything, frameworks, money-spinning certifications and money-spinning tooling are what’s wrong with doing “Agile”. Being empirical, reflecting and adapting, honestly communicating and putting people first as an approach is being “agile”.

I don’t really like the term “post-Agile” either, though, as it comes with a bunch of baggage and is easily misinterpreted – and I still see benefits in adopting agile practices. I don’t want to see another specific set of rules or a statement of beliefs with elitist signatures. For me what’s next after agile is about dropping the ideology in software process, destroying the ivory tower of trademarks, formal definitions, money-spinning tools and money-spinning certification programmes.

We need to get rid of the League of Agile Methodology Experts and if anyone says “Ah, but this is different” when showing a website of process content then you have my permission to hit them with a stick.

So what does the future look like?

Software development is a complex social activity involving teams of people forming and self-organising to work together, sometimes in teams of teams, which is even harder. As technology increasingly abstracts upwards and rises in maturity, so does the way that developers, managers and organisations think about software and doing software. I think the problem is getting increasingly social, and the solutions will start looking increasingly social, using more “soft practices”.

Software process improvement agents/consultants/coaches/mentors (including myself) need to take a long hard look at themselves. Are they telling people how to do it right when they can’t even write HelloWorld in a modern language? I’ve said that to some acquaintances in the industry, generously qualifying “modern language” as something significantly commercially used in the last 10 years and seen them look quite offended at my insulting affront to their professional integrity. I’ll go out on a limb and say you can’t coach a software development team if you don’t know how to write software.

So… software process improvement?

The world runs on software; it’s everywhere and it’s critical. Getting better at doing software – improving the software value chain – is a noble aim and will continue to be, as it means getting better at doing our businesses and our lives.

For me process improvement (I dislike that phrase as well) is going to be more about combining psychology-based business change practices with the best bits of a wide variety of ways of working (agile, lean, Scrum, what you currently do, RUP, Kanban, various workflow patterns etc.) and with technical software development practices like continuous integration and continuous delivery.

We need to work together, not as “leaders and followers” or “consultants and clients” but as collaborative peers working together to apply specialist skills and experience so that we can all improve. Smart people should be treated as smart people, we have much to learn from them and should be thinking in terms of fellowship rather than leadership.

I’m calling this overlap “soft practices”* because the term is evocative of:

  • The practice of doing software
  • The practices involved in doing software
  • Being able to deal with people is sometimes called having “soft skills”
  • Soft power

What do you think about “post-Agile”?

* I’ve even set up a company called Soft Practice to do this stuff, that’s why I’ve not been blogging much recently, who knew there’d be so many forms to fill in!

Edit 14/8/13: Seems others are now talking about the same things: Ken Schwaber, Dave Anderson, Aterny

Process vs. Tools

What’s more important: Process or tools?

I’ve sat through numerous arguments with people trying to work out whether process or tools are more important to their software development effort. The arguments go along the lines of:

Process Geek: Tools are just the things you use to do the process, the value is in the process. Let’s do a process first roll out.

Tools Geek: Tools are the things people actually use. Process is academic and is constrained by the tool features anyway. Let’s do a tools based roll out.

Sadly both of these are real life quotes from people I’ve worked around in the past. I won’t embarrass them by naming them because they’re both really wrong. For me the real value is not in the tools or the process but in the people. Process and tools are something that you add if the people need them.

Minimal Tooling

For me the best planning tool is a whiteboard. The best architecture and design tool is also a whiteboard. I like to capture some info about what I’m doing and how close I am to achieving my goals – some really simple measures. The best tool for that is probably Excel, despite my love of Open Source. Other than that I need to manage some lists of things like requirements (stories/use cases, whatever), defects and whatever other bits the team thinks are important. They might fit on a whiteboard but typically we want to have some attributes on them, be able to sort them and link them. The one downside of a whiteboard is the lack of drag and drop. I’ve used a few different types of smart board and active touch screens but none of them have really given me the smooth experience I want.

The one bit of tooling I really need is my IDE + SCM + Continuous Integration.

Minimal Process

This is potentially a can of worms if I go into things like process kernels and ideological arguments about different agile approaches and practice collections. There’s no such thing as a process silver bullet, so for me it’s all about the team and being effective. Teams need to:

  • Have Clear Obligations
  • Have Clear Business Priorities
  • Seize their autonomy
  • Have the relevant technical skills and resource availability
  • Be active in actually doing their work (seems obvious but it’s something a lot of processes seem to hinder rather than help)
  • Demonstrate visible progress (via working software not a tracking gantt chart)
  • Continuously improve themselves (because no way of working is perfect)

This list is based on the 7 Habits of Agile Teams by Roly Stimson.

So why go more complicated?

Unfortunately life is rarely simple. Many teams work in complex environments where a number of factors might mean they need more than this kind of minimal set, including things like physical distribution; audit, regulatory or security constraints; cross-team interaction in system of systems environments; more people; etc.

When faced with increased complexity it might be appropriate to add more powerful tooling or development practices. The more people involved in a project the harder it is to herd all of those socially complex cats into working together.

This is why ALM suites exist, ranging from plugging together some open source bits like git, SimpleGit, jenkins, agilefant, bugzilla etc. to commercial vendor products like IBM Rational RTC (which does agile planning, work item management, SCM and build in one tool) and other IBM Rational Jazz tools – it’s worth noting that you can also get RTC free for up to 10 users, or for academic/research use on jazzhub.

Alternatively there’s Microsoft TFS for source control, work item tracking etc., tying together existing Microsoft stuff like MS Office, Visual Studio and SharePoint if you’re a more Microsoft-focussed development house.

Of course you can also mix and match this stuff to a greater or lesser degree with various integrations. No toolset is a silver bullet, however, and each has its own problems.

So what’s my point?

My point is that process and tools need to reinforce each other, and that both should be selected based on the needs of the team and the environment the team has to operate in. In the process vs. tools argument both lose out to the people. People in a development organisation create value; they just use tools and process to do it.
