How do we join up business strategy to agile development? Is program management relevant? Where do project managers fit in? What about architecture?
Holistic Software Engineering (HSE) answers all of these questions – for free.
Agile and continuous flow are great for small teams, or a small number of inter-related small teams, working on exploratory or maintenance work. But what if we're spending hundreds of millions on an IT strategy of inter-related products that need to work together to deliver business value? And what is business value anyway?
To answer these questions (and more) my friend Steve Handy and I have distilled our collective 30+ years of software experience into a single, cohesive model of software development. We’ve developed the H model, which moves on from the v-model and its siblings by combining:
- Business strategy
- People and team issues
- Iterative and feedback loops
- Lightweight requirements and architecture
- Lean portfolio and program management
- Agile and continuous product delivery
- A focus on integration, quality and business value
…all elegantly combined and de-conflicted by release planning.
We’ve not invented very much, we’ve simply put a lot of good ideas from others together into a cohesive framework. We’ve drawn it all as a big picture and developed a website that shows how to get value from all of this stuff. Everything is clickable, everything has content.
The best bit: it’s free! There’s no paywall, there’s no private “full” version, you can just use it or not as you like.
We don’t believe in process zealotry, or putting academic concerns above clarity and usefulness. HSE is indicative, not prescriptive. You’re already doing it and if you use the big picture to draw red blobs on the bits that aren’t working well, or missing, in your organisation then you can use the model to make tangible improvements – immediately.
Using HSE doesn’t replace any of your existing processes, it provides the glue that joins them all up together in one simple, elegant and cohesive model.
And if it’s too big, we’ve got a small picture, which is essentially “normal” agile development.
Please share HSE with your friends and colleagues.
Points are abstract and relative; comparing them across projects is like comparing apples and elephants.
Releases only make sense within a single project, never mind across project timelines, so why would I want to look at a cross-project burnup? Surely such a thing is foolishness…
Well, maybe not. It might be useful as an agile-at-scale measure, a way of looking at the work churn in a department or other high-level unit in an organisation. I’ve been pondering whether there’s a way of aggregating burnup information and point burnage across teams (with distinct, disjoint timelines), and I think there might be.
Ideally I want to be able to show the amount of work planned within an organisational group and progress towards that scope, showing when scope goes up and down (does that ever happen?). Then I want to show overall progress towards that scope; the angle of the progress line could give me an organisational velocity. Perhaps I could even add an ideal velocity indicating what perfect robots would do if real life never intervened (although that could be dangerous).
First I need a way of normalising points and understanding what 100% of scope means when it can incorporate many projects at different points in their lifecycles. Perhaps one way is to consider everything in terms of percentages; after all, that’s an easy thing for people to consume. Defining 100% of a scope made up of various contributing projects is tricky, since it will change and depend on their releases, continuous flows and changing scope.
A simplistic approach is to use a moving baseline: perhaps we can define 100% of a project’s scope as whatever it thinks it will deliver within the time period being considered (the scope of the x-axis), baselined at 15% of its timeline (or whatever).
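As a rough sketch of that normalisation idea (a minimal sketch assuming a simple equal-weight average; the project names, point values and baselines below are all made up for illustration):

```python
# Sketch: aggregate per-project burnups into one organisational burnup
# by normalising each project's points to a percentage of its own
# baselined scope, so that incomparable point scales cancel out.

def percent_complete(done_points, baseline_points):
    """Progress as a percentage of the project's baselined scope."""
    return 100.0 * done_points / baseline_points

def organisational_burnup(projects):
    """Equal-weight average of percent-complete across projects."""
    percentages = [percent_complete(p["done"], p["baseline"]) for p in projects]
    return sum(percentages) / len(percentages)

# Illustrative data only: "baseline" is the scope the project believed
# in at, say, 15% of its timeline.
projects = [
    {"name": "Billing rewrite", "done": 120, "baseline": 300},  # 40%
    {"name": "Mobile app",      "done": 45,  "baseline": 90},   # 50%
    {"name": "Data platform",   "done": 30,  "baseline": 200},  # 15%
]

print(organisational_burnup(projects))  # 35.0
```

Weighting every project equally is itself a judgement call; you could equally weight by headcount or budget, but then the big projects dominate the picture.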
In the example above this tells me that work is consistently overplanned, not just in terms of actual velocity but in terms of idealised capacity as well – the demand is higher than the supply. I think this could be useful for “agiley” portfolio management.
Perhaps I could start establishing a budget-cycle velocity, and start limiting planned work based on empirical evidence. OK, so no two projects are the same and points aren’t comparable, but the Law of Large Numbers is on my side.
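One way that empirical limit might look in code (again a sketch, not a method: the 10% tolerance and the history values are invented assumptions):

```python
# Sketch: use empirical organisational velocity (percent of scope
# delivered per budget cycle) to sanity-check the next cycle's plan.

def empirical_velocity(delivered_percent_history):
    """Mean percent of scope delivered per past budget cycle."""
    return sum(delivered_percent_history) / len(delivered_percent_history)

def is_overplanned(planned_percent, history, tolerance=1.1):
    """Flag a plan that exceeds observed velocity by more than 10%
    (the tolerance is an arbitrary illustrative choice)."""
    return planned_percent > tolerance * empirical_velocity(history)

# Percent of scope delivered in the last three cycles (made up).
history = [22.0, 25.0, 19.0]   # empirical velocity = 22.0

print(is_overplanned(30.0, history))  # True: 30 > 1.1 * 22.0
```

The point isn’t the arithmetic, it’s that the limit comes from what the organisation has actually delivered rather than from what it hopes to deliver.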
What do you think? Is this bonkers?
There’s a lot of talk in the process improvement industry about the meaning of “agile at scale”, devops, lean and agile for a number of reasons. One reason that I’ve seen in successful agile organisations is that as development maturity increases with true agile adoption bigger problems present themselves. This is the natural progression of the science of development, what used to be considered complex (like Object Orientation) is now normal, a commodity. Innovation in ways of working is happening at the organisational, cross-team, cross-product level.
For me agile at scale (I’ve got fed up of the quotes already) means a couple of different things:
- Repeating agile successes embodied in a team across an organisation (scaling out?)
- Applying agile thinking to cross-product projects
- Applying agile and lean thinking to development organisations
- Applying agile and lean thinking to high assurance environments like medical, security, financial, safety critical, audited, regulated businesses.
Agile and lean? Yep, both with lower case letters. I’m not particularly interested in ideological approaches to software development, I believe strongly in taking the best bits of whatever processes, techniques, practices etc. you find externally, mixing them up with internal practices and ways of doing things to develop simple, pragmatic approaches to ways of working. Both agile and lean schools of thought promote minimising unnecessary work, shorter delivery cycles and higher quality, continuously learning lessons and empirical decision making.
The agile manifesto gave us:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on
the right, we value the items on the left more.
Some great practices have evolved for applying agile and lean thinking, like scrum, kanban etc. However, all of the complex organisations I’ve worked with have found that there’s still space for more thinking about how to run a software business, how to deal with big, complex system-of-systems problems, multiple competing stakeholder sets, programme and portfolio management etc. Not surprising really, because the agile movement wasn’t trying to do any of that stuff.
However, organisations that are successful with agile transformations want to apply the open and honest philosophy behind the agile manifesto to other parts of their business, as well as to bigger and bigger projects and programmes, because the results of “doing agile” (I promise I’ll stop with the quotes soon!) are so attractive when it’s done well, namely:
- Shorter delivery cycles, higher quality
- Deep engagement between customers and development teams leading to respect, collaboration and better morale
- Quick identification of when things are starting to go wrong
Consider the following model, not uncommon amongst large organisations:
This represents a typical software department, or vertical section of one, with a portfolio that provides the funding for work. Big portfolios are normally broken down into a number of programmes, which in turn may levy high-level requirements onto organisations (organisational sub-divisions that own multiple products), which may affect both legacy and new product development. Often within a vertical section of a business there will be many cross-dependencies in terms of requirements, technical dependencies etc. For many large businesses this picture is overly simplistic; indeed, I’ve not included things like projects, component teams and a variety of business organisation constructs like product centres, feature teams etc. So how do you apply agile and lean philosophy to this lot and more?
You can’t simply repeat the same practices recursively throughout an organisation to deal with larger-scale complexity. Imagine a chain of scrum-of-scrums, daily stand-ups at every level (at the same time, or staggered to allow representation up the chain?), sprint plans at programme level etc. And what if the business is regulated, audited, security focussed, high-risk financial, safety critical, etc.?
Ok, so what’s agile at scale then?
Agility at Scale is applying the spirit, if not the letter, of agility and lean thinking to these bigger problems. It’s about:
- Valuing individuals and interactions, encouraging collaboration, reducing layers of communication over processes, tools and hierarchy
- Valuing working software in the form of quality releases from short development cycles over comprehensive documentation, business analysis, enterprise architecture documentation
- Valuing customer, business, developer and operations (see DevOps) collaboration over contract negotiation
- Valuing good governance, transparency and honesty in progress, plans, costs and impediments over regular reporting
- Valuing responding to change over following a plan at all levels of the business
(Borrowed from the Holistic Software Manifesto)
Agility at scale is focussed simply on reducing unnecessary bureaucracy, reducing time to market and improving value.
So how do you achieve it?
The application of:
- Agile at scale practices
- Soft practices
- Technical practices
Of course each of those (and more!) is a complex can of worms in itself. A lot of these higher scale practices are only just emerging from high maturity complex (post-)agile organisations but in time more of those things will turn into links.
A good example of “Agile at Scale” in action is the Project Forum practice.
As always this blog is a stream of consciousness, so please let me know your opinion.
Kelly Drahzal recently published this great presentation on Knowledge Centered Support, which made me think a bit about the nature of support mechanisms. I’m currently engaged in rolling out a large and complex enterprise tool (Rational Portfolio Manager), and its associated governance, portfolio management and project management practices, in a large and complex client.
One of the things we need to do to get these practices and the tool embedded in an organisation is manage support. Our support takes two forms: tools support and process support. Normally when a person thinks they’re asking for one of them they’re actually asking for the other 😛 One of the interesting things about the support that my rollout team provides to the practitioners is that ultimately it’s a transient function – we won’t be the long-term support team on this product. In fact, support will be handed over to the centralised support function and the rollout team (comprised of external consultants (some IJIers, an IBMer and some independents) and contractors) will dissipate into the ether from whence it came. So obviously, as per Kelly’s presentation, we’re very keen on knowledge centered support – we don’t want to waste our time, effort and brain power by re-creating the answers to people’s problems.
So what do we actually do to try to avoid some of these problems and do some knowledge-based support? We’re a transient support function, so we don’t have any super tools or even specialist knowledge base management skills. What we do have is a highly skilled team and a number of communication channels.
We capture all support requests in a humble Excel spreadsheet, regardless of their communication channel, and categorise the requests into a number of categories. (Of course this gave me an excuse to write some cunning macros to keep everything updated automatically.)
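The macros themselves are Excel VBA, but the core of what they do – tallying requests by category so the graphs stay current – can be sketched in a few lines of Python (the channels and categories here are invented for illustration, not our real taxonomy):

```python
# Sketch: count support requests per category, regardless of the
# channel they arrived through. Data is illustrative only.
from collections import Counter

requests = [
    {"channel": "email", "category": "tool"},
    {"channel": "phone", "category": "process"},
    {"channel": "email", "category": "tool"},
    {"channel": "wiki",  "category": "process"},
]

by_category = Counter(r["category"] for r in requests)
print(dict(by_category))  # {'tool': 2, 'process': 2}
```

The same tally per channel (swap `"category"` for `"channel"`) is what tells you where people actually go for help.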
As well as providing lovely graphs the spreadsheet captures the issue and the response. As a result the team can all see who had what problem and how it was resolved. As problems are solved knowledge is created, capturing it in a spreadsheet is all well and good, and can be searched on by the support team but it’s not great in terms of sharing that knowledge broadly. (As it happens the support spreadsheet is publicly accessible via a guest account on our config management repository – but that doesn’t mean anyone is looking!)
To share the knowledge we communicate it through many channels. Sometimes it’s apparent that our education has been lacking some good guidance, so we update the education programme (training courses, open surgeries, lunch ‘n’ learns). We have a wiki where we can post new bits of information, a message board/forum, mailing lists, laminated desk drops, a FAQ on the wiki and also some mentoring guides. One of the functions of our team is to mentor practitioners in the adoption of practices and tools, and to do that we have a number of mentoring packages that we give to adopting teams. Ensuring that the mentors are all saying the same thing, giving the same solution to the same problem, is important. One of the best ways of doing this is to get the mentors together to talk to each other, run through scenarios and gain consensus on the common answers. We also document these scenarios, sometimes in the practitioner-facing User Guide and sometimes in mentor guides.