I’ve done a bunch of professional certifications over the years in processes, languages and technologies. Usually because someone else wanted me to or was paying me to. Generally, I don’t think certifications are worth the paper they’re printed on as they rarely seem to be indicators of real world experience.
As an example, look at Certified Scrum Master. First you have to go on a mandatory two-day in-person course taught by another CSM that costs around $1,000 (pyramid scheme, anyone?) and then correctly answer 24 out of 35 questions (68%) in an open-book, googleable test.
Personally, I found AWS certification different. First, I did it for myself, because I wanted to. But most importantly, it’s quite cheap, and quite difficult.
AWS certification has no mandatory course and uses closed-book, offline tests that cost $150. So it’s significantly cheaper, and having done them myself, I don’t think it’d be possible to pass with purely theoretical knowledge: the emphasis is on practical understanding, not theory. The questions are nicely technical, often with several correct answers where the point is to choose the “best” answer, not just one that’s factually correct. The pass mark isn’t published; instead there’s a bell curve you’ve got to be ahead of (though forums indicate it’s around 75%).
Interestingly I was getting job offers within an hour of getting certified almost a year ago!
I enjoyed doing it so much I’m going to do another one at this year’s AWS re:Invent conference 🙂
An embarrassment of Project Managers
An impasse of Architects
A confusion of Business Analysts
A mob of Developers
A silo of Testers
A brethren of Scrum Coaches
A waste of Lean Consultants
A conspiracy of Process Improvement Consultants
People are important, not roles! But recently I saw a weekly team meeting for a project that had 12 project managers! I thought it was a joke at first.
Can you think of any others?
How do we join up business strategy to agile development? Is program management relevant? Where do project managers fit in? What about architecture?
Holistic Software Engineering (HSE) answers all of these questions – for free.
Agile and continuous flow are great for small teams, or a small number of inter-related small teams, working on exploratory or maintenance work. But what if we’re spending hundreds of millions on an IT strategy of inter-related products that need to work together to deliver business value? What is business value anyway?
To answer these questions (and more) my friend Steve Handy and I have distilled our collective 30+ years of software experience into a single, cohesive model of software development. We’ve developed the H model, which moves on from the V-model and its siblings by combining:
- Business strategy
- People and team issues
- Iterative and feedback loops
- Lightweight requirements and architecture
- Lean portfolio and program management
- Agile and continuous product delivery
- A focus on integration, quality and business value
…all elegantly combined and de-conflicted by release planning.
We’ve not invented very much, we’ve simply put a lot of good ideas from others together into a cohesive framework. We’ve drawn it all as a big picture and developed a website that shows how to get value from all of this stuff. Everything is clickable, everything has content.
The best bit: it’s free! There’s no paywall, there’s no private “full” version, you can just use it or not as you like.
We don’t believe in process zealotry, or in putting academic concerns above clarity and usefulness. HSE is indicative, not prescriptive. You’re already doing it: if you use the big picture to draw red blobs on the bits that aren’t working well, or are missing, in your organisation, then you can use the model to make tangible improvements – immediately.
Using HSE doesn’t replace any of your existing processes; it provides the glue that joins them all together in one simple, elegant and cohesive model.
And if it’s too big, we’ve got a small picture, which is essentially “normal” agile development.
Please share HSE with your friends and colleagues.
I’ve been thinking about architecture a lot recently, but one thing that I often discuss yet have never blogged about, for some odd reason, is intentional vs. emergent software architecture. Some old-fashioned software methods such as waterfall led people into doing a lot of up-front architecture work: they analysed and designed away for ages, producing huge reams of UML and documentation that no one could ever squeeze into their heads, even if they had the patience to read it. This is an example of an intentional architecture – the architecture was intended, planned and deliberate.
Lots of folks hated that way of doing things, as it meant people were writing docs and drawing diagrams instead of making working software – not to mention an all too frequent tendency to over-engineer architecture past the point of usefulness. This led some people to say that we’re better off not trying to do any architecture at all, and just letting it emerge from the work we do developing small, customer-focused requirements (like user stories or similar). OK, so there’d be some rework along the way as we reach a deeper understanding of the system and refactor our emergent architecture, but it’d still be better than the old way of doing large up-front architecture.
So, there seem to be two opposed viewpoints: intentional architecture is best, emergent architecture is best.
For me, neither is true. I’ve seen some really terrible examples of badly over-engineered architectures that crippled a project, and projects that never got past their over-reaching architectural analysis. Equally, I’ve seen products with emergent architecture that had to be entirely re-architected as time went by because their emergent architecture was so wrong it was comical (imagine a software management tool that only supports a single project, with that concept deeply embedded in the architecture).
There’s a scale with intentional architecture on one side and emergent architecture on the other.
Various factors might push us one way or another. The second factor I listed on the right is interesting: if you’ve got a well-known technology and problem domain you can get away with emergent architecture, but equally, if you have a totally unknown technology and problem domain it can be very effective to evolve towards a solution and architecture rather than crystal-ball gaze by creating a (probably wrong) intentional architecture.
Which rather sums up the point I’m trying to make. The purpose of architecture is to shape the solution and address technical risks. Solving the big problems and creating common ways of doing things (lightweight architectural mechanisms) are all good architectural goals, but only if we’re sure of the solution. If we’re not, we’re better off evolving an emergent architecture, at least initially.
I think that the extremes at either end of the scale, as with most extremes, are a bad idea at best and impossible at worst. If you gather a group of people together, give them a backlog and tell them to create a web app, but they’re not allowed to think or communicate about the architecture up front, then you’ll find they all start dividing the problem in different ways and writing in different languages against different server and client frameworks. Hardly a good idea. Of course, at the other end of the scale, believing that we can foresee all of the technical issues and all of the technology and requirements changes that might happen is about as likely as a 12-month Gantt chart being correct after a few levels of cumulative error margin have been combined.
Points are abstract and relative, comparing them across projects is like comparing apples and elephants.
Releases only make sense in their own context, never mind across project timelines, so why would I want to look at a cross-project burnup? Surely such a thing is foolishness…
Well, maybe not: it might be useful as an agile-at-scale measure, a way of looking at the work churn in a department or other high-level unit of an organisation. I’ve been pondering whether there’s a way of aggregating burnup information and point burnage across teams (with distinct, disjoint timelines), and I think there might be.
Ideally I want to be able to show the amount of work planned within an organisational group and progress towards that scope, showing when scope goes up and down (does that ever happen?). Then I want to show progress towards that scope overall; the angle of the progress line could give me an organisational velocity – perhaps I could even add an ideal velocity that would indicate what perfect robots would do if real life never intervened (although that could be dangerous).
First I need a way of normalising points and understanding what 100% of scope means when it can incorporate many projects at different points in their lifecycles. Perhaps a way of doing it is to consider everything in terms of percentages; after all, that’s an easy thing for people to consume. Defining 100% of a scope across various contributing projects is tricky, since it’ll change and be dependent on their releases, continuous flows and changing scope.
A simplistic approach to this is to use a moving baseline: perhaps we can define 100% of a project’s scope as whatever it thinks it will deliver within the time window being considered (the scope of the x-axis) at 15% of its timeline (or whatever).
In the example above, this tells me that work is consistently overplanned – not just in terms of actual velocity, but in terms of idealised capacity as well: the demand is higher than the supply. I think this could be useful for “agiley” portfolio management.
Perhaps I could start establishing a budget-cycle velocity, and start limiting planned work based on empirical evidence. OK, so no project is the same and points aren’t comparable, but the Law of Large Numbers is on my side.
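To make the normalisation idea concrete, here’s a minimal sketch of how it could work. It assumes each project reports a scope baseline (whatever it planned at the ~15% mark, in its own points) plus points completed so far; the function name, data shape and numbers are all illustrative, not taken from any real tool.

```python
# Sketch of cross-project burnup normalisation (illustrative only).
# Each project's scope baseline is whatever it planned at ~15% of its
# timeline; progress is then expressed as a percentage of that baseline,
# so points from different projects are never compared directly.

def normalise_progress(projects):
    """Return overall % of scope complete across projects.

    projects: list of dicts with 'baseline' (points planned at the
    15% mark) and 'done' (points completed so far). Each project
    contributes equally, regardless of its absolute point scale.
    """
    percents = [100.0 * p["done"] / p["baseline"]
                for p in projects if p["baseline"]]
    return sum(percents) / len(percents)

portfolio = [
    {"baseline": 200, "done": 150},   # 75% complete
    {"baseline": 80,  "done": 20},    # 25% complete
    {"baseline": 500, "done": 400},   # 80% complete
]
print(normalise_progress(portfolio))  # 60.0
```

Averaging percentages (rather than summing raw points) is what stops a big project’s point scale from drowning out a small one; whether equal weighting per project is the right call is itself a portfolio decision.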
What do you think? Is this bonkers?
I wasn’t planning on writing a “how long is a piece of string?” post but it’s a question I often get, and something that I’ve played with a bit. The point of architecture is to address the aesthetics of a system, to consider its reusable bits or common forms, the overall shape and nature, the technology it’ll use, the distribution pattern and how it will meet its functional and non-functional requirements.
Of course, in an agile, or indeed post-agile, world we don’t want to spend forever documenting and designing stuff in analysis paralysis. I’ve worked on projects where I had to draw every class in detail in a formal UML tool before I could go and code it. I’m pretty sure this halved my development speed without adding any real value. But I’ve also worked on projects where we’ve drawn some UML on a whiteboard while discussing what we were going to do and how we were going to do it – and that was really valuable.
This makes an architect’s job difficult. Of course, it’s always been hard:
The ideal architect should be a man of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.
Vitruvius ~ 25 BCE
But as well as being a bit of a Renaissance man, an architect also needs to know when enough is enough. I’ve found that I’ve done enough architecture with the team (not to the team) when we collectively feel that we understand the proposed solution: how it’s going to hang together, how it will address the risky bits and how it will meet its requirements.
To do that, we tend to draw a few diagrams and write some words.
First, an architectural profile that gives us an idea of where the complexity is and therefore where the technical and quality risks are.
Second an overview sketch that shows the overall structure, maybe technology, target deployment platforms and major bits.
Third a set of lightweight mechanisms that cover the common ways of doing things or address particularly knotty problems and address some of those risks. These tend to describe the architecture (or mechanism flows) by example rather than aiming for total coverage.
I might add some other stuff to this if the project calls for it, like a data model or a GUI mockup, but generally that’s it 🙂
This post is an extract from the Agile Architecture content from Holistic Software Engineering
When it might be appropriate
- In situations where a lightweight approach to intentional architecture is required
- In situations where high design formality isn’t required
- When a simple approach to architecture analysis is required at a team of teams level before more analysis in contributing teams
- Where a team wants to cut wasteful requirements and architectural “analysis paralysis” without ignoring technical risks
- System of systems development
What is it?
When I look at a potential (or existing) system, I think of its complexity along a few dimensions. They’re not set in stone, and I might add or remove dimensions as the mood, and context, takes me. Doing this early on gives me a feel for the shape of a project’s requirements, architecture and solution. In fact, it also means I can shortcut writing a whole bunch of requirements, acceptance tests, designs and even code and tests.
Here’s an example of thinking about a simple-ish app that does some fairly hefty data processing, needs to do it reasonably quickly but not excessively so, and has got to do some pretty visualisation stuff. Other than that it’s reasonably straightforward.
You might notice that the x-axis is pretty much FURPS with a couple of extras bolted on (I’ll come back to the carefully avoided first dimension in a minute).
The y-axis ranges from no complexity to lots of complexity but is deliberately not labelled (normally one of my pet hates) so we can focus on the relative complexity of these dimensions of the requirements, quality, architecture and therefore solution.
The height of one of these bars helps me shape the architectural approach I’ll take to the project, and which bits and bobs I can reuse from my super bag of reuse.
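A profile like this needs nothing more than relative scores per dimension. As a rough sketch (the dimension names and scores below are my illustrative guesses at a FURPS-style set for the example app, not the post’s exact axes), it can even be captured and rendered as one bar per dimension:

```python
# Minimal text rendering of an architectural profile: one bar per
# dimension, scored relatively (0-10) rather than in absolute units,
# matching the deliberately unlabelled y-axis. Names and scores are
# illustrative, loosely following the example app described above.

profile = {
    "Functionality":  3,
    "Usability":      6,  # pretty visualisation stuff
    "Reliability":    4,
    "Performance":    5,  # reasonably quick, but not excessive
    "Supportability": 2,
    "Data":           8,  # fairly hefty data processing
}

for dimension, complexity in profile.items():
    print(f"{dimension:<15}{'#' * complexity}")
```

The tallest bars are where the technical and quality risks live, which is exactly where the first architectural effort (and any reuse from the super bag) should go.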