Mike MacDonagh's Blog

Somewhere in the overlap between software development, process improvement and psychology

Tag Archives: software

No more Project Managers, bring in the Movie Producers

I was reading some course material recently that was trying to teach people something to do with software development and it was using the same old tired "ATM machine" example. I've worked with hundreds of projects, many in the finance sector, and none of them were anything like an ATM machine. One of the reasons that example is soooo tired is that it's describing commoditised software development: it's something I'd expect to buy off a shelf or go to a specialised vendor for. It's not something I'd put my team of uber |33t haxorz on.

Developing software products is a balance between a creative and a scientific pursuit, so it's a little hard to describe, especially when that software can range from a tiny smartphone app, to a website, to router firmware, to an enterprise HR system (urgh!), to a big data processing system of systems etc. You get the gist.

The thing these types of system have in common with each other is how different they are. And yet the traditional approach to managing these diverse kinds of work has been classical project management with a one-size-fits-all process (I'm looking at Prince2, RUP, Iterative, Agile etc.) or, worse, hiring a large company full of body-shop project managers. For me this is one of the root causes behind large scale IT disasters.

I once had a discussion with the leader of a PMO in a large organisation of thousands of people about the nature of Project Management for software and he assured me that "someone who knows how to manage a bit of work can manage software, it's just a different type of work" and indeed I should "stop thinking of software as special". I've seen this attitude resonate in many organisations and it is, to me, best summed up by the same Head of PMO's statement that "from a project management point of view building software is the same as building a bridge".

Now then. I know a little about software development, but not that much about bridge building (if you exclude my illustrious Lego engineer career when I was 7). If I managed the building of a bridge next door to one built by someone with a track record of bridge building, whose bridge would you drive your family over? Not mine if you've got any sense.

There are so many problems in the software development industry caused by people not understanding it and trying to apply their outdated knowledge at best, or badly aligned metaphors at worst. When people who do understand it start to get the point across (e.g. the early agile movement) they soon start selling certifications and lose their grip.

Many of our “best” metaphors come from manufacturing lines where the same thing is made over and over again. That’s nothing like software.

To me a software project is more similar to making a movie than anything else:

  • It's a unique one-off product (or a remake of a previous version) with new parts working together in new ways
  • Once made we’ll often duplicate and digitally distribute
  • It can cost a lot of money because it can be very complex and needs lots of specialist skills
  • You need a high level plan for how you’re going to make it
  • There’s no guarantee on return
  • What's written down originally bears little resemblance to the finished product
  • We know when it’s good or bad but it’s hard to objectively quantify
  • There’s lots of different types that need different collections of specialist skills
  • Both involve people and time and so are subject to change; they have to be adaptive
  • You can tell when the people making it had fun
  • It’s not feasible that any team member can fit in any role
  • There’s almost always going to be some quality problems
  • You wouldn’t get a movie maker to build your bridge
  • You wouldn’t get a bridge builder to make your movie
  • You don’t make a movie faster by telling the actors to act faster

So, I don't want Project Managers who know how to work a Gantt chart. I want movie producers who know how to work with a team to get the best out of them both technically and artistically.

 

Agile contracts? What about Weighted Fixed Price?

Agile contracts are tricky. In fact software development contracts are tricky. Software development is a non-deterministic creative endeavour; you'd be better off trying to nail down the acceptance criteria for one of Michelangelo's sculptures than for a typical software project. Software is complex.

A few years ago I was involved in putting together a contract model called “Weighted Fixed Price” and recently it occurred to me that I’ve never blogged about it, so I’ve dragged myself away from the source to do so.

What’s wrong with the normal models?

So, fixed price is bad because it only works when you can really accurately define the requirements and acceptance criteria, and the potential risk exposure is tiny. Otherwise you get schedule and cost over-runs, formal change control etc. etc. Of course suppliers always build their risk buffer into the fixed price, so it tends to cost a little more than honest T&M would. Software work is rarely this predictable, which is why there are so many high profile failures.

Time and Materials is bad because it's hard to manage across commercial boundaries and customers can feel that they're paying for nothing, endlessly. Even defining waypoints is difficult due to changes in requirements and over time. Unscrupulous suppliers could just sit back taking the money, delivering as little as possible to avoid gross negligence but nothing more. Scrupulous suppliers could be working hard but be hit by the many and varied risks arising from such complexity, and yet be perceived by their customers as the unscrupulous kind.

Of course there are variations like Capped T&M, Phased Fixed, Phased Capped T&M etc. but they all start getting a bit complicated and hard to manage, not to mention open to gaming.

Then there are models like pay-per-point and other "incentive" schemes that raise over-complication, missing the point and gameability to an art form. Just say "no".

Competition

The real incentive to suppliers to do a good job is follow on work. That means that as a customer you need your suppliers to be in a competitive market, being locked in to individual suppliers leads to captive market behaviours.

Weighted Fixed Price?

In an attempt to put something in the middle between Fixed Price for low risk exposure and T&M/internal for high exposure I put together "Weighted Fixed Price". The idea here is basically to use a Fixed Price model but have suppliers weighted by previous performance, affecting whether they get the business or not.

So if supplier 1 delivers 10% late on Project A, then when we're looking to place Project B for 500k and comparing each supplier's estimates, we'll add 10% to supplier 1's estimate. That means, all other things being equal, we'll choose the other suppliers who typically come in closest to, or even under, their previous estimates. Weighting is transparent to each supplier but not across suppliers.
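To make the comparison concrete, here's a minimal sketch in Python of how the weighting could be applied; the supplier names, project sizes and history figures are invented for illustration and aren't part of any real tooling:

```python
# A minimal sketch of the Weighted Fixed Price comparison described above.
# All names and numbers below are illustrative assumptions.

def overrun_weight(estimated_days, actual_days):
    """Fractional overrun from a supplier's previous project, e.g. 0.10 for 10% late."""
    return (actual_days - estimated_days) / estimated_days

def weighted_estimate(bid_price, weight):
    """Bid price adjusted by the supplier's historical overrun weighting."""
    return bid_price * (1 + weight)

# Supplier 1 delivered Project A 10% late; supplier 2 delivered to its estimate.
history = {
    "supplier_1": overrun_weight(estimated_days=100, actual_days=110),  # +0.10
    "supplier_2": overrun_weight(estimated_days=80, actual_days=80),    # +0.00
}

# Both bid 500k for Project B; the selection is made on the weighted figures.
bids = {"supplier_1": 500_000, "supplier_2": 500_000}
comparison = {name: weighted_estimate(price, history[name]) for name, price in bids.items()}

print(comparison)  # {'supplier_1': 550000.0, 'supplier_2': 500000.0} -> supplier 2 wins
```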

This provides the following pressures:

  • Suppliers don’t want to underestimate, they’ll come in over their initial estimates and adversely affect their weighting and therefore their chance for follow on work
  • Suppliers don’t want to overestimate to artificially lower their weighting or build in risk buffers, they won’t get the work because they’ll cost too much
  • Suppliers don’t want to take on complex, high risk work because they can’t predict the effect on their weighting (more below)

Out-sourcing doesn’t transfer risks to suppliers, because the customer is still the one needing the business benefit and paying for it, especially when things go wrong. The Weighted Fixed Price model is intended to genuinely transfer the complexity risk to suppliers, if they’re willing to take it.

What happens when the supplier finishes early?

They deliver early, watch their improved weighting massively increase their potential for winning future work, and enjoy their fixed price money, meaning that they've increased their profit margin on this job.

What happens when they deliver late?

They deliver late, their weighting increases and they're less likely to win against other suppliers next time. However, if they can predict early in the project that they're going to deliver late, then they can renegotiate with the customer in light of what everyone's learned about the work.

What happens when suppliers don’t want to bid for work because it’s too risky/complex?

Excellent, this means that they’re honestly telling us that the work is too risky to predict. This means we shouldn’t be out-sourcing it in the first place, time to stand up an internal team or supplement with some time-hire contracts if we need extra capacity.

Conclusion

The Weighted Fixed Price model is an evidence-based contract model that can be used in competitive supplier markets for reasonably well understood work with low risk exposure. It incentivises suppliers to estimate accurately and deliver to their estimates, as that affects their opportunities for follow-on work. It incentivises customers not to out-source overly complex, risky work.

It’s not perfect because we’re still trying to contractually define how a sculpture should make us feel but it’s something you can add to the mix of contract models you use.

How to make real progress in a software project

I've not been blogging much this year because I've been crazy busy writing software (more on that soon I hope). As you might expect, I've changed my ways of working more often than process improvement consultants change the name of the process they're selling, but one key thing always seems to drop out whether I'm working on my own, with a team, a team of teams or a globally distributed team.

Do the top thing on the list, whatever it is.

Don’t do anything else until it’s done.

Or in process speak: "Reduce the WIP and clarify the Definition of Done".

Too often I fall into the trap of doing the most interesting thing from the list, which normally means the most complex. And far too often I get carried away with the next interesting thing before finishing something. Both of these lead me to have far too many things in various states of progress all interfering with each other and me with no idea of how well I’m doing. And that’s assuming I actually know what “finished” means.

Software is complex enough without us making it harder with processes, remember the KISS principle?

Reduce waste: Visualise the value stream

A big thing in software process improvement (urgh) is reducing waste.  A great idea, but how do you identify waste?

One powerful method that I use is value stream visualisation. There are a number of ways of getting information about the value stream, ranging from physically walking the chain to simply asking people.

Sounds simple doesn't it? But one of the best sources of information about wasteful processes and ways of working in a team is simply asking the team what they feel they shouldn't be spending time on. By freeing people up to actually do their job they can reduce waste and improve productivity. One method I use is to ask each team member to identify a list of n items that they have to spend time on that they think are either entirely or partially wasteful. Basically a list of their impediments. I then anonymise the list, look for commonality, and the top few become a reasonable target for improvement.

Alternatively actually walking the path of a piece of work through an organisation can really open the eyes to the sheer number of people and unnecessary waiting involved in some wasteful processes…

Anyway, having got some data I find it useful to visualise it in terms of waiting time and doing time. In Lean-speak, cycle time is the time it takes to actually do something; lead time is the time it takes from the request being made to it finally being delivered. In many tasks the lead time far outstrips the cycle time, and visualising it can make that very clear.

For example, a task with a lead time of 5 days might actually be made up of 2 days of waiting time before it gets picked up and the cycle time of 3 days starts.

If you look at all of the tasks in your value chain you might find that many of them look more like this, with very large, wasteful waiting times. There are many reasons why there might be a large waiting time: typically the team doing the tasks is over-subscribed, or a team might be protecting its SLAs, KPIs or other TLAs. These problems can have significant effects when looking at the system as a whole.
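To make the waiting/doing split concrete, here's a small illustrative sketch in Python; the task names and durations are invented, chosen only to loosely echo the sequential example further down:

```python
# A small sketch of lead time vs. cycle time for a sequential value chain.
# Task names and durations are invented for illustration.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    waiting_days: int  # time spent queued before anyone picks the task up
    cycle_days: int    # time spent actually doing the work

    @property
    def lead_days(self) -> int:
        return self.waiting_days + self.cycle_days

# A purely sequential value chain: each task starts only after the previous one finishes.
chain = [
    Task("analysis", waiting_days=4, cycle_days=5),
    Task("build", waiting_days=12, cycle_days=10),
    Task("test", waiting_days=10, cycle_days=8),
    Task("deploy", waiting_days=8, cycle_days=3),
]

total_lead = sum(t.lead_days for t in chain)    # elapsed time end to end
total_cycle = sum(t.cycle_days for t in chain)  # time spent actually working
waste = 1 - total_cycle / total_lead

print(f"{total_cycle} days of work over {total_lead} elapsed days "
      f"-> {waste:.0%} of the elapsed time is waiting")
```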

Even more interesting is looking at how these tasks might fit together in a value stream. If we imagine a business function that involves a request moving through a bunch of tasks across a number of teams that have different waiting and doing times a simple sequence of tasks, with a little buffering between them from the Project Manager to allow for schedule variance naturally, might end up looking like this:

[Figure: sequential value chain]

Here we have 26 days of actual work spread over almost 60 days (12 weeks) of elapsed time, so well over half of the elapsed time is waste. Even if the person planning the work is willing to accept some risk and try to optimise their workflow a bit, without improving the waste inherent in the tasks involved there's still a lot of waste in the system.

[Figure: parallel (overlapped) value chain]

By visualising the value stream in this way we can see straight away (apologies to red/green colour blind folks) that there's a lot of red: a lot of waste. In many cases planners aren't willing to accept the risks inherent in overlapping activities as shown here, or aren't even aware that they can, leading to the more sequential path shown above. The result is a minimum time for a request to get done, based on the impedance of the current value chain: in this case 38 days before we even start thinking about actually doing any work.

Surely that’s not right.

What is “good” software architecture or design?

Architecture is a fine balance between a subtle science and exact art that combines cognitive problem solving, technical direction and expressing abstract views to aid common understanding. Design is similarly an inexact science which is inextricably linked to Architecture. As a result it’s quite hard to define what makes a “good” architecture or design.

There are a great many design and architecture practices, ranging from the purely transformational to the evolutionary and emergent (both manual and automated), but there seems to be some heuristic consistency between these various practices and the approaches of the traditional, agile and post-agile movements.

Initially I saw the simple descriptions of what makes for "good" design from Ron Jeffries and Kent Beck in Extreme Programming. I like the style and direction of that kind of description, so here's my take on the characteristics that make for "good" design or architecture (with a small code sketch after the list):

  • Intentional structure and behaviour
  • Highly modular: consisting of separate services, components, classes, objects or modules
    • Elements are highly cohesive
    • Elements are loosely coupled
    • Elements have low algorithmic complexity
  • Avoids duplication
  • Well described elements: modular elements have simple, expressive, meaningful names, enabling readers to quickly understand their purpose; similar to code cleanliness, we should strive for design cleanliness
  • Runs and passes all defined tests or acceptance criteria
  • Lightweight documentation
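As a tiny illustration of a few of those characteristics (small, cohesive, well-named elements kept loosely coupled through a simple interface), here's a hedged sketch in Python; the domain, an order confirmation, and every name in it are invented for the example:

```python
# A tiny sketch of cohesive, well-named, loosely coupled elements.
# The order-confirmation domain and all names are invented for illustration.

from typing import Protocol

class Notifier(Protocol):
    """The abstraction the business logic depends on - keeps coupling loose."""
    def send(self, recipient: str, message: str) -> None: ...

class EmailNotifier:
    """One cohesive responsibility: delivering a message by email."""
    def send(self, recipient: str, message: str) -> None:
        print(f"emailing {recipient}: {message}")  # stand-in for a real mail call

class OrderConfirmation:
    """One cohesive responsibility: deciding what to tell the customer."""
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier  # depends on the interface, not the implementation

    def confirm(self, customer_email: str, order_id: str) -> None:
        self.notifier.send(customer_email, f"Order {order_id} is confirmed.")

# Swapping the notifier (e.g. for a test double) needs no change to OrderConfirmation.
OrderConfirmation(EmailNotifier()).confirm("jo@example.com", "A-123")
```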

Intentional vs. Emergent Architecture

I've been thinking about architecture a lot recently, but one thing I often discuss yet have never blogged about, for some odd reason, is intentional vs. emergent software architecture. Some old fashioned software methods such as waterfall led people into doing a lot of up-front architecture work: they analysed and designed away for ages, producing huge reams of UML and documentation that no one could ever squeeze into their heads even if they had the patience to read it. This is an example of an intentional architecture: the architecture was intended, planned and deliberate.

Lots of folks hated that way of doing things as it meant people were writing docs and drawing diagrams instead of making working software, not to mention an all too frequent tendency to over-engineer architecture past the point of usefulness. This led to some people saying that we're better off not trying to do any architecture and just letting it emerge from the work we do developing small, customer-focused requirements (like user stories or similar). Ok, so there'd be some rework along the way as we reach a deeper understanding of the system and refactor our emergent architecture, but it'd still be better than the old way of doing large upfront architecture.

So, there seem to be two opposed viewpoints: intentional architecture is best, emergent architecture is best.

For me, neither is true. I've seen some really terrible examples of badly over-engineered architectures that crippled a project, and projects that never got past their over-reaching architectural analysis. Equally I've seen products with emergent architecture that had to be entirely re-architected as time went by because their emergent architecture was so wrong it was comical (imagine a software management tool that only supports a single project, and that concept being deeply embedded in the architecture).

There’s a scale with intentional architecture on one side and emergent architecture on the other.

[Figure: the intentional vs. emergent architecture scale, with influencing factors]

Various factors might push us one way or another… The second factor I listed on the right is interesting: if you've got a well known technology and problem domain you can get away with emergent architecture, but similarly if you have a totally unknown technology and problem domain it can be very effective to evolve towards a solution and architecture rather than crystal-ball gaze by creating a (probably wrong) intentional architecture.

Which rather sums up the point I’m trying to make. The purpose of architecture is to shape the solution and address technical risks. Solving the big problems, creating common ways of doing something (lightweight architectural mechanisms) are all good architectural goals but only if we’re sure of the solution. If we’re not we’re better off evolving an emergent architecture, at least initially.

I think that the extremes at either end of the scale, as with most extremes, are a bad idea at best and impossible at worst. If you gather a group of people together and tell them to create a web app given a backlog but they’re not allowed to think or communicate about the architecture up front then you’ll find they all start dividing the problem in different ways and writing in different languages for different server and client frameworks. Hardly a good idea. Of course on the other end of the scale, believing that we can foresee all of the technical issues, all of the technology and requirements changes that might happen is as likely as a 12 month project plan Gantt chart being correct after a few levels of cumulative error margin have been combined.


Linux GUI Development: Lazarus 1.0 Review update

A while ago I wrote a review of Lazarus 0.9.30 and it came out with 63%, mainly because the installation process was so horrible. Well, now it's almost a year later and v1.0 has been released, so it's only fair I update my comments…

Installation

It's still not exactly shiny but, and it's a huge but, it is simple and it works! :) To install on Linux I had to download 3 .deb files (1 for Lazarus, 1 for the compiler, 1 for the source) and then I could install them by simply doing:

sudo dpkg -i *.deb

It’s worth tidying up any old Lazarus installs first which you can do by running my script here.

Downloading *.deb files and installing them is pretty normal for Linux users, so although it's not a shiny wizard installer or single command, it is simple and most importantly it just works! I'm going to upgrade this from 1/10 to 8/10.

First Impressions

A lot of the little niggles have gone away and it's a much more stable, solid feeling environment. It does still load up with a million windows (Delphi style) and feel a little old fashioned (a lack of wizzy GUI controls and UI customisation, single window mode etc.) but being open source and written in itself I can change these things reasonably easily if I want to. In fact loading up the AnchorDockingDesign package makes the IDE a docked, single window affair and there's plenty of 3rd party controls to download and play with.

Normal multi-window interface for Lazarus
Lazarus as a single window IDE

There’s still no multi-project support and recompiling the IDE to get a new component on the toolbar feels a bit weird even if it does work perfectly well. I’m going to stick with 6/10 for those reasons.

The GUI designer and Code Editor haven’t really changed since my last review and so they stay happily at 9/10.

Language features

The handling of generics has improved since I last looked at Lazarus and all seems to work pretty well now :)

Otherwise nothing much has changed, no  multicast events or garbage collection but those things don’t really slow me down much. That’s why I gave it 6/10.

Despite the lack of some of these things the language has always been and still is quite elegant (other than the nasty globals), it’s got an excellent simple OO implementation and it’s really easy to quickly put an app together – oh yeah and it’s cross platform.

Cross-platform and backwards compatible

Lazarus works on Windows, Linux and Mac. You can write a bit of code and natively compile it on each platform, making for lightning fast code with no external dependencies. Write once, compile anywhere. You can already (partially) compile for Android and other platforms are possible – anywhere the fpc compiler can be ported your code could work. That's pretty impressive and it just works brilliantly.

Similarly all that ancient Delphi code can be loaded up, edited and compiled and largely just works. Brilliant again. For that reason I'm going to bump up the language features to 8/10 (9/10 when Android is working well).

I’ve not tried the feedback process again since v1 so I’m going to leave it at 7/10.

Conclusion

Despite its old fashioned feel in some places, its simplicity is elegant and actually quite powerful. If you know the Object Pascal/Delphi language then it's so fast to create good looking cross platform apps that it'll knock your socks off.

When I pick up a new language I tend to do a challenge: load an XML file into a multi-column listview and create a details form to edit the selected row. In Java, something so apparently simple is a massive pain due to the imposition of the MVC pattern on everything. In C# it's pretty easy, I can go MVC or not. In Mono it's reasonably easy too, although not quite as easy, and not as easy to get away from MVC.

In Lazarus it’s really easy, and it works fast and well on 3 major platforms. That’s a killer feature.

Category Score
Installation 8/10
First Impressions 6/10
GUI Designer 9/10
Code Editor 9/10
Language Features 8/10
Feedback process 7/10
Cross-Platform 10/10
Overall 81% – Easy to use and powerful

In my previous review I said I couldn't really recommend it, and that made me sad. Now I can recommend it; in fact it's quickly becoming my technology of choice for Linux GUI development, because it's quick to put things together and they look good. Just maybe Lazarus is living up to its vision and bringing Delphi back to life, but bigger and better than it was before!

SimpleGit is developed in Lazarus

Scaled Agility: The Project Forum

Name: Project Forum (Middle-out management structure) – Agile at Scale practice

When it might be appropriate

  • In situations where multiple competing stakeholder groups with different agendas are required to work together
  • In situations where multiple product groups need to collaborate on a bigger outcome
  • Where there is a conflict in direction, resource management/ownership or scope between collaborating groups
  • System of systems development

What is it?

The Project Forum is an application of agile philosophy to large project structures. Rather than impose a hierarchy of decision making from the Project Manager downwards, the Project Forum is a virtual team in the middle of all stakeholders.

The Project Forum is a self-organising democratic group that balances competing voices and concerns, owns high level scope and architecture, runs the high level release train and performs integration activities for the product.

Use of the Project Forum practice does not prevent any communication directly between contributing groups; it only provides a vehicle for that conversation when it's relevant for the wider project.

[Figure: from Traditional to Agile at Scale]

The Project Forum practice is an example of Agile at Scale, combining social business practices, technical software practices and ways of working to make a simple way of doing big, complicated bits of work.


Simple software project measures

I’m not a big fan of metrics, measures, charts, reporting and data collection. I’m not terribly impressed by dashboards with 20 little graphs on showing loads of detailed information. When I’m involved in projects I want to know 3 simple things:

  • How quick are we doing stuff?
  • Are we on track or not?
  • Is the stuff good enough quality?

There can be some deep science behind the answers to those questions but at the surface that’s all I want to see.

Organisations need to know that teams are delivering quality products at the right pace to fit the business need. To achieve this goal teams need to be able to demonstrate that their product is of sufficient quality and that they can commit to delivering the required scope within the business time scales. If the project goal may not be achieved then the business or the team need to change something (such as scope, resources or time scales). This feedback mechanism and the open transparent communication of this knowledge is key to the success of agile delivery.

The goal of delivering quality products at the right pace can be measured in many complex ways; however, when designing the Project Forum agile at scale practice we looked at just 3 measures. In fact I should probably call them 2.5 measures, as throughput and release burnup can be considered mutually exclusive (you'll use one or the other depending on whether you're continuous flow or iterative). The most important measure is people's opinions when you go and talk to your team.

[Figure: simple measures dashboard]

Note: in the measures section I often refer to "requirements" as a simple number. This could be a count, a normalised count, magnitude, points, etc.; it doesn't matter what's used so long as it's consistent.
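As a rough sketch of the first two questions (how quick are we going, and are we on track?), here's a minimal Python illustration; the iteration history, remaining scope and plan length are invented numbers, and "requirements" is just the consistent count described in the note above:

```python
# A rough sketch of pace and on-track forecasting from simple counts.
# The iteration data and remaining scope are invented for illustration.

completed_per_iteration = [8, 11, 9, 10]   # requirements finished in each past iteration
remaining_requirements = 37                # requirements still to deliver
iterations_left_in_plan = 3

throughput = sum(completed_per_iteration) / len(completed_per_iteration)  # average pace
iterations_needed = remaining_requirements / throughput

print(f"Pace: ~{throughput:.1f} requirements per iteration")
print(f"Forecast: ~{iterations_needed:.1f} iterations to finish the remaining scope")

if iterations_needed > iterations_left_in_plan:
    # The feedback loop: something has to change - scope, resources or time scales.
    print("Not on track: change scope, resources or time scales")
else:
    print("On track")
```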


The Kung Fu of Software Engineering

I’ve been studying both kung fu and software engineering for many years. I’ve come to realise that they are very similar and that kung fu is a pretty good metaphor for software engineering.

Done right it looks easy, but it’s not

When you watch kung fu in movies, or martial arts in general it makes sense logically, it looks sensible. Attackers punch in one direction and defenders block in another. Sometimes there are tricks and special moves but an observer can see the logic in them, and they are feasible if clever – they are complicated, not complex. However when you actually try and do these moves you find it’s not so simple. It’s not easy to react the right way under pressure when you’ve not done it before. You need to learn muscle memory, improve your fitness, work on your reactions and internalise sometimes counter-intuitive techniques. When you really do it well you use very little energy to do something that looks easy and it becomes easy but other people can’t get it just by watching you.

Both software engineering and kung fu are deceptively difficult, with hidden complexities and complex emergent behaviour. I think it was Grady Booch who said (although I couldn’t find the quote so any mistake is mine) “Software development has, is and always will be, inherently complex”.

Complexity built from simple small techniques

In many kung fu styles you learn basic small movements in repetition, often called a "form" in martial arts. Sil Nim Tao (generally referred to as Siu Nim Tao in Wing Chun kung fu, the RUP of kung fu) translates as "little idea form". Learning this form we learn all of the basic movements and stances that set up the body positions required to get the mechanical advantage in a given situation. Each individual movement tends to be very, very simple.

This kind of information and learning is analogous to the basic software engineering knowledge that we give people. We teach them how to write in a language, idioms, patterns, standard architectures, frameworks, build technologies, iterative patterns etc.

However knowing which techniques apply in which situations, which play well with others and how to put them all together is another level of expertise based on experience. Teaching someone the basics does not make them a master. Software engineering, like kung fu is something you should never stop practising and learning.

It gets more complex when you add more people

Defending yourself from one attacker is a whole different ball game than defending yourself from two attackers. Defending yourself from a group of attackers breaks down the whole mixed metaphor of ballgames, sports and anything else in the vicinity. The complexity of the action increases significantly as you add more people, it’s not just a linear relationship. As more and more people are involved there are emergent behaviours that can’t be predicted from the beginning.

This is true of any activity that multiple people take part in, especially complex activities. In kung fu it means you have multiple attacks, more energy in your attackers which means once you’re tired you’re in trouble. In software it means you have multiple people doing things at the same time with subtly or radically different ideas on what should be done and the best way to do it.

The only ways to reduce this complexity in software engineering are to go up or down. We can either abstract away from the complexity, moving to higher level technologies where possible (typically sacrificing fine control), although such abstraction tends to bring its own complexities, or we can dive down and educate the team (in the broadest sense) on the complexities to try and reach a common understanding.

No plan survives contact with the enemy

Trying to plan in detail all of the possible interactions of a kung fu fight, even against a known assailant is about as pointless as trying to plan all of the details of a software project. There is too much uncertainty, too much complexity and too much emergent behaviour. Above all there is too much change. In both kung fu and software engineering we need to remain agile and responsive to change. Change in the environment, the different things being thrown at us and our own actions.

There’s no magical solution

We're not in the Matrix; we can't download kung fu skills into our heads in seconds. Or software engineering skills. These things take years to learn and will be slightly different for every individual as they tailor the standard wisdom to their particular skills and style.

There's a lot to learn. Personally I sometimes work as a software development coach, helping people structure and plan their work from a business, planning and architecture perspective, mixing in SCM & build techniques, tooling, requirements management, agile and iterative project management, portfolio and business management. In many ways these kinds of things, and others similar to them, can be thought of as different styles of martial art. Just because you're good at one of them doesn't mean that you're good at another, or the next new one that comes along. Of course a certain aptitude helps, and knowledge of one certainly makes others easier to learn, but "experts in everything" are few and far between.

We can’t all be Bruce Lee, Jackie Chan or Jet Li but we get a choice about where we are on the spectrum between being a master and an armchair expert sitting on the sofa watching others do it.

Finally

I believe that there’s an exact art and subtle science to both martial arts and software engineering. We need to practice these skills, we need to be continually learning and improving. We need to learn from other styles and experienced practitioners.

“Kung Fu” is actually translated to “achievement through great effort”

If you’re in the Cheltenham, UK area come and do some kung fu with me at Chi Wai Black Belt Academy.

I’ll leave you to make your own Chuck Norris software jokes…
