MDD

Sunday, 9 September 2012

Parameters of Development

THE PROJECT MANAGEMENT VIEW
PMBOK characterises project management in terms of three variables: Quality, Cost, Time.
In contrast, our view of the management of high-tech development describes the work in terms of four key variables: Quality, Cost, Time and Scope.

Unfortunately the four variables are interdependent in complex ways, correlating both negatively and positively with each other. Have you ever tried finding maxima/minima on a curve? How about on a surface? What about a 4-D surface? And what if the variables have different and incomparable types (money, people, hardware, tools, features, effort, complexity, internal dependencies, importance, priority, time, holidays, etc.)?

COST
"more software projects have gone awry for lack of calendar time than for all other causes combined... but adding manpower to a late software project makes it later."
(Brooks Jr., 1995)
What do people do when a project slips behind schedule? "Add manpower, naturally." (Brooks Jr., 1995) Cost, or its equivalent in resources – money, people, equipment and so on – is a necessary input to any project. Resources include, for example, the salaries of administrators and programmers, office space, computing hardware, software licenses, fast networks, and third-party services. Covering these costs and providing people and resources is a necessary prerequisite to project success but soon produces diminishing returns.

That is, all initiatives may reach a point beyond which the addition of further resources produces a diminishing return or may even degrade the project outcome. Why is this so? In the eponymous chapter of his influential book ‘The Mythical Man-Month’ (1995) Fred Brooks makes the point that the theoretical unit of effort used for estimating and calculating project schedules is "not even approximately true [for] systems programming."
"the man-month as a unit for measuring the size of a job is a dangerous and deceptive myth. It implies that men and months are interchangeable." (Brooks Jr., 1995)
Brooks’ explanation is that the idea of an ideal man-month is only useful as an effort estimation technique if the task is perfectly partitionable and requires no communication whatsoever with others.
Figure: Project success as a function of available resources
In the worst case a task (‘A’) cannot be partitioned at all and will take exactly as long as it takes regardless of how many (or few) people are assigned to it. Partitionable work, in contrast, is work that can be divided evenly among an arbitrary number of workers, thereby allowing the task to be completed in the shortest possible time by adding more workers.
CASE: Imagine delivering and collecting census forms from 1000 households. Census collectors can discuss and plan in advance who will deliver and collect from which households. The activity of planning adds a finite amount of time to the collection.
A single census collector would need to make at least 1,000 trips (waiting for the form to be completed if the residents are at home).
Ten census collectors would need to make at least 100 trips. Additional time may be required to re-coordinate if collectors double up on the same address etc.
If however the task cannot be partitioned perfectly (some citizens aren't home, some need help filling in the form, a census collector is out sick) the collectors need to spend more time communicating and coordinating closely with each other. As the number of collectors increases they reach a point beyond which adding additional workers imposes a communication/coordination overhead that in turn delays the work.
Figure: Completion time versus number of workers (adapted from Brooks Jr., 1995)
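Brooks' argument can be caricatured as a toy model (the constants below are invented, purely for illustration): calendar time is the partitionable work divided among n workers, plus a coordination cost that grows with the n(n-1)/2 pairwise communication paths between them.

```python
# Toy model of Brooks' law (hypothetical constants, for illustration only).

def completion_time(total_work_days, n_workers, coordination_cost=0.05):
    """Estimated calendar days when the work is imperfectly partitionable."""
    comm_paths = n_workers * (n_workers - 1) / 2  # pairwise communication channels
    return total_work_days / n_workers + coordination_cost * comm_paths

# With these (invented) numbers the curve falls at first, bottoms out,
# then rises again: beyond some team size, adding workers delays delivery.
```

Under these assumed constants, ten workers finish far sooner than one, but a team of two hundred takes roughly as long as a single worker: the coordination overhead has swallowed the gain from partitioning.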

Tasks on high tech projects, almost by definition, involve complex interrelationships with other tasks that in turn demand a high degree of intercommunication between workers. Consequently high tech projects reach a point beyond which adding more people will result in the project delivering later (or not at all) rather than earlier. Understanding the degree of interdependence between project tasks in systems development highlights the need for communication in coordinating team members. It suggests that systems development projects are complex and difficult to manage.

TIME
"How does a project get to be a year late?... One day at a time."
(Brooks Jr., 1995)
Time is a crucial dimension of production activity. It turns out that an appropriate time line is a huge enabler for a project. However too aggressive a time target dooms a project to undue haste, unrealistic delivery times and, potentially, failure. Similarly, an excessively long time frame can defocus a team’s attention and starve the project of valuable feedback and checkpoints (figure below).
Figure: Project success as a function of available time.

Time to delivery falls into three categories: too little leading to unrealistic schedules and delivery expectations; too much leading to analysis paralysis or gold plating; and just enough, when work is delivered, often incomplete, but early and usable enough to give useful feedback to both the user and developer.
CASE: In 2002, Mitch Kapor, the creator of Lotus 1-2-3, brought together a group of people to build his dream: a new class of software that would redefine how people kept in touch with each other and managed their time. At the time some thought his OSAF was building an open source replacement for Microsoft Exchange, but Kapor wanted something much more radical: a distributed, mesh-like system that could collect, transform and share generally unstructured data for ideas and calendar items (Rosenberg, 2007). Towards the end of 2008 the project was nearing the end of the financial support that Kapor and others had provided. The paid programmers and contributors gradually moved on, leaving the project in the care of volunteers from the open source community. The software project, code-named Chandler, was funded by charitable contributions amounting to over 7.8 million USD. The project delivered preview versions over 2007/2008 but had finally run out of money, energy and time.
Two practices usefully address the problem of managing time: iterations (or timeboxes) and milestones. Milestones and timeboxing are essential approaches to managing time when project tasks are complexly interrelated and require developers to coordinate and communicate closely with each other. Milestones are large-scale markers for the completion of major stages in a project. The classic waterfall project is broken into stages, a stage-gate model, where the project transitions from one state to another. McConnell (1996) states that milestones are good at providing general direction but are usually too far apart and too coarse-grained to be useful for software project control. He suggests using 'miniature milestones': small, one- or two-day tasks that can be finished and demonstrated.

An iteration or timebox creates an achievable conceptual boundary for the delivery of multiple work processes and is recognized as good practice for software projects (Stapleton, 1997). In recent years the concept of the iteration has been refined to be a release of new useful functionality developed over a one to four week duration that a customer can use and test (Beck, 2000). The key is to arrive at an appropriate timebox for the project. A timebox of several days or weeks can be considered an iteration or incremental delivery stage (see the section on Software Lifecycles). The key value of using milestones and release iterations is that they are opportunities for feedback: clear, unambiguous feedback.
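The timeboxing idea, fixing the deadline and letting scope absorb any overflow, can be sketched as follows (the task names, priorities and estimates are invented for illustration):

```python
# Minimal sketch of timeboxing: fit the highest-priority work into a fixed
# capacity and defer the rest, rather than extending the deadline.

def plan_iteration(backlog, capacity_days):
    """backlog: list of (task, priority, estimate_days); lower number = higher priority."""
    planned, deferred, used = [], [], 0
    for task, priority, estimate in sorted(backlog, key=lambda t: t[1]):
        if used + estimate <= capacity_days:
            planned.append(task)
            used += estimate
        else:
            deferred.append(task)  # scope, not time, absorbs the overflow

    return planned, deferred

# Hypothetical backlog for a ten-day timebox.
backlog = [("reports", 3, 8), ("login", 1, 5), ("search", 2, 4)]
planned, deferred = plan_iteration(backlog, capacity_days=10)
```

The deadline never moves; the lower-priority 'reports' task simply waits for a later iteration, which is exactly the trade timeboxing makes.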

SCOPE
A written statement of project scope and objectives is often a project’s starting point. The scope may describe the problem area being addressed and the necessary and desirable features. Project scope will expand over time to include detailed features (figure below).
Figure: Project success and value creation as a function of scope.

The desired scope or feature list of a project should be clear and concise. Too large a list of features, or feature creep, generates problems of priority and coherence. A concise set of the most crucial features probably has a stronger (positive) influence on the underlying architecture of the product. Furthermore, "less scope makes it possible to deliver better quality" (Beck, 2000). 'I want it all and I want it now' is simply not reasonable. Consequently scope must always be limited or refined in some way. It is essential therefore that feature requests be valued, prioritised in terms of time and importance, and realistically estimated.
"For software development, scope is the most important variable to be aware of. One of the most powerful decisions in project management is eliminating scope. If you actively manage scope, you can provide managers and customers with control of cost, quality, and time." (Beck, 2000)
Requirements will usually appear to have a natural order or priority: what is most important, a prerequisite, a 'must have', a 'nice to have'. MoSCoW rules can be used to help expose priority (Stapleton, 1997):
Mo Must have
S Should have
Co Could have
W Want to have but not this time round
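The MoSCoW categories lend themselves to a simple mechanical ordering of feature requests. A minimal sketch (the feature names below are invented):

```python
# Rank requested features so 'Must have' items come first and 'Want to
# have' items fall naturally to a later release.

MOSCOW_RANK = {"Must": 0, "Should": 1, "Could": 2, "Want": 3}

def prioritise(features):
    """features: list of (name, moscow_category) tuples."""
    return sorted(features, key=lambda f: MOSCOW_RANK[f[1]])

requests = [("export to PDF", "Could"), ("user login", "Must"),
            ("dark mode", "Want"), ("audit log", "Should")]
ordered = prioritise(requests)
```

In practice the hard work is the negotiation that assigns each feature its category; the sort itself is trivial once that is done.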
A perhaps unexpected consequence of product scope statements is the relationship between the scope's features and the eventual system design or architecture over time. This has implications for team structure, implementation architecture, and functional behaviour, among others. The often close mapping between detailed requirements and the end design raises a risk that the user interaction model for the finished product will be strongly linked to, or influenced by, the underlying implementation model or technical architecture of the product. The end result is that a requirements document can overstretch its own 'scope' and verge into a prescription for the eventual technical design.

Consider the following headings from a template for a single software requirement (Pressman, 2000).
Requirements definition: A clear, precise statement of what the user requires the system to do.
Statement of scope: State the goals and objectives of the software.
Functions: Functions or functional description of how information is processed.
Information: Information description (data) depicting data content flow and transformation.
Behaviour: Behaviour or interface description depicting control and change over time within the operating environment.
Validation criteria: What tests demonstrate valid operation and behaviour.
Known constraints: Procedural, legal, environmental, compatibility etc.
QUALITY
"Quality is a terrible control variable"
(Beck, 2000)
Finally, quality! However quality might be defined, we should keep in mind that defining it is a non-trivial exercise. Quality is usually highly contextual, situated in a prevailing culture of what constitutes good or bad quality. In the case of software the product (or service) is not a physical good and so does not wear out in the way that hardware does. Hardware degrades over time due to physical wear and tear, breakdowns and mechanical or physical failure. Software still fails, however, and so it undergoes maintenance work to fix or enhance it over its economic life. For the purposes of a particular project the product’s quality is generally a negotiated concept.
Figure: Project success as a function of quality

Measures of product quality (open bugs, stability, user satisfaction, speed, scalability) may be identified in order to lock down the release date or one of the other variables. But the cost of treating quality as the control variable in order to satisfy a release date is often negative in the long run. Compromising quality affects pride-in-work, erodes customer confidence, and undermines your credibility and reputation. Don’t deliver something you know hasn’t been tested, or that fails its tests; quality should be used to set thresholds and targets, and using it as a control variable undermines and destroys the values we all aspire to.
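Using quality to set thresholds rather than as a control variable can be sketched as a simple release gate (the metric names and limits below are invented, not prescriptive):

```python
# Release gate: ship only when every quality measure clears its target;
# the targets never move to suit a date.

RELEASE_THRESHOLDS = {
    "max_open_severity_1_bugs": 0,
    "max_open_severity_2_bugs": 5,
    "min_crash_free_sessions_pct": 99.0,
}

def ready_to_release(metrics):
    """True only when every measured value clears its threshold."""
    return (metrics["open_severity_1_bugs"] <= RELEASE_THRESHOLDS["max_open_severity_1_bugs"]
            and metrics["open_severity_2_bugs"] <= RELEASE_THRESHOLDS["max_open_severity_2_bugs"]
            and metrics["crash_free_sessions_pct"] >= RELEASE_THRESHOLDS["min_crash_free_sessions_pct"])
```

The design point is that the thresholds are fixed inputs: when a date is at risk, the negotiation happens over scope or time, never by quietly lowering the gate.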

AN AGILE TAKE ON THE ECONOMICS OF DIGITAL MEDIA
Kent Beck proposed a reinterpretation of the conventional wisdom on the increasing cost and complexity of software over time (Beck, 2000). The traditional logic of the increasing cost-of-change and steadily increasing complexity over the life of a software project is the motivation for conducting exhaustive up-front analysis. This also accounts for the conventional wisdom of resisting change at the later stages of a development life cycle.
However Beck suggested that the contrary view is the norm and, further, that accommodating and responding to change is the normal condition for software projects. He claimed that an 'adapt to change' model should instead guide the management of software development, i.e. implement only what is needed now, check then correct, before moving on to the requirement needed next. This process of deliver, correct, deliver, correct continues for the entire life of the system, even after deployment or being put into production (see below). If the product or service is delivered digitally then distribution to customers can be made an almost trivial process. While the work of applying and using updates shifts to the customer, even the update and deployment processes can be gradually streamlined to facilitate customers who choose to update. Further, if the product is delivered as an on-line service then deployment reverts to the development organisation, and a customer's use can be perceived as continuous and unimpeded by regular releases, even when training may be needed to use new functionality.
Figure: The cost of change over time: Traditional vs. Agile view
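The two cost-of-change curves can be caricatured numerically; the multipliers below are invented for illustration and are not measured values:

```python
# Caricature of the two cost-of-change views: the traditional curve grows
# steeply with each phase a defect survives, while the agile view assumes
# practices (tests, refactoring, frequent release) keep the curve nearly flat.

PHASES = ["requirements", "design", "code", "test", "production"]

def traditional_cost(phase, base=1.0, growth=10.0):
    """Cost of a change discovered in the given phase (steep exponential)."""
    return base * growth ** PHASES.index(phase)

def agile_cost(phase, base=1.0, growth=1.5):
    """Same shape of model, but with a much gentler (assumed) growth rate."""
    return base * growth ** PHASES.index(phase)
```

Under these assumed rates a change that surfaces in production costs ten-thousand-fold on the traditional curve but only a few-fold on the flattened one, which is the whole economic argument for adapting to change rather than resisting it.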

Likewise for design complexity: traditional software development invests massive effort in up-front design and requirements analysis, and allows relatively little revision or change during development and no change after deployment (Beck, 1999). This results in design change tapering off over time. The initial design starts out relatively complex as much of the architecture and design is done before coding commences. The architecture remains static while code complexity gradually increases and then ceases to change as the product is finalized (see below).
Figure: The increase in design complexity over time: Traditional vs. Agile view

Because developers invest only as much up-front design and requirements analysis as is necessary to deliver the minimum required functionality first, they ensure that design complexity increases gradually rather than abruptly. The most valuable features are delivered now because that’s when they are needed; other features will be identified and refined as the project proceeds. The architecture and design complexity will appear to grow organically as new requirements are implemented. An additional process termed ‘refactoring’ is also applied. Refactoring anticipates that earlier design elements may need to evolve as the whole project gradually expands. Furthermore, we encounter occasions when product redesign without additional feature development is needed in response to evolving non-functional requirements (stability, scalability, usability, security, etc.). Refactoring as a process also acts as a brake on continuously increasing design complexity. Effective refactoring often produces desirable redesign with the goal of achieving 'design simplicity'.

Traditional projects front-load as much cost (effort in design and requirements gathering) as possible, anticipating that they’ll understand the problem early and select the correct solution. Waterfall exemplifies this approach. Agile approaches usually implement only what is needed now, check-then-correct, before moving on to develop the next requirement and so on; and this continues for the entire life of the software, even after deployment or being put into production.


SUMMARY
  • Cost, +/- it can cost more or it can cost less
  • Time, +/- it can take t, or 2t, or t+n. So how long is a piece of string? How long will the software be used? Is this the first release of many?
  • Quality, an artisan strives for quality, the inherent value and appreciation of things being made, pride in solving a difficult problem, in producing an elegant solution. Quality should not be treated as a variable. Instead quality is an indicator of the success or failure of our ability to balance the dynamic interactions between cost, time and scope.
  • Scope, +/- you can have many or fewer features, the goal here is to go for the features you really need now, leave the other stuff till later. Don’t deliver now what you can put off to a later iteration.