Design, Develop, Create

Wednesday, 28 October 2015

Phase 2 of the DIT Hothouse / IADT Media Cube New Frontiers programme is now open for applications

The New Frontiers Programme is aimed at supporting the establishment and growth of technology- or knowledge-intensive ventures that have the potential to trade internationally and create employment in Ireland. DIT Hothouse delivers the three-phase programme in partnership with IADT.

Phase 1 lasts approximately two months and is part-time in nature, typically entailing two evening sessions per week. The programme facilitators will introduce the group to relevant startup strategies, challenge each participant's business idea and encourage participants to demonstrate some degree of market validation of their proposed product or service.

Phase 2 includes funding support for each participant, subject to assessment of performance and progress. Phase 2 participants, who must be working full-time on their venture, engage in a series of interactive workshops addressing all aspects of the startup process.

Phase 3 is a full-time period of flexible support, helping successful participants towards investment and expansion. Participants have access to hotdesk space within the DIT Hothouse incubation centre during Phase 3.

Phase 2 of the programme is now open for applications.

Link to application site/form
IADT's "Blue Cube"
@MediaCubeIADT @DITHothouse @InventDCU #StartupProgrammes #EI_NewFrontiers #Hothouse #StartupIreland #StartupDublin

Thinking about starting up a venture?

Step 1.
Read through Cowan's book and, where possible, apply the steps it sets out.

Step 2.
Look for external supports. The following are currently taking place and may drive, inspire and motivate you to progress your business idea...


  • NDRC Catalyser is an early stage investment programme. They are hosting an Open Evening in the NDRC on Monday, 9 November 2015 from 18:00 to 19:30. http://bit.ly/1RdPzlz
  • Phase 2 of the DIT Hothouse / IADT Media Cube New Frontiers programme is now open for applications. http://bit.ly/1p7H0vO
  • Enterprise Ireland also hosts the New Frontiers Entrepreneur Development Programme. http://bit.ly/19ypK9c

Tuesday, 27 October 2015

Accenture Digi-Workshop and Guest Lecture

Vicky Godolphin, Head of Digital, Accenture Ireland, presented the guest lecture Tuesday (27 Oct) from 5-6pm in room MH201, Blackrock.
Handover to Vicky Godolphin on Design Led Change #accenturegrads @Accenture_Irl 
"design thinking is central to shaping customer and user experience"
Another version of Damien Newman's design squiggle
And the "Design Wiggle"?
Vicky suggested following up: Fjord's era of living services, "liquid customer expectations"…



Vicky's talk addressed projects Accenture is involved with in the digital space, focusing in particular on the concern and need for ID/UX/UI in digital systems (ID is 'interaction design', UX is 'user experience', and UI is 'user interface').





From 4-5pm Accenture presented graduate recruitment opportunities and met with interested students.
Conor on the industries Accenture concentrates on and Accenture's specialisms

Declan's journey; career paths, flexibility and growth within the organisation

Monday, 26 October 2015

@ucddi for Digital Innovation

Following the change of programme title to MSc Digital Innovation, we will use the @ucddi Twitter account for items of public interest and mindshare for the student body.

Do please tag @ucddi in items on your own Twitter feed that you think would be of interest to Digital Innovation.

Friday, 23 October 2015

Embrace Change !?$!

Software professionals involved in the development and delivery of business-critical systems increasingly refer to Agile methods to describe a rigorous, process-centric suite of practices optimised for adapting to uncertain or changing business needs. The term "extreme programming" was coined and popularised by Kent Beck in the late 1990s to characterise an apposite (and opposite) approach to writing software. It describes a radical departure in our understanding of how to organise software engineering (Beck, 1999). The 'turn to agility' followed extreme programming, with 'agile' serving as a broad umbrella label for Beck's XP and the other approaches that took on this contrary mantle of refocusing software developers on the core values and practices of computer programming in all its many forms (Kruchten, 2007).

Figure: Tracking the rise of popularity of the use of the terms "Extreme Programming" and "Agile Development".

How much of this is hype and how much a radical return to professional practice remains to be seen (Kruchten, 2007).

REFERENCES:
1. Beck, K. (1999) Embracing Change with Extreme Programming.
2. Kruchten, P. (2007) Voyage in the Agile Memeplex

The Rise of Agility

The move to agility involves refocusing on practices and discourse. An introduction to the principles and moves in methods termed ‘agile.’ A review of Extreme Programming (XP), the Agile Manifesto, and Scrum.

"We need to make our software development economically more valuable by spending money more slowly, earning revenue more quickly, and increasing the probable productive lifespan of our project. But most of all we need to increase the options for business decisions."
(Beck, 2000)

AGILE PRACTICE
The late 1990s and early 2000s saw the emergence of so-called Agile or improvisational models, including Extreme Programming (Beck, 2000), Agile Development (Highsmith, 2002) and derivative approaches like Lean Software Development (Poppendieck and Poppendieck, 2003). In 1999 Eric Raymond (Raymond, 1999) posited that there were two diametrically opposite strategies evident for organising and engaging in design work: the Cathedral way and the Bazaar way. The then current radical movements in the software industry, centred on Open Source software, extreme programming (Beck, 2000) and agile methodologies (Highsmith, 2002), were all characterised by frequent iterations, dynamic planning, intensive testing and making releases available regularly. The Open Source and Agile movements represented the Bazaar way of software development. The use of strict lifecycle models or organisational control frameworks (e.g. CMMI, RUP, ISO9001-style frameworks) was emblematic of the Cathedral way of software development.
Figure: Manifesto for Agile Software Development (source: agilemanifesto.org)

Agile methods have since completely transformed practitioner understanding of how to organise software and high-tech development. These practitioner-oriented methods assume software development occurs in response to early, frequent feedback. This in turn requires management commitment to allow and enable plans to evolve continuously. Agile methods have so successfully captured the development imagination that the CMMI and RUP (among others) have attempted to incorporate 'Agility' within their own meta-narratives.

THE AGILE MANIFESTO
In 2001 a group of developers gathered in the Lodge at the Snowbird ski resort in the Wasatch mountains of Utah. Inspired perhaps by Richard Stallman's GNU Manifesto (www.gnu.org, 1985) or Mitch Kapor's Software Design Manifesto of 1990 (for an extract see hci.stanford.edu), seventeen of them put their names to the "Manifesto for Agile Software Development" (agilemanifesto.org). The 'Manifesto' had two significant effects: it launched the 'Agile' turn in software development by offering a broad sketch of principles and a vision for the values of software development, and it played within an industry 'meme' of emancipation, inspired by other idealistic manifestos written for the software industry and beyond. Some of these interventions were intended for industry mindshare or commercial gain; others were ironic or humorous (soa-manifesto.org, www.waterfallmanifesto.org, www.halfarsedagilemanifesto.org, failmanifesto.org, manifesto.softwarecraftsmanship.org, www.relisoft.com, Library Software Manifesto).
Figure: Principles of Agile Software (source: agilemanifesto.org)

EXTREME PROGRAMMING
In 1999 Kent Beck introduced the world to the idea of Extreme Programming (XP) (Beck, 1999). He presented an interesting and compelling vision of the process of programming that appeared to offer substantial, almost radical, benefits if adopted or adapted into organisations.
Beck's paper and subsequent book (Beck, 1999; Beck, 2000) provided explanations and actual cases, and generated such interest that XP has since taken on the appearance of a 'movement' among professional software engineers. Such is the respect in which it is held that management and teams should take evaluation of XP seriously, if only to establish a position on it on a principled basis. The culture of XP is based on four values: Communication, Simplicity, Feedback and Courage. In practical terms the challenges that XP addresses are characterised in terms of the traditional variables of project management: Cost, Time, Quality and Scope. These aspects describe the mind-set and the degrees of freedom you operate within when you work XP.

Distinguishing Characteristics of XP
The defining features of XP were set out by Beck as follows:
Learn from early, tangible feedback from short development and delivery cycles; that is, develop and release often.
Incremental planning is therefore essential if learning from early release is to be fed back into the development cycle. The software development project or plan therefore needs to evolve continuously.
The schedule should be flexible to enable the team to implement new ideas, measure their cost and test their benefit, and then reset the schedule.
All tests should be written before coding. One way of understanding this is that tests are written as code is written and changed as code is changed.
All tests need to be automated and run as often as possible. What this means is that all the tests can be run 'at will', but at the very least with each build of the software (a small sketch of this test-first discipline follows this list).
Communication must become the very heart of development; communication can be many things but is perhaps the key practice of XP.
Design therefore needs to be reviewed continuously. XP rejects the idea that design is a one-shot, up-front "design phase" that ceases prior to coding; nor is there a place for one-dimensional roles like analyst, architect or test engineer. Everyone does some design and the design is continuously evolving.
Finally, all coding needs to be collaborative. Collaborative coding implies that code is subjected to continuous peer review and contributions through XP's emblematic practice of 'pair programming'. The consequence of this is that responsibility is also shared. No one 'owns' the code yet everyone 'owns' the code.
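
To make the test-first idea concrete, here is a minimal sketch in Python using the standard unittest module. The function, its name and its behaviour are invented for illustration; the point is only the ordering: the tests exist first and drive the implementation.

```python
import unittest

# Tests written first: they pin down the behaviour we want before any
# implementation exists (names and rules here are invented for illustration).
class TestPriceWithVat(unittest.TestCase):
    def test_adds_standard_vat(self):
        self.assertEqual(price_with_vat(100.0, rate=0.23), 123.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            price_with_vat(-1.0, rate=0.23)

# The simplest implementation that satisfies the tests; it is written
# (and later refactored) only after the tests above exist.
def price_with_vat(net, rate):
    if net < 0:
        raise ValueError("price must be non-negative")
    return round(net * (1 + rate), 2)

if __name__ == "__main__":
    unittest.main()  # automated: the whole suite runs 'at will', e.g. per build
```

Because the suite is automated it can run with every build, giving the early, tangible feedback the cycle depends on.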

XP'S FOUR VALUES
Communication is done through source, tests and code, in comments and other artifacts. It requires real commitment, and can be in-your-face at times; it can be uncomfortable, unsettling. Communication requires engagement in all spheres: written, verbal, non-verbal, in-code, on walls. Beck's catchcry to 'embrace change' can be read as 'embrace conflict' too.

Simplicity is an aesthetic appreciation of what the code and the design become. As a philosophical principle for design it can be used like a knife to continuously pare the program down to its necessary essentials. Simplicity requires eternal vigilance and is in a sense the driving dynamic behind the practice of 'refactoring'.

The very idea of feedback is integral to an XP style of working. Feedback is supposed to be real-world and is exemplified by Beck's request that you bring the customer into the development environment, having the customer/user sitting and working beside you. Customers are necessary because developers are ignorant; ignorant in this case because you (the developer) are not an expert in the customer's domain and cannot know what the customer really needs, in spite of attempts to codify and capture it in requirements documents, in the same way that the customer does not know about your domain of expertise (coding). (Note: sometimes neither does the customer, but that's where eXtreme succeeds.)

And Courage. Right! But think again: coding, recycling, throwing it away after you have learnt how to do it right; taking an idea, writing and testing some stuff, compiling and debugging, using it, then throwing it away and starting again. Thinking of programming as something miraculous, making the intangible tangible, bringing thoughts to substance in a program; thinking of programming as inherently creative and therefore unknowable a priori. It takes courage, and perhaps a certain amount of luck, to get it right.

THE VARIABLES
XP, like other systems development approaches, attempts to optimise across project cost, time, scope and quality: four variables of mixed types and uncertain (even changing) definition. If correctly modelled, this problem would involve finding the maxima/minima on a complex, perhaps fractal, multi-dimensional surface.

Cost can increase or decrease and depends on the availability of money, people, hardware and tools. Time is generally limited and linked to estimation, other resources, need and so on. Timeframe is also a concern: how long will the software be used for, is maintenance required, are people available, is the product dependent on other product release cycles? Scope should be variable; the customer can of course demand features, but the goal is to deliver the features really needed now and leave the other stuff till later. Don't deliver now what you can put off to a later iteration.

And Quality: the one variable that shouldn't be treated as a variable, and also perhaps the most difficult to define. Quality is often an indicator of the success or failure of our ability to balance the dynamic interactions between cost, time and scope. Compromising quality undermines and destroys the values XP aspires to; a craftsman strives for quality, the inherent value and appreciation of things made for use.

XP PRACTICES AND RULES
Many of these practices and rules have since become accepted as general professional practice on systems development projects. Many projects already apply some of these rules but XP aims to use ALL the rules as they reinforce each other. At least half of this list is accepted as general-programming-best-practice. Evidence of the following practices and rules defines whether a workplace is employing XP or not.
Small Releases: Start programming, check your progress against your goals, correct your direction and continue programming, repeat until you finish!
Metaphor: A simple design analogy or descriptive rubric that describes the software in a language that people from very different backgrounds can share.
Particularly useful for the customer to illustrate the problem in terms that are meaningful, e.g. “the IP packet routing and subnet bridge device” becomes “networking networks”, or “the host server proxy and daemon product” becomes a “portal”, or “the object request broker architecture for distributed applications” becomes “middleware”.
Simple Design: Focus on the real business problem, then design to meet that need, not anticipated needs or “really neat stuff” that some hypothetical future customer might find useful. You’ll probably get the first (and second) one wrong anyway so don’t invest unnecessary physical and intellectual capital into something you may throw away once, twice or three times.
Testing:  You write the code, you write the test (this is a “plural” you, see pair programming to follow).
The test is written before you code; automate the new test and run it often to boost your confidence.
On-site Customer:  Have full-time access to a customer or someone representing a customer working on the team, on-site with you as you code up the features.
Coding Standards:  Each programmer accepts the collective coding standard covering all the source code, making style and formatting conform to a common standard so that the code is accessible to all past, present and future programmers who will touch the project.
No arguments about the position of curly braces, indentation, layout of functions etc.
Pair Programming: Can be roughly approximated as peer review or code review. Recall the Open Source paradigm: "given enough eyeballs, all bugs are shallow". The heart of XP; if you aren't pair programming you aren't doing XP. This (and the Planning Game) is the mark of a true XP project.
Collective Ownership: Has huge cultural and behavioural implications; it implies behaviour and practices akin to open source development. Everyone has access to the codebase, therefore everyone can (in principle) become expert in the architecture and 'theories' of the design of the software. But with ownership comes responsibility: instead of zones of ownership you have zones of greater or lesser expertise; ideally these average out over the team, expanding over time or disappearing when members leave. The ideal outcome is to have sufficient knowledge embedded in the practices, experience and memories of the team members developing and maintaining the product.
Continuous Integration: Requires modern and usually open tools for managing source code, e.g. Subversion, CVS or git.
40 Hour week or Sustainable Pace:  A reasonable expectation on the part of workers and the business.
Refactoring: Coupled with attention to classic patterns (architectural motifs), refactoring is a way to introduce simplicity of design on a continuous basis. Without refactoring, bug fixes and the addition of new features or enhancements lead to increasing cruft, entropy and fragility. Refactoring removes this 'cruft'; it restores order and returns the software to stability (a small before/after sketch follows the figure below).
The Planning Game: The planning game consists of periodic meetings between development and the customer (or the customer's representative) to explore the requirements and track progress. It facilitates incremental planning, reprioritisation and change. The plan and the software evolve continuously and visibly. The Planning Game involves the Business (customer) and Development (supplier) sitting at the same table. The game itself consists of three moves: exploration, commitment and, after an iteration, steer. Within an iteration the game is continued via Stand-up Meetings with the developers and the on-site customer present. The same moves can play out (exploration, commitment and steer), but we can also follow how well the developers implement the 'user stories', verify the value and usability of completed stories, track progress and problems, and attempt recovery.
Figure: The relationship between XP's key practices.
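
As a small illustration of refactoring (our own invented example, not one of Beck's), the sketch below removes duplication and magic numbers without changing behaviour; an automated test suite like the one sketched earlier is what makes the step safe.

```python
# Before: duplicated arithmetic and unexplained magic numbers ('cruft').
def report_line_before(name, hours):
    if hours > 40:
        return name + ": " + str(40 * 10 + (hours - 40) * 15)
    return name + ": " + str(hours * 10)

# After: identical behaviour, with the calculation extracted and constants named.
BASE_RATE, OVERTIME_RATE, STANDARD_WEEK = 10, 15, 40

def weekly_pay(hours):
    overtime = max(hours - STANDARD_WEEK, 0)
    return min(hours, STANDARD_WEEK) * BASE_RATE + overtime * OVERTIME_RATE

def report_line(name, hours):
    return f"{name}: {weekly_pay(hours)}"

assert report_line("Ann", 45) == report_line_before("Ann", 45)  # behaviour preserved
```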

Agile work environments are supported by a host of tools and equipment. The soft infrastructure requires good source control tools, test frameworks, build frameworks, email, news, messaging etc. An effective physical infrastructure includes best-in-class computer workstations, large (and multiple) monitors, accessible desk space, shared workspaces and personal areas, meeting rooms, whiteboards, display walls (information radiators) etc. The social environment has its own qualities: typically a strong esprit de corps, pizza and social outings, sports at lunch and other social/work activities. The key point is that these values, rules, culture and practices support each other; weakness in one is covered by strength in another.

XP'S BENEFITS AND RISKS
XP is a distinctive attempt to shift away from the general situation where decision-making authority resides in management teams and, at key development milestones, in Test/QA teams. It facilitates a shift in power and responsibility towards the engineers themselves in the product teams. As Beck's opening introduction states...
"Extreme Programming turns the conventional software process sideways. Rather than planning, analyzing, and designing for the far-flung future, XP programmers do all of these activities—a little at a time—throughout development."
(Beck, 1999)
One way of reading XP is that it is an attempt by software engineers to exert or reassert a claim of power (autonomy, authority etc.) over the production of software. While Beck claims the effect of XP is limited to the software engineering domain, we can conclude that if engineering teams adopt XP practices (and principles) then they will inevitably impact the other business divisions interacting with those software development teams. The consequence of adopting XP on even one team is therefore to raise the possibility of a 'viral' process occurring within the organisation, one that may alter the locations of control and decision-making throughout the organisation with unpredictable consequences. Like any alteration to the power balance within an organisation, such change is often subversive to current management knowledge and understanding. These are radical claims which, if we follow Beck's advice, may end up leading us to an unknown place, and so a caveat: when attempting to adopt any new business process or methodology, members of the organisation need to carefully consider the potential impacts.

Tuesday, 20 October 2015

The Rise and Fall of Waterfall

Video link https://vimeo.com/18951935

A cartoon by Maxim Dorofeev

Seminar: Standards and quality enablers for high-tech business; 4:00pm, Tuesday 20th October 2015

The NSAI; Ireland's national resource for International Standards, standards processes, Quality and the ISO.
Kieran Cox, Education & Promotion officer with the National Standards Authority of Ireland will present on the NSAI's role in standards management, high-tech product standards and process standards including the ISO 26000 Guidance on social responsibility and ISO 9001 Quality Management Systems.

When: 4:00pm, Tuesday 20th October 2015
Where: Lecture Theatre MH201, UCD Blackrock; UCD Michael Smurfit Graduate School of Business.
Guest Speaker: Kieran Cox, NSAI Education and Promotion Officer
Contact Name: Allen Higgins

All Welcome

Enter NSAI into the UCD Library OneSearch box to bring you to the i2i system. i2i provides catalogue access for UCD Connect accounts. Members of UCD will be able to search for and access standards publications from the ISO, EN and IS (e.g. http://eu.i2.saiglobal.com/management).

Monday, 19 October 2015

Systemic Management Frameworks

An overview of SPI frameworks and standards. Organising organisations for high-tech development

INTRODUCTION: HACKERS V HUMAN WAVES

The apocryphal image of the geeky programmer-hacker has tracked that of software engineering from the dawn of computing. The late-night hacker is the antithesis of management and control and coexists uneasily with the image of professional engineering. It is noteworthy that hacking culture in software, the valuing of artful hacks and elegant code, predates corporate attempts to formalise software design and development in the computer industry with 'human waves' of engineer-programmers working on hulking great mainframes (Kidder, 1981, Levy, 2001).

Organisational structures and methodologies address the challenge of scaling software production, to cope with the demands of increasing scope and size; from small stand-alone programs to highly complex time sharing distributed systems. The software industry has flourished, establishing growing ecosystems of applications and services that have in turn demanded attempts to structure the software engineering process. Software engineering took as its paradigm, aspirationally at least, manufacturing’s process control model (Buxton and Randell, 1970, Naur and Randell, 1969, Shaw, 1996).

In the earlier discussion I suggested that life cycles were concerned with initiating, developing and concluding high-tech projects by managing time, scope, cost and quality. Design methods and techniques are tools for structuring and carrying out design work, using a particular technology or programming environment for example.

How should we organise and govern operational capabilities for many development projects across the wider organisation? The question of scale arises when the number of projects, products and teams exceeds some point. One response is for an organisation to adopt an overarching management framework for governance, to structure its organisational capability (e.g. RUP, ISO9001, CMMI). Consider the case of a single organisation running a number of development projects, each of which may in turn employ (potentially) different life cycle models and various design methods as needed, even within the same project.

Organizational frameworks like RUP and CMMI, influenced by waterfall/SDLC, have gained acceptance where organizational scale, complexity or relatively static but demanding requirements govern the marketplace, for example in enterprise architectures where an industry model-driven architecture applies (Boehm, 2006).

STRUCTURED PROCESS PERSPECTIVE
Systems for managing software development have remained a topic of sustained interest and critique since the genesis of the burgeoning software industry. From humble origins in linear or staged development models (Boehm, 1988, Brooks Jr., 1995 (1987), Royce, 1970) we now encounter a diverse array of software management methods, systems, lifecycles and frameworks. Software process standards trace their genealogy directly from military standards established to manage defence contractors and their outputs in terms of product quality and documentation. The history and interrelations between these standard approaches is complex, international, political and intertwined, touching as it does on issues of national importance (defence systems software quality), commercial interest (licensing and intellectual property rights), academic and professional integrity (knowledge, identity, workplace organisation, prestige, etc). For a detailed history refer to Sheard's quagmire of standards (Sheard, 2001).


The argument for structuring software design and development projects is usually cast in terms of a crisis: a crisis of codebase growth, of code complexity, of excessive rework, of costly repair programmes, of unmanaged and potentially unmanageable work where milestones and baselines constantly slip or simply aren't met. Process frameworks claim to address the challenge by defining the appropriate and necessary management structures for organisational systems supporting software production. Numerous templates and examples of suitable standard operating procedures (SOPs) or practices are offered. Organisations either adopt the standard forms or adapt or replace them with their own versions, satisfying their own needs while anticipating that, even if adapted, the general form and organisation of the process framework remains constant so that the firm can benefit from 'certification' to the relevant standard.

The CMM (Capability Maturity Model) and ISO (International Organization for Standardization) series of quality frameworks have emerged as significant determinants in the trend towards framework unity rather than the quagmire of competing standards. The interrelations between the ‘standard’ approaches, for example ITIL, CMMI, RUP, and ISO9001 will be discussed briefly, to explore their shape and intent.

Notes: CMMI is the Capability Maturity Model Integration. TickIT is one of several software-focused elaborations of the ISO9001 standard environment for quality management. SEI is the Software Engineering Institute at Carnegie Mellon. RUP is the Rational Unified Process™, a trademark of the Rational Software Corporation, which grew out of the object-oriented movement in software development. RUP's iconic figure is the humpback diagram: an activity/time representation of well-executed projects, characterised by typical curves that resemble surfacing humpback whales. A typical entire generic structure for a process framework would include several hundred policy documents, processes and procedures for use by an organisation.


THE CAPABILITY MATURITY MODEL INTEGRATION (CMMI)

The CMM is offered as a solution to the challenges of orderly software design and production. The solution was adopted and claimed proven in the cases of Motorola, Ericsson, the NASA-SEL, HP and Raytheon among others. The intent of the CMM is to force employees to focus on finding design and code faults much earlier in the development process. Evidently the organisations adopting CMM demonstrated the benefits of the process through large reductions in the costs spent re-working software once it had entered the field. The CMM approach also facilitated cost reduction by putatively increasing productivity and improving cost predictability.

CMMI (the CMM ‘Integration’) is described as a framework for ‘improving processes for better products’ (CMMI Product Team, 2006). Like ISO9001 the CMMI is an encompassing approach to marshalling and managing the resources of an organisation to attain (even predict) a range of expected results whenever the organisation embarks on a new project (ibid). The CMMI is considered by many to be the software industry’s best hope for a unified, integrative, adaptive, general purpose framework for software and high tech development (Sheard, 2001). CMMI enjoys a high degree of compatibility with other approaches (e.g. ISO), and is inclusive of diverse methods and lifecycles employed in real firms and organisations (Royce, 2002). However the proliferation of different ‘standard’ process models, even within the same family as happened with CMM, confuses and diverts attention from their common goal of ‘improving processes for better products’. CMMI’s support (or absence of antagonism) for innovative practice is therefore an important requirement, enabling behaviours and techniques which may be more or less aligned with facets of a governing framework.

THE ARCHITECTURE OF CMMI
At the heart of the CMM's process maturity framework is the requirement to establish demonstrable behaviour originating in key process areas. Key processes constitute (or define) what process maturity actually is, and key processes (their description and actual practice) are what enable process capability. Process capability is evidenced by process maturity, and capability is in turn a predictor of process performance. Refer to CMM's Process Performance Indicators for an overview of the maturity framework.

CMM levels.
The CMM architecture defines a measure of organisational maturity that could be applied (potentially) to any kind of production environment. Curtis asks CMM users to identify the cost drivers of their software products and operational processes. Self-assessment involves comparing the definitions and requirements of the CMM architecture against the internal product development process. The self-assessment exercise identifies the points at which the organisation already satisfies the requirements or definitions at levels 1, 2, 3, 4 and 5.

UNDERLYING THEORY OF CMMI
The capability maturity model commences by assuming an organisation to be an unknown quantity; at the very least, any organisation may be thought to start in a raw state. At the entry point, level 1 organisations work in an ad hoc way, inconsistently, using undisciplined processes. The move to level 2 is characterised by management addressing team and project processes: management imposes discipline and stabilises behaviour by establishing defined processes and tracking outcomes. The next stage (level 3) replicates successful project and team processes throughout the entire organisation: a uniform global infrastructure of systems and subsystems that interface seamlessly to deliver the organisation's output. The expectation of organisational uniformity is undoubtedly one of the most challenging and complex aspects of any organisational change project, but once achieved the focus of the level 4 and level 5 transitions reverses, shifting back to teams (level 4) and then once more to individuals (level 5).

At the highest level of CMM, organisational processes and change control become internalised within each individual; in a real sense they 'are CMM.' Trust then increases as all stakeholders recognise the benefits of reduced process variation (fewer failures and project overruns) until eventually everyone takes individual responsibility to continuously improve their own personal process. The CMM approach is not a quick fix; indeed it is neither quick nor can it simply be applied to fix an organisation as if it were a simple recipe for success. Groups and individuals need the time, resources and commitment to invest in developing their own solutions to the problems they face.

The CMM architecture is a typology of organisational caricatures, from the remedial clustered at level 1 through the most refined at level 5. Any organisation may exhibit features from all levels, for example, an overall level 1 organisation may still evince genuine attempts to introduce level 4 and 5 processes and behaviour. However in the main the CMM levels are presented as a developmental path; organisations evolve from level 1 to 5 with the benefit of management vision and commitment.

Criticisms of CMM assessment point to its very generality and comprehensiveness, which may confuse its users. The situation is complicated further if the user organisation is involved in software production internally while configuring and using externally supplied software and services. Promoters of the CMM approach take pains to contrast it with, and prove its superiority against, other Software Process Improvement (SPI) systems including IDEAL, ISO9000-3 and ISO-SPICE. Unfortunately, CMM, like other SPI change programmes, is costly in terms of resources and focus, and SPIs often fail to fully achieve their objectives. SPI adoption is never a simple application of a particular suite of documented systems and formalised interactions.

THE RATIONAL UNIFIED PROCESS

The following presents an overview of the Rational Unified Process™ (RUP) (Rational Software Corporation, 1999, Rational Software Corporation, 1998). The RUP has been hugely successful in large-scale firms (e.g. Ericsson, IBM) where the use of RUP-style systems and processes has been a key success factor in bringing complex IT production systems under control. RUP complements a suite of software tools (Rational's ClearCase, Purify and Coverage), so adopting RUP can leverage benefits from these 'best in class' tools to support organisational goals.

RUP Project Flow: The emblematic image of the RUP is the ‘humpback whale’ diagram, a representation of activity effort and transition over the entire lifetime of a product from development and delivery through to the maintenance process.

RUP Project of Projects: The RUP employs the idea of four main phases (inception, elaboration, construction and transition) each of which can be subdivided into a number of iterations (e.g. construction #1, construction #2, …). Evidence of progress can be measured for a project in terms of information sets which are delivered in each phase and accumulate over the entire lifetime of the project.

RUP Information Sets: The RUP Project Model is based on iterations. Each iteration is a time-box made up of different coordinated activities that begin with agreed inputs and end with the delivery of a range of outputs that resolve or address all inputs. The combined outputs comprise a product release. As the iteration progresses the project builds up a fuller set of documentation or ‘information sets’ that describe and control the product.

In a sense the product release is the information set: a set of artifacts that is not wholly defined by the software but does eventually include the software itself.

UNPACKING THE PHASE/ITERATION
At the most granular level the work of an iteration encapsulates the core activities of the SDLC. The business activities carried out in each phase are distinctive and can be characterized loosely as: planning and requirements for inception; design and coding for elaboration; design, coding and test for construction; deployment and coding for transition. Iterations can be thought of as miniature waterfall development models and a phase may be made up of many iterations. The conclusion of each phase is determined when “the defined objectives have been met, the defined artefacts have been completed and the various models have been updated.” (Rational Software Corporation, 1999) The major elements of the entire Rational Unified Process include a range of document types (requirements, designs, plans etc.).

CHALLENGES OF THE RUP
RUP phases are in turn somewhat like a larger aggregate waterfall model but like Boehm’s spiral model of software development (Boehm, 1988) the RUP provides for numerous check points and milestones against which the progress of product development can be assessed.
The RUP was always developed as a commercial package to be used with Rational Software Corporation’s supporting software design and management tool suite. The implication is that the full benefits of the approach occur when using the full Rational Enterprise Suite of products.

Overall, the RUP is an approach to risk-minimised software product development, delivery and maintenance. It is a flexible yet rigorous way to plan, staff, design, develop, test and release software in uncertain or challenging commercial environments. One of the RUP's limitations is that its full benefits seem only to be available with the proprietary offering: when the organisation employs the complete set of commercial tools, methodologies and processes provided in the Rational Enterprise Suite of products. Like the CMMI, the RUP is proprietary and essentially closed to all but the largest organisations, those prepared to invest significant resources in its adoption and further development.

THE ISO 9001 PROCESS MODEL

The ISO 9001 quality management system (ISO, 2005) is another internationally recognised framework for managing and improving an organisation's standard operating procedures (SOPs). It is an open-ended approach applied in industries as diverse as healthcare, food manufacturing, logistics and software development. The system achieves two goals: capturing an organisation's processes or SOPs through formal documentation, and establishing a process for refining both those processes and the 'process of managing the processes'.

PDCA learning cycle of an ISO9001 Quality Management System: ISO 9001's PDCA approach is employed to enable organisational action; it is ISO9001's central metaphor for engaging in a process of managing organisational performance. PDCA is a cycle of analysis, decision-making, action and verification through feedback. The Plan-Do-Check-Act (PDCA) cycle represents the underlying 'theory' of ISO's perspective on production and, in this case, software quality systems.

The plan-do-check-act (PDCA) cycle was popularised in the quality industry by W. Edwards Deming. PDCA establishes an imperative to monitor, and so surveillance becomes characteristic of its ongoing cyclic activities of continual process performance improvement. ISO9001 is best likened to a generic organisational template that must be tailored to accommodate the specific circumstances of an industry and firm. Various QMS offerings conform with ISO9001; for example, TickIT is a specialised variant of the core ISO standard, produced in the UK to address the field of software development. In this case prior compliance with the ISO standard is a necessary precursor to employing TickIT. Adoption of the 'Deming Cycle', as it has become known, is claimed to lead to a 'Deming chain reaction': a cascading effect where improving quality improves productivity and decreases costs, after which market dynamics lead to decreasing prices. Increasing market size is followed by growth in the firm's business, thus producing exceptional returns on investment (a toy sketch of the PDCA loop follows).
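
As a toy illustration only (our own sketch, not an ISO artifact), the PDCA loop can be caricatured in a few lines of Python: each cycle plans a target, applies a change, checks the measured result against the plan, and acts on the outcome. All the numbers are invented.

```python
import random

def pdca(defect_rate, cycles=3):
    """Caricature of Plan-Do-Check-Act applied to a defect rate."""
    for i in range(cycles):
        target = defect_rate * 0.9                # Plan: aim for a 10% cut
        defect_rate *= random.uniform(0.8, 1.05)  # Do: apply a process change
        met = defect_rate <= target               # Check: verify via feedback
        # Act: standardise the change if it worked, otherwise re-plan.
        print(f"cycle {i}: rate {defect_rate:.2f} ->",
              "standardise" if met else "re-plan")
    return defect_rate

pdca(10.0)
```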

EXAMPLE:
ISO9001 For Success in Software Development and Support (FSSDS)
(Sanders and Curran, 1994)
Let's look at one ISO 9001 compliant framework for software development, developed by the Centre for Software Engineering at Dublin City University (Sanders and Curran, 1994). This approach can enable successful compliance with the ISO 9001-3 standard, the precursor to ISO 9001:2000 (which combined ISO 9001:1994, ISO 9002:1994 and ISO 9003:1994). Software Process Improvement (SPI) initiatives challenge an as-is situation in terms of the costs of poor quality product (high defects, poor feature matches, excessive effort spent fixing and maintaining etc.). SPI initiatives offer a remedy in the form of best practice: concrete processes and procedures. The FSSDS approach comes with a full suite of document templates for planning, design, reports and procedures. In addition the FSSDS provides a listing of clearly documented key practices and relates them to associated policy, process or procedure documents (the entire generic structure includes 276 policies, processes and procedures for use by an organisation). Finally the whole framework is cross-referenced against the relevant sections of ISO 9001, demonstrating that the FSSDS satisfies the requirements of ISO 9001 QMSs.

The stage-gate life cycle element: The FSSDS was designed to address the distinctive environment for software development and maintenance by emphasising early defect detection and prevention. It attains this by institutionalising gating on all activity transitions. Each phase of the lifecycle is modelled as an input-output system. In essence, the stage-gate mechanism reverts the organisation of software development to a traditional waterfall/SDLC style where each project enters and transitions between separate phases over the project lifetime.

ISO style culture change model for introducing a QMS: The FSSDS approach to implementing an ISO9001-compliant quality management system (QMS) for software development and maintenance is predicated on two main aspects: a carefully managed cultural change programme, and clear control of the inputs and outputs of defined lifecycle activities.

The cultural change programme employs the PDCA model directly and the key to a successful cultural change programme is the clear demonstration of managerial leadership with top management buy-in. Introduction of SPI process change is managed through the establishment of an SPI committee. This group becomes responsible for developing, implementing, reviewing and evaluating the structures and systems of the QMS (e.g. policies, processes & procedures). Once commenced the quality programme should become self-sustaining as evidenced by regular internal audits, management reviews and incremental (documented) improvements of technical aspects of the QMS itself.
SPI initiatives are fraught, uncertain and risky projects. SPI is above all a 'cultural discipline'. Transforming organisational culture requires strong leadership, management and employee buy-in, the understanding of customers, time to learn from mistakes and the determination to persist. Organisational change involves transformation of the inside of the organisation, its outputs and its interactions with the outside world.



Monday, 12 October 2015

Examples of requirements

It is interesting to see how much a design concept and/or existing context constrains the solution to a general need. It is interesting to observe too how each site either confronts or avoids detailing that existing context in a substantive manner.
  1. Syracuse University (2011) Library Blog (link) and current state of their website (here)
  2. Mesh; an FP6 EU funded research and development project (link)
  3. The Social Travel App, another EU R&D project (link)
The design consulting firm founded by Alan Cooper provides a lot of guidance on applying various methods for gathering requirements and feedback. Three recent articles on usability testing, field studies and tree testing are below.

Managing Knowledge: It’s all in people’s heads!

"It’s all in people’s heads!"
"Software is developed or engineered, it is not manufactured in the classical sense"
(Pressman, 2000)
"Someone had to spend a hundred million to put that knowledge in my head. It didn’t come free."(Curtis et al., 1988)
In an attempt to address the question of how to structure and organise high-tech development we needed to review assumptions surrounding the idea of systems. It is apparent that, in spite of decades of computer-based development and innovation, the development of high-tech systems remains something of a mix of art and science. The question remains: how should we (as product or technology managers) go about fact-finding, specification, solution design, implementation and maintenance in the current era? Few social or organisational laws hold in all but the most general sense, and even those laws are subject to change due to the flux of generations and capabilities in the technological environment. Opportunities are brief, specific skills and knowledge go out of date; markets and platforms rise and dominate quickly only to be overtaken by other, newer systems.

Evidently knowledge is a crucial resource, and one that it seems necessary to understand in itself and to manage somehow. Theories of knowledge may therefore offer a way of getting to grips with how to manage it in teams or organisational settings. Epistemology is the study of theories of knowledge: of understanding how we know what we know and, further, of understanding how such knowledge is justified, if at all.

Object Mediated Knowledge and Learning

The Russian psychologist Vygotsky (developmental psychology) considered all learning, or as he put it 'development,' to involve mediating interaction and transitional objects before knowledge can be internalised (Kaptelinin and Nardi, 2006, p. 48). The interaction between our 'self' and an object is, in a fundamental way, a physical experience. Different objects mediate the learning experience and knowledge in fundamentally different ways. For example a spreadsheet is not a direct substitute for a physically printed list of names. Our experience learning, using and working with each is different. This is not to say that one form is more useful than another; it is just very different. The phenomena we experience with a list on a wall chart are quite different to those when a list is presented in a spreadsheet or via slides. The quality of learning, knowledge and understanding will always be subtly different (not necessarily better or worse) depending on the objects and interactions we bring into the process of learning.

Figure: Zone of proximal development

Vygotsky's theory of knowledge and learning mediated by objects was adapted in the 1980s and applied as a method for creating thoughtful and reflective descriptions of (dynamic) human activity systems. Activity Theory (AT), as it was termed, is grounded in this theory of object-mediated learning, and it balances the view of a system as an ensemble of technical artifacts with the view that it is also a social construction. Simply put, AT offers a balanced presentation of objects and tools, social structure (community, rules, division of labour) and the role of the individual (subject), all with the goal of effecting change in the work: developing a new object to achieve some desired outcome.

Managing development; what really happens

Even as systems development processes have expanded to include 'human factors' through participative design or user-centred approaches (Bannon, 1998, Ehn, 1990), they barely acknowledge or recognise the practical accomplishment of the systems development process itself as a social and cooperative work form (Mackenzie, 2005, Ó Riain, 2000). While designers are involved in making others' behind-the-scenes work available, intervening with well-designed software, their own design interactions are relatively poorly understood.

What is it that distinguishes the practice of software systems development from other creative professions? Is the act and process of producing software distinctive from other forms of production? A recent sample of 28 software-producing or software-servicing firms in Ireland covered organisations ranging from 20 individuals to thousands (from SMEs to government agencies and multinational corporations); team sizes ranged from 3 to around 30 (typically 5 to 10) and codebase sizes ranged from 100K to several million LOC (see below).
Table: Product size versus team size; Ireland, 2010 (Higgins et al, unpublished).

To put this in context, consider an academic research conference with around 100 papers being presented. Each paper consists of approximately 6,000 words, equating to perhaps 800 lines of text, mapping down to about 600 sentences or fact statements. Let us assume that the knowledge content of the collected conference proceedings is roughly 100 papers x 600 fact statements per paper, equating to 60,000 lines-of-code equivalent. Contrast this with a fairly typical team of anywhere from 5 to 30 software engineers managing the active compilation and ongoing design of a software product of 100K LOC. That is the software equivalent of somewhere between 150 and 200 conference papers. Basically, small software development teams are involved in a collective intellectual exercise commensurate with the output of 150+ academic researchers.
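
The back-of-envelope arithmetic can be checked in a couple of lines (the figures are the rough assumptions from the paragraph above, not measurements):

```python
statements_per_paper = 600   # ~6,000 words -> ~800 lines -> ~600 statements
papers = 100

print(papers * statements_per_paper)    # 60,000 'lines' for the proceedings
print(100_000 / statements_per_paper)   # ~167 paper-equivalents in 100K LOC
```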

To put it in another context, have a look at the codebase size visualisation at informationisbeautiful.net. Interestingly, codebase sizes begin to approach the number of lines of decoded gene DNA in organisms...

What developers actually do
"Developers solve problems at all levels between the ‘whole project’ level and the ‘one line of code’ level... we break complex problems into simpler sub problems. We use this reductionist approach to deal with problems that are too large to handle otherwise."
(Raccoon, 1995)
Raccoon proposes a simple cycle of problem definition, development, solution and stability as a motif for the entire business of producing software (below). Status quos can be thought of as known stable points: perhaps something that compiles without errors while also implementing some subset of what is eventually required. He posits that the activities of the problem-solving loop actually shape the large-scale structure of development work, regardless of the espoused methodology or lifecycle. This is based on a constitutive connection between "one line of code and the entire project" (Raccoon, 1995). Raccoon argues for the analogical use of chaos theory as an underlying theory for systems development projects: simple self-similar processes giving rise to complex emergent behaviour.

In practice developers find themselves working within fractal problem-solving loops, in which they may work at all levels, from the 'whole program' level right the way through to the 'one line of code' level. Developers use all their skills at all times throughout the project: skills encompassing requirements analysis, design, coding and maintenance.
Raccoon describes the process as essentially a focus on lines of code, on interpreting and problem solving. His insight is that the work of development revolves around problem solving centred on code and data. The 'line of code' means actual software code but can include a requirements statement or project tasks. As each 'line of code' changes, its impact propagates from one end of a project to the other. This brings attention back to the end-to-end nature of design and code and what it addresses. There is a continuum from technical foundations (code, architectures, frameworks, 3rd party tools) through to user-visible screens and performance. One consequence is to highlight the necessity for users, developers and technology to converge and agree for any status quo to be achieved or reached. The other consequence is that any status quo is simply an intermediate point, one of many status quos (see below).
Figure: Relationships between risk/stability activities (adapted from Raccoon, 1995)
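
A toy rendering of the loop's fractal character (our own sketch, not code from Raccoon's paper): the same define/develop/solve/stabilise cycle recurs at every level of decomposition, from 'whole project' down to 'one line of code'. The decomposition table is hypothetical, standing in for a developer's judgement.

```python
# Hypothetical decomposition table; leaves are the smallest units of work.
SUBPROBLEMS = {
    "whole project": ["feature A", "bug fix B"],
    "feature A": ["one line of code"],
}

def solve(problem, depth=0):
    pad = "  " * depth
    print(pad + "define: " + problem)
    for sub in SUBPROBLEMS.get(problem, []):  # develop: break into subproblems
        solve(sub, depth + 1)                 # the same loop, one level down
    print(pad + "solve/stabilise: status quo reached for " + repr(problem))

solve("whole project")
```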

A consequence of Raccoon's analysis is the claim that traditional ideas about the connections between phases of a lifecycle like waterfall or spiral are not true depictions of the state of a project (see below). He provides four life cycle archetypes ranging from the 'simple but rigid' to the 'complex but adaptable'. The waterfall or SDLC are termed sequential; modified waterfall is overlapping; spiral is lingering; chaotic is all-at-once or code-and-fix.
Figure: Four life cycle archetypes; the relationships between analysis, design, implementation and maintenance (from Raccoon, 1995)

"Developers interleave design, implementation and testing as they work on different pieces of the project. It is not possible to isolate these phases from each other…each phase involves setting goals, carrying out the goals, and maintaining the results."
(Raccoon, 1995)
He concludes that life cycles are just our own perspectives "on the state of a project rather than any essential truth about the state of the project" (Raccoon, 1995). The basic problem-solving loop (above) is self-evident, but it is also intended to apply to the activities at the whole-program level, component level, function level and 'one line of code' level. Raccoon sees this as an intrinsic process happening regardless of the lifecycle phase we consider the project to be in at any time. The same basic learning loop applies during the analysis and requirements phase, design phase, implementation phase and maintenance. Indeed 'phase means perspective'; it is not a predetermined state.

Most lifecycles are simplifications of the real-world situation, although they may not acknowledge this as such; the truth is that real development projects are complex, messy and unstable. It turns out that the very act of programming is in fact a signature or leitmotif for the whole process of developing software (Raccoon, 1995). While this is very much a developer's view of development projects, rather than a user's or a manager's, the perspective aims to give a more truthful description of their roles too.

Are methods necessary?

How do systems analysis and design of software take place in practice? What is the role, if any, of methodologies in the organisation of development? Nandhakumar and Avison argue that
"traditional IS development methodologies are treated primarily as a necessary fiction to present an image of control or to provide a symbolic status, and are too mechanistic to be of much use in the detailed, day-to-day organization of systems developers' activities." (Nandhakumar & Avison, 1999)

The structures of organisation arise through experiences that solidify into practices over time.
"ad hoc development practices became institutionalized, forming structural properties of the team. The team members were therefore following these practices reflexively in their day-to-day activities." (Nandhakumar & Avison, 1999: 182)
The structure of methodologies is shown to be demonstrably broken across time, in the same way that the structure of life cycles has been shown to be a fiction, not apparent in fact (Guindon, 1990). But if methods and life cycles are known to be fictions, why are they employed? Are methods necessary, and if so who uses them and why?

REFERENCES
Guindon, R. (1990) Designing the Design Process: Exploiting Opportunistic Thoughts. Human-Computer Interaction, 5, 305-344.
Nandhakumar, J. & Avison, D. E. (1999) The fiction of methodological development: a field study of information systems development. Information Technology & People, 12, 176-186.

Friday, 9 October 2015

Mapping the maze...

Is this structure an intrinsically impractical building? You might consider the heart of University College Dublin's central campus, the so-called UCD Newman Joyce Precinct, to be Ireland's own minor example of architecture that drives its users mad.

Would you like to experience what it feels like to be a lab rat in a maze? Try navigating the Newman building's maze of identical internal corridors, with orthogonal carbon-copy junctions and stairwells and no view of the outside world. Any building that needs trackways marking its interior is a clear indication that the architects have failed to design for human (humane?) use. The corridors of the Newman are painted with colour-coded tracks starting from an obscure services dock on the first floor. These coloured lines snake through the complex to ambiguous destinations like 'section EH'. The Joyce Library is no better.

That said, I have a grudging respect, possibly admiration, for the structure and its designers for demonstrating that objects like this may exert agency and confound us, to such effect that even brilliant academics get lost. It reminds me that the physical, corporeal world can still kick back at us humans.

Ref: "Getting Lost in Buildings" (link) Carlson et al. (2010)

Wednesday, 7 October 2015

Disproving the Mythical Man-Month With DevOps?

In the article "What CIOs need to know about microservices and DevOps", in which Carla Rudder interviews Anders Wallgren (link) on the topic of so-called 'micro-services', Wallgren states...
"you can actually start to beat the Mythical Man-Month (i.e. the long-standing theory that adding people to a project lowers, rather than increases, velocity)."
Hmmm, by 'micro-services' I presume he means a software system architecture made up of atomic, independent elements? How tenable is that argument, really? Is it widely applicable, or even achievable in reality? And what, if anything, does the DevOps specialism, with its processes and operations, contribute to his claim? Isn't DevOps culture and attitude about uptime, stability, and scalable, secure deployments? Should DevOps be the place for the white heat of creative design and development?
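For what it's worth, the arithmetic behind the Mythical Man-Month is easy to state, and it shows what the micro-services claim has to deliver. Brooks observed that pairwise communication paths grow as n(n-1)/2, so the back-of-envelope sketch below (my numbers, purely illustrative, not Wallgren's) compares one large team against several small service teams that, optimistically, coordinate only through API contracts.

# A back-of-envelope sketch (my numbers, not Wallgren's) of Brooks'
# law: intercommunication overhead grows as n(n-1)/2, so partitioning
# into small service teams helps only if the services really are
# independent.

def channels(n: int) -> int:
    """Pairwise communication paths among n people."""
    return n * (n - 1) // 2

print(channels(24))              # one 24-person monolith team -> 276 channels

teams, size = 6, 4               # six micro-service teams of four
print(teams * channels(size))    # -> 36 intra-team channels, plus API contracts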

Mature organisations have development environments that tend towards an ideal arrangement, i.e. regular (daily/weekly) stable builds with customer releases occurring quarterly or less frequently. Much of the open source world achieves this level of responsiveness through a flexible mix of iterative/agile development plus staged releases. However, releasability is not the same as being able to ramp up new feature development at scale.

Note:
Ref: for counter-arguments, see the balanced and sane (for Slashdot) discussion on Slashdot (link)

Sunday, 4 October 2015

Life Cycle Concept

A review of the origin and evolution of the life cycle concept as applied to software engineering, pointing out the proliferation and diversity of life cycles and methods addressing high-tech systems development.

"Life is the state of ceaseless change and functional activity peculiar to organized matter."
(Fowler and Fowler, 1929)
The terms ‘development cycle’ and ‘life cycle’ are synonymous in engineering and have been in common use in the fields of computing and software since the 1960s. The idea of a ‘life cycle’ is itself a relatively modern concept. Its published use commences in the mid-1800s, drawing on the idea of ‘cycles’ with connotations of lunar, diurnal and mechanical recurrence, of events recurring in chronological sequence. The earliest references to ‘life cycle’ use it to refer to the entirety of a biological life described scientifically: an entire life (birth, maturation, death) rendered as an abstract sequence of events and transformations. From the early 1900s the term appears in sociology, applied to the individual’s life history, and from the 1950s onwards it is applied to businesses and manufactured products (OED, 2010). Implicit within the definition of a life is the idea of a beginning and an ending, birth and death, the span of a life, or in the case of technology, the duration of a project or product. Ultimately the idea of the life cycle is a metaphorical description of projects over relatively long durations. It links activity with the passage of time, but emphasises the long-term processes by which activities become transformed into the cohering thing that projects produce, be that a business model, a technology, a business process, a service, etc.

For the purpose of the following discussion I focus on life cycles as a subset of management systems for development and so propose the following definition: a life cycle is a process that describes how to start and finish a high-tech project. Applying this definition, an organization may employ different life cycles for different projects. Life cycles simply provide local guidance on how to organize a development project team and (should) generally steer clear of prescribing the operational structure of the wider organization.

LIFE CYCLE MODELS IN SYSTEMS DEVELOPMENT
The life cycle concept is much used (perhaps overused) and much maligned in the informatics discipline, taking informatics to encompass Information Systems, Computer Science, Software Engineering, programming, and computing more generally. The Systems Development Life Cycle has been described as harmful because it is a blinkered (though comforting) world-view that imposes a project management structure on system development (McCracken and Jackson, 1982). Furthermore, there are a number of common and well-known circumstances in which a unified staged development cycle (SDLC) is known to fail. In these situations alternative approaches like mock-ups, rapid prototypes, and end-user development are indicated (Gladden, 1982, McCracken and Jackson, 1982).

In 1991 Peter DeGrace and Leslie Stahl created an inventory of the life cycles and software engineering methods then in use. Fully half of all approaches were the waterfall model in one guise or another; consequently they spent half of their time critiquing waterfall (DeGrace and Stahl, 1991). But what are we talking about when we use the term ‘life cycle’? The following table summarizes DeGrace & Stahl and complements their catalogue by including more recent inventories of software engineering management/design concepts. These concepts are variously labeled as life cycles, methodologies, design methods, models, even systems themselves.

Table: Life cycles and development methods (columns: Lifecycle, Methods, Concepts)

DeGrace & Stahl (1991), McConnell (1996) and Avison & Fitzgerald (2006) provide detailed descriptions of the life cycles or methodologies they document. Boehm’s (2006) presentation summarizes the historical development of software engineering and the possibilities for computer technology and its management into the 21st century. We may make some generalizations based on this overview.
Over the 25 years represented by these snapshots the number of life cycles and methods has steadily increased. It is not immediately obvious whether any one approach is simply a life cycle, or a design method, or whether it is indeed intended for wider application in governing organizational operations. Boehm characterized life cycles into four categories (Boehm, 1988):

  • Code and Fix
  • Stagewise and Waterfall Models
  • Transform Model (4GLs, code generators, very high level)
  • The Spiral Model
We can classify these models by considering them to lie on a spectrum between top-down (planned) and bottom-up (iterative) models; a rough sketch of this spectrum follows below. While code-and-fix is in some sense an anti-method, its presence reflects the underlying practicality of production. Often co-existing with code-and-fix is the waterfall, which, as an organizational method, is perhaps antithetical to code-and-fix. Spiral, incremental, prototyping and newer agile approaches are more closely aligned with the code-and-fix strategy for reducing 'requirements risk'.
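One way to make the classification concrete, purely as my own framing (the numeric positions are illustrative, not Boehm's), is to place each of his four categories on a planned/iterative axis:

# Illustrative only: my rough placement of Boehm's four categories on
# a 0.0 (bottom-up, iterative) to 1.0 (top-down, planned) spectrum.

spectrum = {
    "code-and-fix": 0.0,   # anti-method: pure bottom-up production
    "spiral": 0.4,         # risk-driven iteration, planning within each turn
    "transform": 0.7,      # generate systems from high-level specifications
    "waterfall": 1.0,      # fully planned, stage-wise, top-down
}

planned = [m for m, x in spectrum.items() if x >= 0.5]
iterative = [m for m, x in spectrum.items() if x < 0.5]
print("planned:", planned, "| iterative:", iterative)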

Wider operational structures are of course necessary, but they are instead the focus of what we might term management ‘frameworks’ and general organizational theory (see following section). A life cycle is therefore an organizational template for managing a project and will be largely concerned with addressing issues affecting scope, cost/effort, time and quality.

A further distinction is necessary to help reduce confusion among this variety of life cycles, methods and methodologies. Life cycle should not be conflated with design method. Design methods are many and various, and sometimes mandate wider organizational structures and events, but they are ultimately techniques for ‘representing’ and manipulating design objects or a particular technology itself. For example, design methods may be closely linked to a particular software tool or programming paradigm, e.g. Z and formal specification, functional design and structured programming, ERD and data flow with DBMS design, UML and OOAD, etc. (Bell, 2000).



THE WATERFALL MODEL (AKA SDLC)
"If the only model we have is the waterfall, then we will see the world in terms of problems that can be solved using it. Our world and our field are richer than that. We need a toolbox of models." (DeGrace and Stahl, 1991)
Waterfall theories of system development still abound, so we should investigate their origin and definition. The peculiar thing is that the death of the waterfall was heralded years ago, yet the idea persists (Hanna, 1995, Gladden, 1982, McCracken and Jackson, 1982). Waterfall thinking was (and perhaps still is) entrenched in industry and in general thinking. In spite of numerous critiques the waterfall is still with us (Table 2). The waterfall has become an almost subconscious assumption in management, with the risk that this subliminal view of high-tech projects shapes our world-view of production and innovation more generally.

Table: Development life cycles in firms; Ireland 2010 (Higgins et al., unpublished) (columns: Industry, Lifecycles)

One of the earliest descriptions of the waterfall method for software development is often attributed to Winston W. Royce in his article “Managing the Development of Large Software Systems” (Royce, 1970). Royce drew on his own experience working as a programmer on early high-tech Department of Defense projects in the 1960s. He described the difficulties of achieving success in terms of operational state (functionality), on time, and within costs. Royce’s goal was to improve our capability to program large-scale developments, to better produce and deliver large computer systems from concept through to implementation. Royce presented a detailed description of the SDLC’s step-wise model for development, which he then dismissed as infeasible. He termed this kind of system the 'grandiose approach' to software development and concluded that the grandiose approach is an "intrinsically risky development process."
Figure: A sketch of the SDLC; the 'grandiose approach' (from Royce, 1970)

Figure: The essential activities of software development (from Royce, 1970)

Royce identified the central challenge for large computer systems as dealing with the essential character of computing, the so-called 'essence of programming.' The work of software development centers on two main activities: analysis and coding. No other activities contribute as directly to the final product. These activities together "involve genuinely creative work which directly contributes to the final product" (Royce, 1970). However, as a project becomes sufficiently large it becomes necessary to revert to the grandiose model to formally highlight other necessary activities. The central problem with the grandiose model is that task leakage and linkage inevitably take place between successive process steps. Leakage and linkage (see below) occur complexly between all steps in the process, from the very last stage right back into the earliest. The deeper problem is that 'design' involves everything: analysis, coding, testing, how the product is used, etc. Furthermore, Royce claims it is futile to treat the high-tech development process as a linear flow. Simply put, a linear process model breaks down for anything other than the most trivial commodity task; the toy sketch after the figure makes the point numerically.
Figure: Task leakage and unexpected linkage complicates the SDLC on real systems development projects (from Royce, 1970)
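To see why the linear model collapses, count the paths. This toy sketch (my encoding, not Royce's) compares the ideal waterfall's forward hand-offs with the rework paths that leakage opens up between every pair of stages:

# A toy sketch (my encoding, not Royce's) of leakage: in a real
# project, defects found at any later stage can force rework at any
# earlier stage, so the process graph stops being a simple chain.

stages = ["requirements", "analysis", "design", "coding", "testing", "operations"]

# Ideal waterfall: each stage feeds only the next one.
forward = [(stages[i], stages[i + 1]) for i in range(len(stages) - 1)]

# Leakage: every later stage can reach back to every earlier one,
# including testing reaching all the way back to requirements.
leakage = [(later, earlier)
           for j, later in enumerate(stages)
           for earlier in stages[:j]]

print(len(forward), "forward hand-offs vs", len(leakage), "possible rework paths")
# -> 5 forward hand-offs vs 15 possible rework paths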
In the absence of an alternative to the development life cycle, Royce identified additional steps necessary to ensure that large-scale software systems satisfy requirements and are stable, usable, maintainable, etc. Leakage in the development life cycle should be dealt with explicitly through additional activities (below) rather than by hiding them “under the rug” (Brooks Jr., 1995).
Figure: Remedies to the SDLC (adapted from Royce, 1970)

Royce's Remedies
"Begin the design process with program designers, not analysts or programmers [so that] preliminary program design is complete before analysis begins"
(Royce, 1970)
Royce proposed five steps to overcome the shortcomings of the development life cycle. The alternative is in fact an iterative, interactive approach, albeit described in the terms of the grandiose development life cycle. System design is reduced to five tenets or principles for minimizing development risk:

  1. Program design comes first
  2. Document the design
  3. Do it twice
  4. Plan, control and monitor testing
  5. Involve the customer.

These ideas presage Fred Brooks’ own conclusions from working on the IBM System/360 and OS/360 development project (Brooks Jr., 1995). They also hint at what is now labeled Agile Development (see following section). They were a radical departure from the centrally planned approaches of the day, in which analysts worked apart from designers, designers kept aloof from programmers, and users or customers were almost entirely absent from development projects (some might argue the same conditions still hold in many organizations).
Royce was conscious that development life cycles can become a self-defeating remedy to the challenge of managing high-tech projects; the cure may be worse than the disease. The extra steps and activities of the development life cycle drive up the cost and complexity of managing the development of a large system. Additional management activity further complicates systems projects and potentially drives down the probability of success while driving up cost. This introduces its own problems: customers may be reluctant to pay for the cost and complexity of additional management, and designers are often reluctant to go along with the demands imposed by additional management activities, all of which pull time away from software design and development. Royce concludes, therefore, that…
“The prime function of management is to sell these concepts to both groups and then enforce compliance on the part of development personnel.” (Royce, 1970)

A SPIRAL MODEL OF SOFTWARE DEVELOPMENT AND ENHANCEMENT
The spiral model was presented in 1988 as a general life cycle model (Boehm, 1988). Life cycles, or to use Boehm’s label ‘software process models,’ define the order of events and activities for software development projects. The spiral model he proposed was an attempt to address the deficiencies of linear, stage-wise development models. The spiral was presented as a unified life cycle model, generally useful in so far as it specifies all the necessary activities and transformations experienced over the life of a software development project. Boehm introduced the idea that a development life cycle could be designed as a process for managing risk rather than introducing risk. The spiral redefined software development as a process (below) and popularized the notion that development could be considered an ongoing cycle of iterations rather than an end-to-end production line. The spiral process allows requirements to evolve and allows the project to account for its own changing environment.
Figure: Spiral model of the software process (adapted from Boehm, 1988)

Delivery of a version of the evolving project is a key milestone for each cycle. The project is reviewed and could potentially be stopped at the end of each cycle of the spiral. Each review event commits the project to further exploration and taking on additional risk, which is clarified and resolved in prototypes and simulations. Working prototypes or partial versions of the product are then produced and validated. Turns through the spiral expand progressively to include more and more of the waterfall’s activities (design, code, test, implementation, operations). The spiral can be used to manage small incremental project improvements or large uncertain product development projects.
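Read as a process, each turn of the spiral is easy to caricature in code. The sketch below is a minimal illustration only (the class and method names are invented, not Boehm's notation): assess risk, prototype to resolve it, build an increment, then pass a review gate that can stop the project.

# A minimal, illustrative sketch (names invented, not Boehm's own
# notation) of the spiral as a risk-driven loop: each turn resolves a
# risk via prototyping, builds an increment, then hits a review gate.

class SpiralProject:
    def __init__(self, initial_risks):
        self.risks = list(initial_risks)
        self.increments = []

    def review(self) -> bool:
        """Stakeholder review at the end of each turn: continue only
        while the business case remains viable (stubbed here as
        'stop once no known risks remain')."""
        return bool(self.risks)

    def run(self, max_turns=10):
        for turn in range(1, max_turns + 1):
            if self.risks:
                risk = self.risks.pop(0)
                print(f"turn {turn}: prototype/simulate to resolve '{risk}'")
            self.increments.append(f"increment {turn}")   # design, code, test
            if not self.review():                         # review gate
                print(f"turn {turn}: no open risks; ship and stop")
                break

SpiralProject(["unproven UI concept", "uncertain data volumes"]).run()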

A way of understanding the relationship between a development project’s cost, completion and risk is depicted below. The expanding circles reflect the cumulative cost of the project; the phase or percentage completion is evident from progress along each cycle or turn. Risk and its trend (increasing or decreasing) are represented as a vector from the project origin (which shifts) to the current point in the cycle.
Figure: Spiral model 'risk, completion, cost disk'

The idea of ‘design risk’ largely reduces to a factor of the people involved in a project. A low-risk design element for an experienced engineer may be a challenging and risky job for an inexperienced engineer. However, managing the risk by reverting to the experienced engineer may be a short-term solution to what is in fact a different, longer-term game. Involving customers and users in prototype evaluation is a way of exploring requirements and further reducing risk.

By championing prototyping Boehm also provided a place for Royce’s advice to plan to ‘do it twice’ (Royce, 1970), i.e. a preliminary design and implementation that exercises all the other stages of the lifecycle. A development project undergoes a series of iterations or rounds; proceeding to the next round depends on how the preceding round went and on continuing viability of the business case driving the project.

Boehm’s spiral was the last significant theoretically driven contribution to high-tech project management until 1999. Subsequent contributions to the burgeoning business of creating new life cycles and development methodologies were either hybrids of spiral/waterfall or techniques rather than whole project life cycles. For example, Rapid Application Development (RAD) was founded on a single big idea: working prototypes. Iterative development is conceptually equivalent to spiral. Dynamic Systems Development Methodology (DSDM) was designed around the key concept of iterations. Participatory design, user-driven development and Joint Application Development (JAD) give emphasis to user/customer engagement in the development process. The Rational Unified Process (RUP) gave credence to the manageability of large sets of projects and popularized the idea of automatic code generation from UML designs.

The break Boehm made with the thinking of the day was to present a concrete, actionable model for organizations to shift systems development from a batch-thinking to a process-thinking perspective: development is more like a process than assembly, where assembly lines move a product-in-progress from stage to stage to completion. The spiral model characterized development as an ongoing cycle of expanding requirements (and risk) followed by resolving those requirements (removing risk) by creating new parts. An initially uncertain product idea gradually comes into focus over repeated cycles in an ongoing process of discovery and refinement. The process view of product development encompasses the batch view, as each cycle can be considered a small batch.


CONCLUSIONS
New technologies, in the form of better programming languages, more powerful computing hardware, and new development environments, have held out the promise of improved production cycles for software and high-tech development. Unfortunately, technological remedies are slow to evolve, or, when they do arrive, merely push the problem of design into other zones of the development ecosystem.

While the life cycle metaphor is certainly a useful way of characterising development activities, and helps order and structure the working environment, practitioners assert that the microscopic activities of programming are intrinsically bound up with each other and cannot be meaningfully separated. Cyclic or iterative process models like the spiral are certainly better approximations of the underlying process of software design and production, and are therefore an improvement on earlier models. But at a macroscopic level, the level of teams and projects and organizations, stage-wise, and even cyclic, process models simply do not capture the practical reality of programming. Ultimately, life cycle models are mere organizational constructs, ways of ordering projects; they do not address the detailed production of systems design.

We have noted that stage-wise development models like the SDLC and waterfall become unmanageable for well-understood reasons:
  • Software development tasks are complexly interdependent.
  • Task interdependency demands close communication and coordination between workers.
  • High-tech products are subject to uncertain and evolving requirements.
  • The high-tech environment is constantly evolving.
We therefore need other tools, theories or ways of understanding if we are to get to grips with practical activities: the details of practices, specific methods, approaches to testing, releasing and maintaining, and other aspects of the actual performance of day-to-day work with software, on teams, in organisations, with customers and users.