Design, Develop, Create

Wednesday, 19 December 2012

Another take on Creativity...


A nice video on the topic of ideas & brainstorming. Note the shift to an essentialist vision of the superiority of ideas; ideas as entities that function, that can be generated, contrasted and combined. 


Critique: While not explicitly stated in the video, the stance conveyed is the commonly held view that ideas are contained in individuals, the inventors who conceived them.

I would however like to caution against appealing to explanations that resort to the notion of a singular inventive genius at the centre of an innovation. I would try to avoid, or at least be conscious of, the 'essentialist turn of phrase': that creative ideas are out there jostling for attention, that half an idea can become a full idea in combination with another half, and so on. Appealing to this 'obvious' way of dealing with ideas is an easy, naturalistic way of thinking about the creative process, but it mingles the 'idea' with a kind of technical agency; that is, it attributes essential qualities to the conceptual object of an idea. This then leads to a kind of teleological explanans for why the final condition of a successful innovation is achieved: fitness, technical superiority, and a kind of Darwinian competition between ideas in networks that produces a new innovation. This way of describing ideation, while easy to understand, is not founded empirically; ideas and innovations do not have lives of their own, and so employing this mode of understanding innovation seems to me to make for bad policy and management. Why? Because it does not explain the underlying phenomena. What I observe, in the field, is a much messier, involved, uncertain, porous, negotiated and tenuous process that unfolds progressively in an un-plan-able manner. The creative process is less a decision-making process than an artistic one: playful discovery, aesthetics and judgement mixed with inspiration, serendipity and sweat. Ideas aren't atoms of meaning that can be combined into new elements or molecules. Rather, they emerge in contexts, from experience, and through interaction with others.

Monday, 12 November 2012

The five minute CIO: David Miller


Terminalfour's COO offers a view on how to bridge, within a high-tech business, between a production focus and a business focus...

It all comes back to people [from a technical discipline] recognising that there’s a different world to the world they’re in. The best people are the ones who combine the two [business and IT]. 

http://www.siliconrepublic.com/strategy/item/29933-the-five-minute-cio-david/

Use a process that produces data


These three columns or position pieces by 'Uncle Bob' Robert C. Martin of ObjectMentor set up the classical problem of systems development and offer a well-thought-through response. Consider that Bob wrote these in the days prior to our wider awareness of the practice-oriented approaches that were just then gaining ground, such as XP, SCRUM and the Agile Manifesto. However, to paraphrase from Martin's Engineering Notebook on IID (Martin, 1999):
"Don’t let these articles mislead you.  If you follow the advice above and begin developing projects in an iterative and incremental way, bluebirds will not fill your sky.  Schedules will still be missed, there will still be bugs, problems, and mis-aligned expectations.  Software is, after all, software; and software is hard. However, what you will be doing is using a process that produces data; whereas waterfall produces none. With that data, managers can try to manage the project."

The articles:

http://www.objectmentor.com/resources/articles/IIDI.pdf
http://www.objectmentor.com/resources/articles/IIDII.pdf
http://www.objectmentor.com/resources/articles/IIDIII.pdf

Friday, 9 November 2012

Ship Wars@ Google Waterloo


Ship Wars is a competition in which participants code their own intergalactic crafts in the programming language of their choice, and then battle against each other in a virtual environment.

Take a look at these development environments...

http://goo.gl/JaQyd and http://goo.gl/RJjOK

Wednesday, 7 November 2012

Simplified Grade Descriptor

A simplified grade descriptor for the grading standard.

A
The report is complete and covers all important topics.
Appropriate significance is attached to the information presented.
There is a compelling logic to the report that reveals clear insight and understanding of the issues.
Analytical techniques used are appropriate and correctly deployed.
The analysis is convincing, complete and enables creative insight.
The report is written in a clear, lucid, thoughtful and integrated manner, with complete grammatical accuracy and appropriate transitions.

B
The report is complete and covers all important topics.
Appropriate significance is attached to the information presented.
There is a clear logic to the report that reveals insight.
Analytical techniques used are appropriate and correctly deployed.
The analysis is convincing, complete and enables clear insight.
The report is written in a clear, lucid, and thoughtful manner, with a high degree of grammatical accuracy.

C
The report is substantially complete, but an important aspect of the topic is not addressed.
The report uses some information in a way that is inappropriate. There is a clear logic to the report.
Analytical techniques are deployed appropriately.
The analysis is clear and the authors draw clear, but not comprehensive, conclusions from their analyses.
The report is written in a clear, lucid and thoughtful manner, with a good degree of grammatical accuracy.

D
The report is incomplete, with important aspects not addressed.
The report frequently used information that was substantially inappropriate or inappropriately deployed.
The report’s analysis is incomplete and authors fail to draw relevant conclusions.
The report is poorly written.

E/F
The report is substantially incomplete.
Whatever information is provided is used inappropriately.
There is little analysis and the report is inconclusive.
The report is poorly written and presented.

Further reading
See the UCD registry for a more complete outline grade descriptor (pdf file link).
See grading in the module curriculum for conversions between grade points (gp), gp values, and marks (pdf file link)

Kanban objects and interaction

看板
In Japanese, kanban literally means a 'watch over' board: a billboard, poster or sign. The first character (read "kan") combines the primitive elements of heavenly/above and eye/see to convey watching over or overseeing. The second character (read "ban") combines the elements of wood and bending/resistance, which together are taken to mean 'board'.

A time-lapse video of a physical kanban/scrum board being used by the Vodafone Web Team in Copenhagen, Denmark.
The underlying mechanism for Scrum is based on kanban, which originated in the Toyota Production System. We see these kanban boards in many workplaces. A kanban is just a board, but it becomes a focal point, a social/organisational device. The operation of a kanban is based on two principles: a pull system, and visibility.

The other essential aspect of a kanban in software development is design collaboration: collective involvement in design decisions. Kanban and collaborative design are complementary practices. They act to flatten hierarchy and enable communication. But 'flat' and democratic, while reducing the propensity for individuals to 'dominate', also reduces the opportunity for them to 'hide'. Agile teams can be very tricky to run as they bring issues of power, control, reputation, face, failing and succeeding into the public sphere of work.
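As a concrete (if simplified) illustration of the pull principle, here is a minimal sketch of a board in Python; the column names and work-in-progress limits are illustrative assumptions, not taken from the post.

```python
# A minimal kanban board sketch: work is pulled into a column only when
# that column has spare capacity (its WIP limit is not yet reached).
# Column names and WIP limits are illustrative assumptions.

class KanbanBoard:
    def __init__(self, wip_limits):
        # wip_limits: ordered mapping of column name -> maximum cards allowed
        self.wip_limits = dict(wip_limits)
        self.columns = {name: [] for name in self.wip_limits}

    def add(self, card):
        """New work enters the first (leftmost) column, if there is capacity."""
        first = next(iter(self.columns))
        return self._place(first, card)

    def pull(self, from_col, to_col):
        """A downstream column pulls the oldest card from upstream, capacity permitting."""
        if not self.columns[from_col]:
            return False
        if len(self.columns[to_col]) >= self.wip_limits[to_col]:
            return False  # respect the WIP limit: no pushing, only pulling
        self.columns[to_col].append(self.columns[from_col].pop(0))
        return True

    def _place(self, col, card):
        if len(self.columns[col]) >= self.wip_limits[col]:
            return False
        self.columns[col].append(card)
        return True

    def show(self):
        """Visibility: the whole state of the work is readable at a glance."""
        for name, cards in self.columns.items():
            print(f"{name:<12} ({len(cards)}/{self.wip_limits[name]}): {cards}")


board = KanbanBoard({"To do": 5, "In progress": 3, "Done": 99})
for ticket in ["login bug", "new report", "API tweak"]:
    board.add(ticket)
board.pull("To do", "In progress")
board.show()
```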

Tuesday, 6 November 2012

Are there organisational archetypes for high-tech firms?

Alex reminded me of this funny take on organisational charts emphasising the influence of the Owner/Founder/CEO in IT companies. Amazon appears as a classical top-down hierarchy with no inter-communication among peers. In Google's case its two founders, Larry Page and Sergey Brin, plus one (Eric Schmidt I presume), appear to be cloned throughout a tiered organisational structure with multiple lines of communication among all levels. Facebook is presented as a mesh with no layering and isolated local pockets of communication. Microsoft appears as a network of hierarchies with each subdivision at odds with all of the others. Oracle as a legal firm with a small engineering division attached, both divisions reporting directly to Larry Ellison, the CEO. And Apple as a blob of individuals, each one under the direct supervision of the then CEO Steve Jobs, suggesting a minimum of delegation.

http://usingapple.com/2011/06/funny-organizational-chart-for-apple-facebook-google-amazon-microsoft-oracle/

Consider relating the ideas behind these depictions to Steve Sawyer's social archetypes of software development teams... The sequential model of task/role separation seeks to address the challenge of control, the group model fulfils the desire for intercommunication where task/role separation is infeasible, and the network model resolves task/role specialisation by establishing responsibilities specific to the production being performed. We might also consider the possibility that each archetype is a remedy for the problems arising from over-dependence on one of the others.

Monday, 5 November 2012

Exercise: Table Label, aka Marshmallow Tower Challenge

Allocate approximately 1 hour to run the exercise: 10 minutes for setup and briefing, 30 minutes for the experiment, 5 minutes of extra time, and 15 minutes for debriefing. You will need a large space with scattered desks to accommodate the exercise.

Overview
  • A design/build challenge is set.
  • Each group will employ a ‘thinking aloud’ protocol as they run the experiment. The builder/designers comment aloud to highlight ideas, key transitions or changes in their thinking about the problem.
  • One person will act as the researcher, capturing a time-record of the designers’ comments or activity at any moment. The researcher is not allowed to take part in the design and construction.
  • Change the person in the researcher role every 5 minutes to give all team members an opportunity to contribute to the design and construction.
  • Output: A ‘Design Activity Graph’ recording design/build activity over time, for example:
    • Scenario thinking
    • Requirement thinking
    • High level solution thinking/building
    • Medium level solution thinking/building
    • Low level solution thinking/building
    • Key ideas.
    • Testing or Review.
Figure: Design activity chart (after Guindon)

For an alternate take on this activity see Peter Skillman's 'Marshmallow Challenge.'

Practical Aim: "As a teacher I want to see ‘group labels’ for each table ‘over the sea of heads’ in a classroom so that I can call on groups to respond and to encourage class participation."

Knowledge Aim: To assess the different activities people engage in during open-ended problem-solving design/build work.

Materials
A pack of sticks, some plasticine, some rubber bands, and an index card. A tape measure.
A sheet of graph paper to capture the team's graph.

Competitive dimension/evaluation: 
Which group can construct the most usable table label?
The tutor will need a ruler to measure and compare the height of the table labels.

Reflection
Ask each group to classify the activities they underwent (perhaps over 4 or 6 distinct kinds of activity)
Ask each group to estimate how much time they spent on each activity.
Ask the groups to reflect on how they won (or lost!) and to reflect on the contributions their different experience, backgrounds, disciplines made to the solution.
Were there collaboration problems?
Were transitory objects used?
Were conflicts resolved?
What roles were evident?
What is the impact of time pressure?
Can you identify who is responsible for the design?
What evidence of design work is available (diagrams, prototypes, experimental trials)?
What would you expect to happen if the exercise was performed again and again?
Where/when does the design occur?
Was the creative aspect to this exercise essential?
Was your design planned or accidental?
...

Wednesday, 17 October 2012

Design Demand: Understanding 'need' through the design process

Seminar Invitation:

When? Oct 23, 2012 from 06:30 PM to 07:30 PM
Where? Room Q107, The Collaborative Space, 1st floor, the Quinn Building, UCD Belfield. 
Google Map reference for directions: http://goo.gl/maps/iyhoh.
Contact Name: Allen Higgins

Henry Poskitt from frontend.com poses the question "what is design?"

There are two contrasting views on the design of objects for use: one, that a successful design disappears from view; the other, that design is evident in its outward character, be it aesthetic, arresting, bold, pleasing, etc. Although the aspiration to produce good design is pretty well embedded in the language of digital development, what does it mean, really? And how do users really fare? Particularly users who fall outside one standard deviation from the mean of the bell curve?

Siobhán Long and Karl O'Keeffe from Enable Ireland Disability Services will also be on hand to field questions dealing with how users with different needs and abilities evaluate the systems that designers produce.

Tuesday, 9 October 2012

Readings: Architecture and Agility



Foote, B. & Yoder, J. (2000) Big Ball of Mud. IN HARRISON, N., FOOTE, B. & ROHNERT, H. (Eds.) Pattern languages of program design 4. Addison Wesley.

Beck, K. (1999) Embracing Change with Extreme Programming. Computer, 8.




Read the articles and post a thoughtful observation or question on your own blog!

Wednesday, 3 October 2012

Innovation is a hobby of mine...


"Innovation and disruption, overused and in danger of losing their meaning" Steve Vinoski on what every developer absolutely needs to know about engineering, business and organisations. How is it that technically inferior products can win? Having products all along the life cycle is what you need to do but what very few actually pull off.
Listen to the intro then forward to 17'. From 34' on the role of management.

Miscellaneous
Apollo Computer Inc
Sun Microsystems
Disruptive innovation (sustaining, disruption, over-serving)
Clayton Christensen
Steve Vinoski's blog
Orbix
Orbacus
NoSQL
Oracle NoSQL Database
TALC (also see Geoffrey Moore's Crossing the Chasm)

Monday, 1 October 2012

Readings: Social Research and Coding Techniques

Read chapters 8, 9 & 10 of Strauss, A. & Corbin, J. (1998) Basics of qualitative research: techniques and procedures for developing grounded theory, Thousand Oaks, California, USA, Sage Publications, Inc.

Read chapter 1 of Ragin, C. C. (1994) Constructing Social Research: The Unity and Diversity of Method, Pine Forge Press.



In the same way that the constructs of the social sciences are constructs of the second degree, that is "constructs of the constructs made by the actors on the social scene, whose behavior the social scientist has to observe and to explain in accordance with the procedural rules of his science" (Schutz, 1954), so too the constructs of systems development are constructs of the social actors involved in development, either directly or indirectly (e.g. developers, management, customers, critics).

Both the research methods for gathering field data (data on customers or users) and the methods for interpreting data (data analysis and theory induction) are crucial tools for the business analyst: for developing requirements, and for understanding and interpreting how systems are used and how they can be further developed.

Qualitative research methods are therefore crucial tools for gathering requirements, for trying out designs and their implications 'in use', to reveal unintended uses or consequences arising from new systems (like the Cobra effect), or to suggest gaps that might be addressed.

Reference:
Schutz, A. (1954) Concept and Theory Formation in the Social Sciences. The Journal of Philosophy, LI, 257-67.

Read the chapters and provide a thoughtful observation or question (post to the comments section below).

Tuesday, 25 September 2012

Readings: Agile critique and comparison

Cusumano, M. A. (2007) Extreme Programming Compared with Microsoft-Style Iterative Development. Communications of the ACM, 50, 15-18.

Williams, L., Brown, G., Meltzer, A. Nagappan, N., (2010) Scrum + Engineering Practices: Experiences of Three Microsoft Teams. International Symposium on Empirical Software Engineering and Measurement. (link)

Kruchten, P. (2007) Voyage in the Agile Memeplex. ACM Queue, 5, 38-44.



In 1999 the world of software engineering was disrupted by the emergence of agile methods: first Extreme Programming, then the Agile Manifesto, followed by Kanban, SCRUM and others. All were created as reactions to the then prevailing consensus, if not hegemony, of stage-wise development (aka Waterfall), intending to upend the management-heavy methods then prevailing in industry. Today, the worm has turned. The current dominance of "Agile" (with a capital A) creates the impression of a new hegemony: that we should all be Agile, that managers are SCRUM masters, that programmers turn backlogs into features, that everything is done in iterations, releasing continuously, designing rapidly, working in Sprints, etc. (Higgins,)



Readings: Creativity & Teams


Curtis, B., Krasner, H. & Iscoe, N. (1988) A Field Study of the Software Design Process for Large Systems. Communications of the ACM, 31, 1268-87.

Hargadon, A. B. & Bechky, B. A. (2006) When Collections of Creatives Become Creative Collectives: A field study of problem solving at work. Organization Science, 17, 484-500.

Sawyer, S. (2004) Software development teams. Communications of the ACM, 47, 95 - 99.



Read the articles and post a thoughtful observation or question on your own blog!


Saturday, 22 September 2012

Maintenance (SDLC)

MAINTENANCE AND THE DYNAMICS OF 'USE': On-going development of products in-use.
Maintenance, often termed support, is a crucial activity for linking the experiences of users/customers with the product delivery organization. We consider perspectives on high tech maintenance from bug fixing through to design-focused activities.

THE CHALLENGES OF MAINTENANCE
Both soft and physical goods need to be maintained over their economic lifetimes, and the time spent in maintenance is many multiples of the time spent in initial development. It also turns out that usability and scope, which are key drivers of customer value and usefulness for software (Shapiro and Varian, 1998; Varian et al., 2004), also drive the generation of multiple versions. A single product codebase can be used to generate multiple versions of the same underlying architecture for the same release date. Adding new features, perfecting and adapting the product continuously increases the scope of a product. In addition, the work of maintaining the software also generates new products, subsequent versions and revisions incorporating new capabilities, fixes etc. (Figure below). Multiple versions are inevitable; they're part-and-parcel of software, an inherent potential and inevitable consequence of releasing applications based on changing code.

Figure: SDLC as interrelated activities

If we look at Eason’s depiction of an idealized systems development process we can imagine both the user and the technology co-evolving over time as learning is acquired from one and the other. Eason hypothesizes that users (and therefore organizations) learn, but they also teach developers how the technology system may evolve over time.
“The exploitation of information technology requires a major form of organizational and individual learning… The exploitation of the capabilities of information technology can only be achieved by a progressive, planned form of evolutionary growth.” (Eason, 1988)
The evolutionary development of systems grows from limited basic functionality towards more sophisticated and capable systems over time. Consequently maintenance tools and maintenance thinking have begun to permeate throughout the whole product experience.

Designed change (even for corrective work) is a change to the product, and so the economic assumption that a delivered software product is a finished good is false. The practical reality guarantees that a high tech system will inevitably undergo further change (Swanson, 1976; Swanson, 1989; Poole et al., 2001). High tech systems undergoing maintenance are often regarded as a ‘mutable mobile’, technology that evolves and changes in use (Law & Singleton, 2005); the idiom of maintenance is employed even though software does not wear out or degrade. Maintenance work is difficult and messy; patches must satisfy new demands without breaking existing installations, and must work as before (only better).

One view of maintenance work generally held by programmers is that it is better for your career to work on next generation technology, rather than being stuck bug fixing or maintaining old versions within the straitjacket constraints of compatibility and legacy codebases (Ó Riain, 2000). Maintenance jobs are therefore often outsourced to low cost locations or shunted into the background noise of the workplace, and developers often shun the work of maintaining a venerable ‘old version’ as they jockey for assignment to new product projects.

ORGANISING DELIVERY AND MAINTENANCE
Eason (1988) describes five main implementation strategies for delivery/deployment, graded according to how radically the system changes (Figure below), from revolutionary to evolutionary change, each imposing a corresponding burden of adaptation on the user, from difficult to easy. Leonard-Barton (1988) described this same process of high tech systems implementation as a process of mutual adaptation, of gradual convergence of systems functionality and performance over time.

Figure: Implementation Strategies (Eason, 1988)

Two product delivery paradigms dominate high tech development projects: single shot implementations on the one hand and continuous process systems on the other. The two can be likened to the difference in manufacturing between batch-based and process-centric production. A batch model constructs production as an assembly process; the finished product is built up over time by combining prescribed ingredients or materials in set amounts in sequence. The process model is also recipe driven; however, a process-based production exercise is continuous. The pharmaceutical industry employs the batch/lot style production model extensively. The food processing and drinks industries blend batch with process control; inputs and variables are controlled over time to produce a continuous stream of end product. In both batch and process manufacturing the overall design vision is captured in the plan or recipe, a set of instructions to be followed to construct a well-specified finished product. Control and management is focused on reproducing the design to exacting precision at the lowest possible cost, over and over again. Both manufacturing models attempt to ensure that the production of the product conforms with the known design efficiently, accurately, and cheaply. However, neither the batch nor the process model encapsulates learning effects. They are both static production models, mechanistic rather than evolving, and highly appropriate in settings where the goal is producing the same goods in volume to high quality standards (reproducing the original).

We can see however the influence of these models on classical interpretations of the systems development life cycle. Delivery occurs after an exhaustive up-front design process that concludes with the production of the first copy/version of the system being delivered in the first user installation. The technical design process dominates the early stages of development after which the delivered system imposes learning demands on organizations and users if they are to reap the benefits of the new system. The practical reality is however somewhat different.

We already know however that implementation of high tech systems does not usually coincide with the final delivery of a system. Delivery is in fact one of the most contentious periods of a project’s life. Delivery crystallizes all the anticipated promises of a new system into a single ‘moment of truth.’ The moment of truth is multiple and inconclusive, involving each user and use moment, user interaction and system interaction. Use crystallizes the user’s transition from one system to the next. The transition may be from an existing system to an upgraded system, entailing minor changes in use, appearance and performance. Transition may be more radical, from an existing systems context to a markedly different one that demands difficult and radical changes by users as they attempt to adapt the new tool to use. Radical adaptation may involve new user behavior, knowledge, tasks, and skills, it may also negate existing behavior, knowledge, tasks, and skills. Delivery may also be into a new market, it may displace existing systems, it may even co-exist with existing and competitor systems, perhaps interoperate with them at some level.

MUTUAL ADAPTATION AS A METAPHOR FOR DEVELOPMENT/MAINTENANCE
While new product development projects are the hallmark of the knowledge economy, it is a commonplace acknowledged in the industry that innovations are never fully designed top-down nor introduced in one shot. High tech systems are often developed by being ‘tried out’ through prototyping and tinkering. Eric Von Hippel (2005) traced the history of selected technology innovations and arrived at a pragmatic realization that products continue to be developed even when they leave the confines of a laboratory or engineering shop. He develops the concepts of ‘lead user’ and ‘innovation communities’ and concludes that innovation is a process of co-production shared between the producer and the consumer of a new product. Innovation might therefore be thought of as maintenance; a collective and intrinsically social phenomenon resulting from the fluidity of systems undergoing cycles of design, delivery, learning through use that feeds back into further design, delivery and learning.

Proactive support in the form of ‘digital assistance’ has been built into high-tech systems through help menus, user guides, and tutorials. Automated issue reporting is also used to send reports and diagnostic traces (e.g. state logs, configuration settings, memory dumps) directly to the producer when failures occur. Automated online updating and service pack distribution is also employed as a means of keeping customers’ installations up-to-date. Diagnostic and investigative analysis of user click-streams may also be available for the producer to analyse and respond to actual user/customer behaviour. At the most fundamental level, however, support activities must first link customer details with a description of the problem.

User issues fall into three main categories: corrective, perfective, and adaptive (Swanson, 1976). Corrective issues are the classic ‘bugs’: performance and operational failures. Perfective issues address incremental improvements and performance enhancements. Adaptive issues are concerned with responding to changes in the operating environment, thereby introducing new or altered functionality. Only one of these categories can really be considered to address failure. The other two, perfective and adaptive, imply that software maintenance is necessarily a design activity. The problem with software is the complex interdependencies within the software itself, with its surrounding technologies and tools, and with the environment it operates in. Therefore, changing software usually generates unexpected side effects.
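A rough sketch of the taxonomy as data (the category labels follow Swanson; the example issues are invented for illustration only):

```python
from enum import Enum

class MaintenanceCategory(Enum):
    CORRECTIVE = "corrective"   # classic bugs: performance and operational failures
    PERFECTIVE = "perfective"   # incremental improvements and enhancements
    ADAPTIVE = "adaptive"       # responding to changes in the operating environment

# Hypothetical issue reports tagged with Swanson's categories.
issues = [
    ("Crash when saving a report", MaintenanceCategory.CORRECTIVE),
    ("Speed up the nightly export", MaintenanceCategory.PERFECTIVE),
    ("Support the new payment API version", MaintenanceCategory.ADAPTIVE),
]

# Only corrective work addresses outright failure; the rest is design work.
design_work = [title for title, cat in issues if cat is not MaintenanceCategory.CORRECTIVE]
print(design_work)
```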

Over time modern issue tracking systems have themselves evolved into general-purpose incident tracking and reporting environments. These development system applications (e.g. JIRA, MANTIS) are therefore often used as both incident repositories and as planning tools. In this way the maintenance process itself has, over time, evolved from being an end-of-life risk management and repair tracking process into a direct link between the user/customer and the development organization. Maintenance has therefore become a crucial source of feedback and a key driver for new product requirements. Issue tracking systems have expanded to include the database of features under development, and the workflows surrounding issues have been adapted to manage development itself.

We can conclude that software maintenance and new product innovation projects are more closely related than is commonly accepted. The activities of maintenance and support are important sites for the innovation of technologies in development, at least as important as the work of new product development.

Tuesday, 18 September 2012

Evaluation (SDLC)

VALUING, SIZING AND SOURCING THE HIGH-TECH PRODUCT
Evaluation is the process of making the case for high tech decisions based on the benefits and costs associated with a project or product features. Assessing the value and cost of features for development is considered either a simple problem of ascribing value-for-money, or an obscure process, part inspiration, part politics, where decisions are made behind closed doors.

INTRODUCTION
How do we evaluate high-tech objects and the objectives for systems development projects? Organizations desire to control and direct their destinies. Organizational technology strategies therefore need the support of tools and methods as aids for making investment decisions (Powell, 1992). We need to be able to answer the following questions:

  • How should we go about evaluating high tech use and investment decisions?
  • How useful are the various approaches and what, if anything, do they ignore?
  • When we select and value high tech features and product how do we go about making those decisions?
  • What different approaches are available to help us evaluate choices between different products, services, features, and suppliers?
Evaluation is the process by which we decide whether or not to purchase or commit ourselves to something. Evaluation activities are by definition decisive periods in the life of any high tech project. Evaluation is often considered to be made on overwhelmingly rational, economic criteria; however, it may also be an emotional, impulsive or political decision (Bannister and Remenyi, 2000). There is a plethora of tools available to make optimal financial decisions based on the premise that significant aspects of the system can be monetized. But there are also tools that help us reveal unquantifiable aspects and soft factors, to facilitate the formation of qualitative decisions.

UNDERSTANDING EVALUATION
All decisions arise from a process of evaluation, either explicit or implied: the process of valuing an option by balancing its benefits against its costs. Furthermore, decisions arise throughout the development life cycle as and when options are identified. Formally the SDLC describes evaluation as a separate phase and activity; practically, however, evaluation takes place continuously, albeit with shifts in frequency, formality or emphasis.
Having gathered user requirements by looking at and observing behaviour in the field, how do we analyze, judge and identify significant patterns or benefits for inclusion in new developments?

VALUE AND COST
When a high tech investment delivers value in the form of payouts over time, financial tools like ROI and NPV can be used as aids for the decision making process; to invest or not to invest. Improved financial performance is an important criterion for judging an IT funding opportunity. Payouts may take the form of estimable cost savings or additional periodic revenue. While financial performance measures are not always assessable or relevant to all investment decisions or project commitments, they should be created (and their assumptions made explicit) wherever possible as one of a range of inputs into the decision making process.

Financial measures are often the easiest to create and maintain for organizational decision-making as they readily incorporate different assumptions as factors change.

ROI
The return on investment (ROI) is simply the ratio of an investment's net payout to the initial investment. The ROI represents the rate of return over the period considered.
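A trivial worked sketch of the ratio (the figures are invented for illustration):

```python
def roi(net_payout, investment):
    """Return on investment: the net payout over the period divided by the initial investment."""
    return net_payout / investment

# e.g. a project costing 100k that returns a net 20k over the period has an ROI of 0.2 (20%)
print(f"{roi(20_000, 100_000):.0%}")
```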

Payback Period
Another metric for evaluating investment decisions is the Payback period. The Payback period is the time taken for an investment to be repaid, i.e. the investment divided by the revenue for each period.
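And the corresponding sketch for the payback period (again with invented figures):

```python
def payback_period(investment, revenue_per_period):
    """Payback period: the number of periods until the initial investment is repaid."""
    return investment / revenue_per_period

# e.g. a 100k investment returning 20k per year is repaid in 5 years
print(payback_period(100_000, 20_000))
```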

Net Present Value
The Net Present Value (NPV) method takes account of interest rates (or the cost of money) in the investment model. The NPV of an investment is the difference between the initial investment and the present value of any future net revenues or savings over ‘n’ periods, discounted at the interest rate ‘i’. The NPV calculation thus sums the value of an investment decision in terms of the present value of all its future returns.
As with Payback there is usually a simple break-even point for any particular interest rate beyond which future values (payments or annuities) result in a net positive return. NPV is a good way of differentiating between investment alternatives however the assumptions built into the model should be made explicit. For example: payouts do not always occur at the end of the period, interest rates may change, inflation may need to be considered, payments may not materialize or they may be greater than expected.
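A minimal NPV sketch, assuming end-of-period cash flows and a flat discount rate (the figures are invented for illustration):

```python
def npv(rate, cash_flows, investment):
    """Net present value: future cash flows discounted at `rate`, less the initial investment.
    Assumes each cash flow arrives at the end of its period (one of the caveats noted above)."""
    present_value = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return present_value - investment

# e.g. 100k up front, 30k per year for four years, discounted at 8%
print(round(npv(0.08, [30_000] * 4, 100_000), 2))
```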


Internal Rate of Return
Having calculated the NPV of the investment from compounded monthly cash flows for a particular interest rate it becomes evident that there may be an interest rate at which the NPV of the investment model becomes zero. This is known as the Internal rate of return (IRR). The IRR is a calculation of the effective interest rate at which the present value of an investment’s costs equals the present value of all the investment’s anticipated returns.
The IRR can be used to calculate both a cut-off interest figure for determining whether or not to proceed with a particular investment (a threshold rate or break-even point) and the effective payout (NPV) at a particular interest rate. An organization may define an internal cost of capital figure (a hurdle rate) that is higher than market money rates. If the IRR of a project is projected to be below the hurdle rate then it may be rejected in favour of another project with a higher IRR.
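Building on the NPV sketch above (repeated here so the example stands alone), a simple bisection search can locate the rate at which the NPV crosses zero; the cash flows are again invented, and the bracketing interval is an assumption:

```python
def npv(rate, cash_flows, investment):
    """NPV of end-of-period cash flows discounted at `rate`, less the initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1)) - investment


def irr(cash_flows, investment, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return: the discount rate at which the NPV becomes zero.
    A simple bisection search; assumes the NPV is positive at `lo` and negative at `hi`."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows, investment) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2


# e.g. 100k invested, 30k back each year for five years
rate = irr([30_000] * 5, 100_000)
print(f"IRR is roughly {rate:.2%}")  # compare against the organisation's hurdle rate
```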

Discussion on financial rationality
Investment decisions imply the application of money, but also of time, resources, attention, and effort, to address opportunities and challenges in the operating environment. An often unexpected consequence of acting on the basis of a rational decision making process is that the action alters the tableau of factors, which in turn reveals new opportunities (or challenges) that must needs alter the bases on which earlier decisions were made. Some strategic technology decisions appear obvious; an organization needs a website, an email system, electronic invoicing, accounts and banking. Individual workers need computers, email, phones, shared calendars, file storage etc. This is because technology systems have expanded to constitute the basic operational infrastructure of the modern organization and (inter)networked citizen. How should we therefore characterize the dimensions along which high tech systems are evaluated? High tech investment decisions have been classified along two distinct dimensions: technology scope, and IT strategic objectives (Ross and Beath, 2002). Combining these two dimensions, Ross and Beath identified the areas of application for organizations’ high tech investment decisions (Figure below). Technology scope ranges from shared infrastructure (with global systemic impact) through to stand-alone business solutions and specialized applications impacting single departments and operational divisions. IT strategic objectives differ in terms of horizon rather than scope; from long-term transformational growth (or survival) to short-term profitability and incremental gain. Both the scope and strategic dimensions highlight the organizational dependence on high tech systems and both suggest that purely financial justifications are not always practical or desirable.

Investment in shared infrastructure illustrates the case; the physical implementation and deployment costs of new IT and hardware may be quantifiable, but broader diffuse costs and benefits arise through less tangible aspects such as lost or gained productivity, new opportunities and improved capabilities.

JUSTIFYING DECISIONS
Evaluation is a decision making process. How do we decide what systems to use (develop or acquire, install and operate) in our organizations? We think of the process of making a decision as the process of evaluation. There are two contrasting dimensions to evaluation processes, both of which need to be considered if evaluations, and the decisions that result, are to deliver their anticipated benefits: quantitative methods, often financial models, and subjective methods addressing intangible and non-financial aspects.

Each decision is in a real sense an investment decision for the organization. Identifying who fills what role in the decision making process is a prerequisite and each actor then draws on various methods and tools to make their case. However the decision maker cannot rely on pure fiat or role-power to arrive at the best decision while at the same time achieving consensus and buy-in. Decision makers engage in convincing behaviour drawing on a mix of objective and subjective resources as evidence supporting their decision.

The justification of projects across the range of IT investment types must needs differ as the costs and benefits differ in terms of quantifiability, attainability, size, scale, risk and payout. We should therefore have a palette of tools to aid evaluation and decision-making. Ross and Beath (2002) make the case for “a deliberate rationale that says success comes from using multiple approaches to justifying IT investments.” Powell (1992) presents a classification of the range of evaluation methods. Evaluation methods are broadly objective or subjective. Objective methods are quantifiable, monetized, parameterized, aggregate etc. Subjective methods are non-quantitative, attitudinal, empirical, anecdotal, case or problem based. Quantitative methods include financial instruments and rule-based approaches. Multi-criteria and Decision Support System approaches cover cybernetic or AI type systems that use advanced heuristics or rule systems to arrive at recommendations. Simulations are parameterized system models that can be used to assess different scenarios based on varying initial conditions and events.
Table: Evaluation methods. Adapted from Zakierski's classification of evaluation methods (Powell, 1992)

It is noteworthy however that many of these evaluation methods are in fact hybrid approaches, incorporating both subjective and objective inputs and criteria e.g. Value analysis (Keen, 1981).

In accountancy, measurement and evaluation are considered to be separate, involving different techniques and processes. Furthermore the evaluation process is expected to balance both quantitative and qualitative inputs. Bannister and Remenyi (2000) argue the evidence suggests that high tech evaluations and investment decisions are made rationally, but not formulaically. This is in part because what can be measured is limited and the process of evaluation involves the issue of ‘value’ more generally, not simply in monetary terms. Investment decisions must involve, they argue, the synthesis of both conscious and unconscious factors.
“To be successful, management decision making requires at least rationality plus instinct.” (Bannister and Remenyi, 2000)
In practice, decision making is strongly subjective; while grounded in evidence it also requires wisdom and judgement, an ability that decision makers acquire over time and in actual situations through experience, techniques, empathy with users, deep knowledge of the market, desires and politics. A crucial stage of the decision making process is problem formation and articulation: reducing problems to their core elements, and interpretation, “the methods of interpretation of data which use non-structured approaches to both understanding and decision making” (Bannister and Remenyi, 2000). ‘Hermeneutic application’, as they describe it, is the process of translating perceived value into a decision that addresses a real problem or investment opportunity. It is necessary because the issue of ‘value’ often remains undetermined; it may be (variously): price, effectiveness, satisfaction, market share, use, usability, efficiency, economic performance, productivity, speed, throughput, etc.

DISCUSSION
When should we consciously employ an evaluation approach? Evaluations are made implicitly or explicitly any time we reach a decision point. Recognizing and identifying the decision point may appear obvious but is in fact often unclear at the time. Evaluation and decisions are made whenever an unexpected problem is encountered, if further resources are required to explore emerging areas of uncertainty, if another feature is identified or becomes a ‘must have.’
Evaluation methods may be categorized further as ex-ante versus ex-post. Ex-ante methods are aides to determining project viability before the project has commenced; they are exploratory forecasting tools and their outputs are therefore speculative. Ex-post methods are summative/evaluative approaches to assessing end results; they are therefore of limited value for early stage project viability assessment.

Systems development life cycles bring high tech product project decisions and therefore evaluation into focus in different ways. Stage-gate models concentrate decision making at each stage-gate transition. Agile models explicate decision-making by formalizing the responsibilities of different roles on the project and their interactions on an on-going basis. Both extremes aim to highlight the following: decision points, the person responsible for asking for (and therefore estimating) resources, and the person responsible for stating and clarifying what is needed (scope and requirements). Indeed, formalizing the separation of role ownership (between requirements, estimation) and responsibilities (between value and delivery) is one of the key benefits of any life cycle.

Is the decision already made for us? As the high tech and IT sectors mature, so too do we see the gradual stabilization of the software, services and devices that constitute the assemblage of tools and systems of our modern internetworked lives. Nicholas Carr (2005) predicts that we are witnessing the inevitable shift of computing from an organizational resource (and competence) into a background infrastructure. Several factors are driving the dynamic. Scale efficiency of development: specialist teams best develop complex, feature rich, usable systems. Scale efficiency of delivery: global service uptime, latency and storage performance is best delivered by organizations with global presence and specialist competencies in server farms, grid and cloud computing. The green agenda reaps savings by shifting computer power consumption from relatively inefficient desktops and office servers into energy efficient data centres. Carr’s point is that general purpose computing is gradually shifting towards a ‘utility’ model, and therefore the era of corporate computing is effectively dead. The consequence is the commoditization of software and services, things that are currently thought of as ‘in-house’ offerings: email, file storage, messaging, processing power etc. The implication of this trend is to change the way we view high tech and IT projects, their evaluation and delivery to our organizations. The decision is no longer one of build versus buy (run and operate) but ‘rent.’

CONCLUSIONS
Requirements and evaluation are crucial activities in the overall process, with the decisive moment surrounding evaluation – valuing and costing product features and projects. But the work of systems development presents complex issues. There are inevitable intrinsic inequalities and asymmetries between the actors involved: product owners, developers, users, customers, organizations, business, and other groups. Interaction is often characterized by processes of persuading others; persuading and convincing those involved in producing, consuming and managing the development process.

REFERENCES

Economic Aspects of Digital Production

SOME (SIMPLE) ECONOMICS FOR DIGITAL MEDIA
One of the central problems in developing high tech systems is that there appear to be unavoidable trade-offs in managing the scale and connectedness of emerging high tech systems. Everything depends on everything else; even well-bounded tasks are complicated by unexpected dependencies on hardware or other technologies. What do we need to know in order to identify, describe, and address these various complications and difficulties adequately as they arise? We will begin the process of understanding the problem domain and approaches to addressing its difficulties by illustrating economic aspects of information goods, some work aspects of software engineering (digital production), and the classic dimensions of project management.

New media and information industries are refining if not redefining our knowledge of the economics of markets, products, services and production. Broadly the challenges involve information goods, high tech systems and bases or markets of user/consumers. However, while the business models, technology and material foundations of these new ‘goods’ are constantly changing, the principles of economics do not change. There are many dimensions along which information goods and systems differ from purely material products and services. Information goods are ‘experience goods’ (‘consumed’ by experiencing or operating them), they are subject to the economics of attention (if you are not paying for the product you are the product), and the technology itself is associated with pronounced production-side scale, product feature scale, and greater potential for user ‘lock-in,’ ‘switching costs,’ and ‘network externalities’ (Shapiro and Varian, 1998). For the purpose of this section we will focus on the economic case for production-side scale economies of software. The same logic applies to other digital media and to products with digital media components.

Unlike many physical goods, information-rich products have some distinctive economic qualities and characteristics. Like all information-rich goods (products like computer hardware, books, newspapers, film, television) the initial design and production of the first copy of a software product demands a huge up-front investment in development before there can be a payout. Unlike physical goods manufacturing, the mass reproduction of a digital or information-rich good is a simple, trivial act of copying. For pure digital goods there are vanishingly small incremental costs in terms of the energy used, storage space and time taken to duplicate. In software manufacturing (if such a term can even apply in the current era) the development costs far outweigh the reproduction costs. Development costs dominate the economic cost characteristics of software.

The following presentation is adapted from Oz Shy’s book, The Economics of Network Industries (Shy, 2001). The argument goes as follows: as sales/consumption of the product increase, the average cost of the product approaches the cost of producing and delivering the next unit (the marginal cost). In the case of a purely digital good (software, information, media, etc.) the marginal cost of production is very small and often carried by other parties (e.g. the broadband service). Applying the logic of a cost-based pricing model to your product suggests a strategy of effectively giving it away.

The total cost of production at a particular level (TC) is the sum of the sunk R&D costs plus the cost of producing and shipping ‘q’ units. By definition the total cost of production at level q is the sum of the cost of R&D, θ (the cost of developing, testing and releasing the software), plus the cost μ of shipping one copy to the customer multiplied by q.

1. TC(q) = θ + μq

We define the average cost (AC) of production of a product as the cumulative total cost of production at a particular production level divided by ‘q,’ the quantity produced (ideally also sold) at that level.

2. AC(q) = TC(q)/q

The average cost becomes:

3. AC(q) = θ/q + μ

The marginal cost at a particular production level, that is the additional cost resulting from a small increase in the production level, is the incremental additional cost divided by the change in quantity produced.

4. MC(q) = ∆TC(q)/∆q

And in the limit (the derivative of equation 1 with respect to q):

5. MC(q) = μ

A graphical analysis (below) of average and marginal software production cost as functions of quantity demonstrates that the average and marginal costs converge at high output levels (Shy, 2001).


Figure: Cost and price characteristics of software (adapted from Shy, 2001)
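A small numerical sketch of equations 1 to 5; the values of θ and μ are illustrative assumptions, chosen only to show the average cost converging on the marginal cost as output grows:

```python
# Illustrative sunk development cost (theta) and per-copy delivery cost (mu).
THETA = 1_000_000   # cost of developing, testing and releasing the software
MU = 0.50           # cost of shipping one copy to a customer

def total_cost(q):
    """Equation 1: TC(q) = theta + mu * q"""
    return THETA + MU * q

def average_cost(q):
    """Equations 2 and 3: AC(q) = TC(q)/q = theta/q + mu"""
    return total_cost(q) / q

MARGINAL_COST = MU  # Equation 5: in the limit, MC(q) = mu

for q in (1_000, 100_000, 10_000_000):
    print(f"q={q:>10,}  AC={average_cost(q):>10.2f}  MC={MARGINAL_COST:.2f}")
# As q grows, the average cost converges on the (constant) marginal cost mu.
```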

The implication of this analysis is that for every price you set there exists a minimal level of sales for which any additional sale will result in a profit. One conclusion from this argument is that ‘cost based pricing’ is not a viable strategy for software because there is no unique break-even point (Shapiro & Varian, 1998). In effect the logic follows that the more units you sell the lower you can set your price. The logic of cost based pricing suggests you should charge very very little, or give software products away (if there is a large potential market for them). So software markets are subject to huge economies of scale.
“the more you produce the lower your average cost of production.”
(Shapiro and Varian, 1998)

HIGH TECH PRODUCTS AND PLATFORMS AS ECONOMIC SYSTEMS
Previously we characterised the dominant aspects of high tech products as their intrinsic complexity and propensity to change. Both aspects lead high tech products to exhibit ‘systemness’ or systematicity within their environments. For example, software itself may be both a product and a platform. To illustrate: interdependencies arise whenever a software program makes functions accessible via an API (Application Programming Interface). APIs allow other programs to use the first program. The consequence is that a combination of the two programs allows us to accomplish something new that we couldn’t do with the separate programs. These effects are termed complementarities, and they give rise to system-like effects in the computing environment and in the market for high tech products and services (Shapiro and Varian, 1998).

Complementarity and Combinatorial Innovation
Digital goods are apt to exhibit complementarity and to produce novel utility through combinatorial innovation. When complementary goods are produced, the combination of the two products becomes more desirable and valuable to users than either product alone.

Furthermore we can show that the economic effects of complements dictate that
“aggregate industry profit is higher when firms produce compatible components than when they produce incompatible components.” (Shy, 2001)
The reason is that the sunk cost of R&D can be averaged over a larger market; and larger markets are generally better for all firms, even competitors, regardless of their market share.
“the firm with smaller market share under compatibility earns a higher profit under compatibility.” (Shy, 2001)
This is because the market itself is generally larger; hence the marketing strategy question, ‘do you want a large piece of a small pie, or a small piece of a much larger pie?’ Why is this relevant? It is relevant because it is one of the dynamics that drives change in the operating environment of organizations. Synergies in Internet services and platforms have driven constantly expanding integration and adaptation, change and innovation. The internet boom of the 90s through to today is largely a consequence of 'recombinant growth' or combinatorial innovation of general purpose technologies (Varian et al., 2004). The idea of combinatorial innovation accounts in part for the clustering of waves of invention that appear whenever some new technology becomes successful. The ubiquity of one program can in turn act as a platform for other programs; for example the mutual complementarities between Twitter, Bit.ly, and Facebook. Much of what is termed Web 2.0 computing can be thought of as leveraging complementarities of different technologies that in turn create clusters of innovation.

Compatible Products are Driven in Turn by Market Standards
Markets incorporating complements and compatible products welcome technological standards (Varian et al., 2004). Standards are desirable because they facilitate complements and compatibility. Open standards are better because of the free availability of technical rules and protocols necessary to access a market. However even a closed or proprietary standard is preferable to none as it provides an ordering influence, providing rules or structures that establish and regulate aspects such as interoperability or quality. Network effects arise from the utility consumers gain from combinations of complementary products (Shapiro and Varian, 1998, Shy, 2001, Varian et al., 2004).

The very simplest network effect can be illustrated by the example of fax machines. The first purchaser of a fax machine has no one to send a fax to. A second fax machine bought by the first buyer’s friend allows them to send faxes to each other, which is somewhat useful. However if there are thousands of fax machines, in firms, government agencies, kiosks and people’s homes then the fax machine becomes more useful to everyone. As the market becomes larger the usefulness, or utility, of fax machines as a class of technology becomes greater.
The principle applies as much to single categories of networked technology like fax machines as it does to families of technologies that can interoperate. Network externalities arise between automobiles and MP3 players if auto manufacturers install audio jacks or USB ports to connect the car’s sound system with the MP3 player. The utility of both cars and MP3 players increases. Standards and openness drive further growth and innovation (and lock-in and switching costs etc.). Standards enable software markets that in turn enable hardware sales that enable software, and so on, all enabled by a standard.
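A toy sketch of the fax example; counting possible sender/receiver pairs is a Metcalfe-style approximation that the text does not commit to, used here only to show utility growing faster than the number of machines:

```python
def possible_connections(n):
    """Number of distinct sender/receiver pairs among n fax machines."""
    return n * (n - 1) // 2

for n in (1, 2, 10, 1_000, 100_000):
    print(f"{n:>7} machines -> {possible_connections(n):>13,} possible connections")
# One machine can reach nobody; each additional machine raises the value of all the others.
```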

Software has a unique role as the preeminent enabling technology for hardware, and this has unexpectedly led to software becoming the platform itself. Software – operating systems or execution environments like browsers and browser-based ecosystems like Facebook – enables developers to achieve a degree of independence from the hardware. Such platform software becomes essentially a new type of standard that may itself be open or proprietary, and the same economic models dealing with complementary goods and compatibility apply. Therefore the same kind of innovation clustering, producing waves of combinatorial innovation, can be seen to occur with successful platforms.

A software platform benefits from the variety of add-in software written for the platform, and this in turn generates a virtuous cycle of value growth and further innovation as products are re-combined and used in novel ways. In summary, the economic characteristics and market logic of software products drive them towards interdependency with other software; standards (closed or open) play a huge role in enabling this. The whole context of software production exists within an ecosystem of different products and services, which are in effect environments or platforms themselves, and these arguments explain in part the ever-expanding ubiquity of software within technological systems.

DISCUSSION
The various engineering professions (civil, mechanical, chemical etc.) typically separate design work from production work, treating production via either the project management perspective for once-off style constructions, or via the process control perspective for managing operational environments. However, software production (design and development) has proven difficult if not impossible to control via predominantly construction perspectives or as manufacturing processes. Why is this? Why shouldn't software engineering lend itself to the kinds of management instruments that proved so successful in the classical sense of Fordist production? Why isn't software more like civil engineering, for example?

Well, the digital economy is subject to some interesting essential and intrinsic characteristics that, while not absent in physical goods markets, occur to a greater or lesser extent in comparison with physical goods. In the case of digital production the process of manufacturing the end product becomes a trivial exercise of electronic duplication, with the marginal cost of manufacturing additional copies being effectively zero. The production cost characteristics of software and many high-tech goods therefore shift the focus to the process, effort and cost of producing the first unit. Software is costly to develop but cheap to reproduce. Multiple copies can be produced at a constant or near-zero per-unit cost. There are no natural capacity limits on producing additional copies. The costs of software production are dominated by the sunk cost of R&D. Once the first copy is created the sunk costs are, well, 'sunk'! Software production costs are therefore dominated by employee/human costs (salaries and servicing the working environment) rather than material costs (computers).
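To make that cost asymmetry concrete, here is a minimal sketch with illustrative figures (the sunk and marginal costs below are made up, not taken from any source): average unit cost collapses towards the near-zero marginal cost as more copies ship.

    # Illustrative cost structure of a digital good: a large sunk R&D cost and a
    # near-zero marginal cost per additional copy. All figures are made up.

    SUNK_RD_COST = 2_000_000.0   # cost of producing the first copy (salaries etc.)
    MARGINAL_COST = 0.05         # cost of duplicating/distributing one more copy

    def average_unit_cost(copies_sold: int) -> float:
        return (SUNK_RD_COST + MARGINAL_COST * copies_sold) / copies_sold

    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10} copies -> average cost per copy: EUR {average_unit_cost(n):,.2f}")

At small volumes the sunk cost dominates the unit price; at large volumes the unit cost approaches the (effectively zero) cost of duplication.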

This initial analysis seems to suggest that software development efforts should be treated like stand-alone projects, i.e. time-bounded design and development of a finished product. This is indeed characteristic of many industry settings, e.g. device/hardware software in telecommunications, robotics, mission-critical systems in aviation and aerospace, and critical infrastructure such as energy distribution and core networks or internet backbones.

Software design and development produces few if any substantial material assets or residues. Software production models should therefore emphasize design activities rather than manufacturing activities. Software R&D (the cost of developing, testing and releasing software) is a human knowledge-intensive activity. The consequence is that while a software firm's strategic advantage is manifest in its products, its competitive capability is bound up in its employees' design knowledge and experience.

But software and high tech yield a new kind of cornucopia, a wealth of value that is becoming more significant and more freely available. Software begets software and systems support other systems. The whole technological infrastructure of microprocessor-led, computer-driven, software and high-tech device innovation has kept producing value and benefits for organisations, markets, and society at large for 50 years or more. The fact that it continues to evolve and is still implicated in societal transformation suggests it will continue for a while longer.

REFERENCES


Thursday, 13 September 2012

Implementation (SDLC)

USE-PRODUCTION-INTERACTION
The heart of a systems development production process is the work of implementation: designing, coding, testing, usability, scaling, architecture, refactoring. Its flip-side is the system in use: the feedback of users, usability in practice, unexpected uses, the goals users actually achieve by using the system, their met and unmet needs, how they obtain value from its use. In some sense the problems of production, of organizing teams to develop and maintain complex, interdependent and interrelated digital systems, are largely solved. Production poses a relatively well-known domain of problems and we have a variety of possible solutions available to address the challenges of intrinsic complexity and task interdependence, of scale and size of production, products and markets. What is less well understood is the domain beyond engineering: the dynamics between customers, users, producers and the market, what we term ‘systems.’

Producing implementations for high-tech ambitions. ‘Implementation’ is the catch-all term for the production activities that follow an up-front requirements analysis, evaluation, and ‘design’ process (Bødker et al., 2004, Gregory and Richard L., 1963, Avison and Fitzgerald, 2006). Under this view ‘implementation’ is the catch-all for the design, architecture, coding, testing, refinement, optimization, packaging and finishing of a high-tech product.

IMPLEMENTATION: DESIGN, TEST, AND DELIVERY
In the (traditional) view of systems development the SDLC brackets everything to do with concrete product production under the banner ‘implementation’ (Figure below). Implementation covers product design, development, test and delivery. It appears strange that such wide-ranging and yet central activities of the SDLC should be relegated to what appears at face value to be one quarter of the lifecycle.

Figure: SDLC as interrelated activities

I might argue on this basis alone that the SDLC perspective on implementation is too broad (indeed dismissive of ‘production’) to be of much practical use. Let us however focus on contemporary views of implementation in high tech product life cycles.

Implementation has two faces, a technological facet and a social facet. Implementation covers everything dealing with the concrete realization of a product, everything that is hinted at during the more abstract phases or activities of requirements analysis, evaluation/design and maintenance (these comments must of course be qualified by your own working definition of the SDLC). On the technological side implementation deals with design, architecture, feature functionality, deployment, installation etc.; on the social side implementation deals with feature acceptance, usability, and scalability. What then does implementation encompass? Implementation may be viewed as construction (production). Implementation also often covers rollout or delivery, and a third meaning of implementation is that surrounding organizational change management, in particular the change supporting ERP implementations. In the case of ERP implementation the technology system is often quite static, a finished product; however, flexibility is available in how the product is ‘configured’ to deliver functionality. ERP configuration is therefore a more limited kind of systems development that may or may not work well within the institutional constraints of a particular organization. One way of thinking about implementation is as a problem of ‘introduction,’ something taking place in the conversational interactions surrounding analysis, design, coding, and test activities.
“The roll-out is where theory meets practice, and it is here that any hidden failures in the earlier stages appear” (Boddy et al., 2008)
Accordingly the byword for an implementation initiative is ‘order.’ A project should roll out in an orderly, controlled way. However, large-scale rollouts of technology are notoriously difficult and range over technological challenges and social/organizational challenges, for example:
“ERP implementation is an organisational change process, rather than the replacement of a piece of technology. It impacts strategy, structure, people, culture, decision-making and many other aspects of the company.” (Boddy et al., 2008)
Implementation is therefore often characterized as a project management problem rather than a problem extending and impacting activities prior to and following production. In this guise implementation is a matter of project execution, separate from the ex-ante (up-front) process and separate from the ex-post (delivery) process. Such implementation projects more often than not necessitate further analysis, evaluation, and design alongside the work of coding, configuring and testing a new system.

MUTUAL ADAPTATION
Iterative life cycles and agile methods have reworked the relationship between the activities of the SDLC. The Rational Unified Process and methods like SCRUM anticipate that all activities and phases will occur at the same time. Both overcome the chaotic consequence of ‘doing everything at once’ by mandating highly structured roles and interactions, many mediated through distinctive techniques like ‘the planning game,’ ‘planning poker,’ ‘the on-site customer,’ ‘refactoring,’ ‘regular releases,’ ‘unit testing,’ etc. The big message for test and design work is that you can’t design without testing, and testing in all its guises is one of the strongest drivers for design.
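As a small, hedged illustration of how such techniques impose structure, consider planning poker: estimates are gathered independently on a restricted scale, and wide disagreement triggers discussion rather than silent averaging. The card scale and the 'wide spread' rule below are assumptions for illustration only, not a prescription from any of the methods named above.

    # Toy sketch of the planning-poker idea: collect independent estimates on a
    # restricted card scale and surface disagreement for discussion before
    # committing. The scale and the 'wide spread' rule are illustrative choices.

    FIBONACCI_SCALE = (1, 2, 3, 5, 8, 13, 21)

    def review_estimates(story: str, estimates: dict[str, int]) -> str:
        assert all(v in FIBONACCI_SCALE for v in estimates.values()), "use the card scale"
        values = sorted(estimates.values())
        low, high = values[0], values[-1]
        if high > 2 * low:                       # wide spread: talk, then re-vote
            return f"{story}: estimates {values} diverge -> discuss and re-vote"
        return f"{story}: consensus around {values[len(values) // 2]} points"

    print(review_estimates("API integration", {"Ann": 5, "Bob": 8, "Cai": 5}))
    print(review_estimates("Leaderboard UI", {"Ann": 3, "Bob": 13, "Cai": 5}))

The point is not the arithmetic but the structured interaction: divergent estimates become a prompt for conversation between roles rather than a number to be smoothed over.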

The greatest test, and opportunity, for a new technology is when it is removed from the laboratory into the user environment. Implementation is the process of:
“mutual adaptation that occurs between technology and user environment as developers and users strive to wring productivity increases from the innovation.” (Leonard-Barton, 1988)
Implementation is therefore a natural extension of the invention process, albeit one that takes place within user environments. The dynamic can be thought of as a kind of convergence towards an ideal end goal. However, acknowledging the concept of equifinality (Leonard-Barton, 1988), our end goal may simply be the first solution that works from among a universe of possible solutions.

Implementation in the user environment generates learning that redefines our understanding of technology-in-use and therefore draws us back into new prototyping, testing, feasibility, problem solving and idea generation. Likewise, technology implementation in the user environment generates new learning about possibilities in user and corporate performance. Technology interaction enables possibilities for redefining tasks and roles, business function and business model. Learning through implementation is a balancing of the tension between narratives of technologically driven change and user resistance. Rather than choosing between these narratives, Leonard-Barton offers the idea of continuous ‘re-invention’ to interpret the tension: learning through implementation feeds back into technology and corporate performance, thereby enabling the productive (though unpredictable) dynamic of mutual adaptation (Leonard-Barton, 1988).

While most current presentations of the technology development dynamic now include user involvement, they persist in characterising innovation as a flow from idea generation through to production. Including deployment in the user environment, and user involvement, within an on-going cycle of releases and updates incorporates the impact of the learning that occurs. Mutual adaptation is a constant in the field of technologically mediated innovation and, if recognized, may be harnessed as a productive dynamic to drive both the social and the technologically oriented aspects of systems development. The implication for organizations involved in systems development is to “break down the firm separation of development, test and operations.” (Hamilton, 2007)

Kongregate Games (case)

This case is adapted from Nicholas Lovell’s game publishing guide (2010).

You run a small Flash game company that releases its games to run on Kongregate’s game portal. The revenue model offered by being hosted on Kongregate’s portal is ad-funded, based on how often the game is played online. The company development team has four people: 2 programmers and 2 designers with responsibility for art assets, models, audio and video content. The Ad-funded revenue model is summarized in the table below.
Table: An Ad-funded revenue model on Kongregate for Flash games (Lovell, 2010)


Under this model, and assuming a minimum return from the portal operator to the developer of just 25% of Ad revenue (the best case may be up to 50%), and assuming just two Ad impressions per game play yielding a CPM from advertisers of 1 euro, developer revenue ranges from €5 per month to €500 per month for each game as monthly plays vary and impressions range from 20,000 to 2,000,000 per month (Table below). CPM is the figure used to express the advertiser’s cost per thousand impressions, so gross Ad revenue = CPM × (gross impressions / 1,000). In this case we assume Kongregate and advertisers have agreed a CPM ‘cost’ rate of €1. Note that Lovell’s figures are based on a CPM of £1 (GBP).
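A minimal sketch of the arithmetic, using the assumptions stated above (two impressions per play, a €1 CPM, and a 25% developer share); the monthly play counts are illustrative values chosen to reproduce the range quoted above.

    # Developer revenue under the Ad-funded model described above.
    # Assumptions from the text: 2 Ad impressions per play, a CPM of EUR 1,
    # and a 25% revenue share to the developer (best case up to 50%).

    IMPRESSIONS_PER_PLAY = 2
    CPM_EUR = 1.0          # advertiser cost per 1,000 impressions
    DEVELOPER_SHARE = 0.25

    def monthly_developer_revenue(plays_per_month: int) -> float:
        impressions = plays_per_month * IMPRESSIONS_PER_PLAY
        gross_ad_revenue = CPM_EUR * impressions / 1_000
        return gross_ad_revenue * DEVELOPER_SHARE

    for plays in (10_000, 100_000, 1_000_000):   # illustrative monthly play counts
        print(f"{plays:>9} plays/month -> EUR {monthly_developer_revenue(plays):7.2f} to the developer")

One million plays a month works out at €500 to the developer, which matches Lovell's point quoted below.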
Table: Game Ad-revenue projections for three cases of plays (from Lovell, 2011)


Lovell suggests that the revenue figures for an Ad-funded Flash game served via a specialist game portal like Kongregate are not impressive.
“Even a widely successful game, getting 1 million plays a month, which would be a huge achievement, would only generate [€500] a month in revenue for the developer.” (Lovell, 2010)
Strategic Business Evaluation: To Integrate with the Portal API (or not)?
The development team are keen to increase the company’s revenue stream and have decided to consider the case for integrating their Flash game with Kongregate’s API for leaderboards and challenges (refer to the earlier statement for Ad-funded revenue on Kongregate). Integrating with Kongregate’s API offers the developers an additional 10% share of the Ad revenue from Kongregate. How do they assess the business case for API integration with the current game (noting that it could become a part of all future games too)? The team estimates it will cost 20 days of a developer’s time to code up, test, and roll out integration between the portal API and their own Flash template engine. Given a programmer ‘day’ cost of €200/day, the investment cost for portal integration, developed over 20 days, comes to €4,000. The developers estimated the best-case investment cost and best-case additional cash flow as follows (a worked sketch of the appraisal follows the questions below):
  • Development cost (initial investment) €4,000
  • Additional monthly revenue for 24 months (best Case III) €200
Questions:
  1. What is the simple ROI for each case?
  2. What is the simple Payback period for each case?
  3. Which business case holds up over 2 years with a short-term interest rate of i=5%? 
  4. Finally, should the development team invest in integration?
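A worked sketch of the best-case appraisal, using the figures above (€4,000 investment, €200 additional revenue per month for 24 months, 5% per annum); discounting the monthly cash flows at a simple monthly rate is my own assumption for illustration, not a convention prescribed by the case.

    # Best-case appraisal of the API integration using the figures above.
    # Monthly discounting at a 5% annual rate is an illustrative assumption.

    INVESTMENT = 4_000.0
    MONTHLY_CASH_FLOW = 200.0
    MONTHS = 24
    ANNUAL_RATE = 0.05
    monthly_rate = ANNUAL_RATE / 12

    total_cash = MONTHLY_CASH_FLOW * MONTHS
    simple_roi = (total_cash - INVESTMENT) / INVESTMENT      # (4,800 - 4,000) / 4,000 = 20%
    payback_months = INVESTMENT / MONTHLY_CASH_FLOW          # 4,000 / 200 = 20 months

    npv = -INVESTMENT + sum(
        MONTHLY_CASH_FLOW / (1 + monthly_rate) ** m for m in range(1, MONTHS + 1)
    )

    print(f"Simple ROI over 24 months: {simple_roi:.0%}")
    print(f"Simple payback period:     {payback_months:.0f} months")
    print(f"NPV at 5% p.a. (monthly discounting): EUR {npv:,.2f}")

The same calculation can be repeated for the other revenue cases; only the monthly cash flow changes.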

REFERENCES
Games Brief - The Business of Games (www.gamesbrief.com)
Lovell, N. (2010) How to Publish a Game, GAMESbrief.
www.kongregate.com: "Reach millions of real gamers with your MMO, Flash, or social game. Make more money."

Build the right thing (Interaction Design)

THE DESIGN PROCESS
What is good design? Bill Moggridge states that:
“good design has always been concerned with the whole experience of interaction” (Moggridge, 1999)
Outwardly design is concerned with aesthetics and experience: the experience of using a product, of interacting with an object, product, service, or system of products and services. Inwardly design is also concerned with the cost of materials, complexity of assembly, maintainability of modules and of the whole product, lifetime, cost of operation, manufacture, distribution, delivery and return systems. The inward and outward aspects of design are tightly interrelated but further complicated (beneficially, it turns out) by user involvement in the development process. User involvement in development is now recognized as one of the key success factors for high-tech design and systems implementation (Leonard-Barton, 1988, Kraft and Bansler, 1994, Bødker et al., 2004, Grudin and Pruitt, 2002). User involvement is beneficial, in part at least, because both user understanding and design objects can be adapted throughout the development process (Leonard-Barton, 1988).

The quest for good design should be tempered by the various problems ‘improvements’ produce. The search for an optimal solution is often an unnecessary diversion. Indeed, an optimal solution will typically optimize according to a narrower set of criteria than is practical or desirable in the general situation. As Hamilton comments on designing and deploying internet-scale services, “simple and nearly stupid is almost always better in a high-scale service” (Hamilton, 2007). Hamilton recommends that optimizations should not even be considered unless they offer an order of magnitude or more improvement in performance.

The ultimate measure of success for high tech design is for the product to become a seamless aspect of the user environment; to become simply, a tool for use, ready-to-hand.
“We need to be able to rely on an infrastructure that is smoothly engineered for seamless connectivity so that technology is not noticeable.” (Moggridge, 1999)
Put another way, design succeeds when it disappears from perception.

DESIGN QUALITIES
Good design ‘lends itself to use.’ With physical objects the designer works within the constraints (and possibilities) of materials and space. The user’s embodied capability and capacity influence the size, shape, and appearance of a ‘use’ object. Physical design works with material affordances and constraints. Designers make use of experiential and cognitive cues such as ‘mapping’ and ‘feedback’ to achieve their goals (Norman, 2002). These approaches work because users form mental models, or theories, of the underlying mechanisms employed in mechanical objects. Indeed users actively look for such cues when confronted by a different or a new object for use. The effectiveness of cues in translating designed performance into viable user mental models translates in turn into effective object interaction; ‘good design lends itself to use.’ Good design is evidenced by the availability of a ‘clear mental model’ (Moggridge, 2006) or metaphor for a system. An effective mental model builds seamlessly into a coherent, consistent ‘system image’ (Norman, 2002). A compelling system image is another strong indicator for successful system use. However, digital media, virtual goods and computer-based high-tech systems pose a unique set of problems as a consequence of the break between an individual's knowledge of the physical world (intuitive, embodied, physical and temporal) and the computational world of digital objects.
“What do you get when you cross a computer with a camera? Answer: A computer!” (Cooper, 2004)
Microprocessor-based goods and computer-mediated virtual environments can be made to perform in apparently arbitrary or idiosyncratic ways, what Alan Cooper terms ‘riddles for the information age’ (Cooper, 2004). In essence, by crossing computers with conventional physical products, the resulting hybrid products work more like computers than their physical product forebears. In the past, physical-mechanical elements often constrained design implementation, whereas digital designs can in general overcome the constraints of electro-mechanical mechanisms. This break is both empowering and problematic. Empowering because it enables the designer to achieve things impossible with physical-mechanical elements alone, but problematic because while the ‘back-end’ digital design may conform to an architectural view of the technology (is ‘architecture’ simply another way of saying the developer’s implementation model?), the outward appearance and behaviour available to users may be manifest in quite different ways. Mental model thinking can be problematic because, while the design implementation model may be self-consistent and behave logically according to its own rules, the implementation rules will appear obscure, overly detailed, or unintuitively linked to performance.

This break between implementation model and the user’s mental model is significant and necessitates a new language for describing and designing digital systems. While digital systems must obey their own (necessary) rules, the presentation of a system to the user should be designed with the user in mind. Taking his cue from physical goods design Don Norman suggests that a well designed microprocessor or computer-based system should still present its possibilities in an intuitive way (Norman, 2002). It should give the user feedback, allow the user to correct performance and offer a coherent ‘mental model’ to enable the user to understand and learn the product through use (Cooper et al., 2007, Norman, 2002).


The design of digital interaction can be thought of as spanning four dimensions (Moggridge, 2006). One-dimensional interactions are linear or textual representations such as text, consoles, voice prompts etc. Two-dimensional interactions build on visual or graphical renderings: layouts that juxtapose graphical elements or that depend on spatial selection and use/interaction in a two-dimensional field. Three-dimensional interactions make use of the third spatial axis, depth, where depth is actually employed rather than simply mimicked through perspectival representation (e.g. as a backdrop to essentially 2D interaction). The fourth dimension is most often thought of as time: meaningful temporal sequences and flows of interaction (rather than simply consuming a recording or animation). Temporal interaction may be applied to the preceding dimensions and involve complex interaction choreographies that are built up over time to achieve some goal.


  • 1D interactions are employed by command line driven computing environments.
  • 2D interactions are employed by typical applications and PC operating systems.
  • 3D interactions are employed in immersive gaming environments.
  • 4D interactions may be mode shifts in application interface, queries applied to data, different application states.

Build the thing right (SDLC)

THE VERY IDEA OF SYSTEMS DEVELOPMENT
While the idea of the SDLC (Systems Development Lifecycle) is firmly embedded in the Information Systems field, there is no single concrete, principled formulation of the SDLC ‘sui generis.’ It is notable that the earliest formulations of systems development (Table 1: (Gregory and Richard L., 1963)) resonate strongly with current presentations (Valacich et al., 2009). Gregory and Richard (1963) described the four phases or stages involved in creating a new information system (Table below).
Figure: Management-information-systems design approach (Gregory & Richard, 1963)

All formulations of the SDLC are derivative of other lifecycles described and used in practice prior to the various distillations of the SDLC. In spite of claims to the contrary there is no single authoritative well-understood methodology for managing the development of information systems. Each methodology is either the product of a particular group of people working in their specific work contexts, or the output of an academic or practitioner attempt to construct a generalizable description of development processes. The systems development life-cycle is a stage-wise representation of activities commencing with the most general description of some product to be designed and refined over stages into a completed good (Figure below).
Figure: Systems Development Life Cycle

The following table (Table below) summarises the conceptual stages of the SDLC. It is readily apparent that the systems development life cycle is synonymous with the waterfall model and that the waterfall provides us with many of the original concepts that comprise most if not all of the features of frameworks used to control and manage the production of high tech goods.
Table: Stages of the SDLC (adapted from Avison and Fitzgerald, 1995)

DISCUSSION: THE PRACTICAL REALITIES OF DEVELOPMENT
The SDLC is the original prototype of the life cycle. Linear, serial, stage-gate or milestone development life cycles are employed in product disciplines and in strategic models of competitive innovation (Schilling, 2005, Tidd et al., 2001, Trott, 2005). Life cycles applied in other industries and occupations overlap with the work of high-tech design and development and influence in turn how systems development is seen to be structured. Like the SDLC, the product marketing life cycle runs from initial concept, to development, to market maturity and end-of-life. However, while life cycle archetypes represent the relationships between analysis, design, implementation and maintenance, they rarely describe their practical performance and accomplishment.

The work of developing, configuring, and servicing systems occurs within activities and processes such as service provision, project execution, product development and maintenance. These activities are located in time and place, and so the day-to-day, week-to-week flux of production takes on the appearance of regularity, of a common pattern to the process of creating and managing high-tech objects. Systems development can be shaped with the aid of a life cycle model. A life cycle is simply a way of describing the relationships between the work processes constituting the provision, development and delivery of a product or service. Having taken a critical perspective on the SDLC and life cycle concepts generally, I wish to explore and explain the value of and need for their activities, albeit activities that often overlap, run 'out of sequence' and occur in a haphazard and emergent fashion. The following sections analyse the generic characteristics of the core activities of systems development, summarised here as Requirements, Evaluation, Implementation, and Maintenance (below).

Figure: The SDLC as a tetrad of inter-related activity.

REFERENCES

Tuesday, 11 September 2012

Outsourcing


Henry Ford’s Model T was the emblem of modern manufacturing systems characterised by suppliers and integrators working together to create value.
Industrial production, contracting, subcontracting and contracting out have been defining features of modern organisational forms since the industrial revolution and perhaps prior.
Two main organisational forms have prevailed in the modern era:
  1. Vertical integration: managing and owning the whole value chain, from procurement of raw materials through to production of the end product.
  2. Horizontal specialisation: focusing on crafting/creating/delivering excellence at one core stage of the production process before passing the processed good on to the next stage.
Several developments built on these forms:
  3. Fordist manufacturing created the conditions for interfaces between the different tasks, activities, inputs, outputs, or stages of transformation making up the manufacturing process.
  4. The Japanese Kanban system is one extreme of layered specialisation, with many small suppliers coming together under the umbrella of the main supplier/contractor/manufacturer in the manufacturing environment.
  5. Inter-firm information/data process specialisation was enabled by EDI (Electronic Data Interchange) standardisation initiatives from the 1970s through to the turn of the century, now continuing under the aegis of XML and newer standards.
EDI enabled ‘e’ interfaces to be constructed between firms in a similar way to the input/output models of staged manufacturing.
Along the way it demonstrated the overcoming of geographical, spatial and temporal barriers to data exchange.
The modern global supply chain is an extreme case whereby a process’s implementation is facilitated by data exchange between a diverse array of firms, thereby creating the very possibility of an integrated supply chain.
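A minimal sketch of what an 'e' interface amounts to: one firm emits a structured order message that another firm's system can parse without human rekeying. The message format and field names below are purely illustrative and do not follow any particular EDI or XML standard.

    # Illustrative firm-to-firm data interchange: firm A serialises an order,
    # firm B parses it directly into its own systems. The field names are
    # hypothetical and not drawn from any real EDI or XML standard.
    import json

    def export_order(order_id: str, sku: str, quantity: int) -> str:
        """Firm A: serialise an order into a structured interchange message."""
        return json.dumps({"order_id": order_id, "sku": sku, "quantity": quantity})

    def import_order(message: str) -> dict:
        """Firm B: parse the incoming message for its own order-processing systems."""
        return json.loads(message)

    message = export_order("PO-1001", "WIDGET-42", 500)
    print(import_order(message))

The value lies in the agreed structure: once both firms honour the same message format, geography and time zones cease to matter for the exchange of transaction data.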


As an outsourcing destination Ireland has lost appeal throughout the last decade.
Rising costs and competition from developing countries have eroded many of the advantages that Ireland once held.
Consequently Ireland has itself become a net consumer of outsourcing services.
A driver of this has been the steady erosion of Irish competitiveness at country level, a trend in place since the mid-1990s.
The Irish Central Bank quarterly bulletin of January 2010 provides harmonised competitiveness indicators (HCIs) for the Irish economy. Cost-driven deterioration in Irish competitiveness has been partially compensated for by increases in productivity, but only, it appears, by shifting lower-cost, lower-value-added activities and processes offshore.
The picture for IT outsourcing is however less clear as Irish based offices of multinationals move up the value chain.
Like all mature markets, Irish firms and multinationals based in Ireland often outsource organisational functions to local or international outsourcing providers; for example traditional areas like payroll, accounting, finance, legal, HR, purchasing, and logistics, but also areas such as marketing focused on SEO (search engine optimisation), web development, website hosting, and IT services like e-mail and spam filtering, virtualised storage, and telephony services.
Core or primary value processes may also be outsourced but at a higher risk or for reasons other than cost reduction alone.
Whether providers of outsourced services based in Ireland themselves source their activities offshore or not is a matter for their own operations.


Claims for the size of Ireland-destined outsourcing activity and Ireland-generated outsourcing vary widely, ranging between hundreds of millions and billions.
Helpdesk and international call centre operations are one area where firms still see value in Irish-based operations, particularly where multilingual skills and addressing the European market are important requirements.
In 2003, the value of the outsourcing market in Ireland passed €209 million ($234 million).
Irish banks have outsourced considerable operational activities to third-party providers (e.g. Bank of Ireland's multi-year deal with HP followed by the switch to IBM).
The public sector in Ireland also has long experience with outsourcing services particularly IT (e.g. The Irish Revenue Commissioners and Accenture).
Regardless of the provisioning destination (whether onshore or offshore) the trend is for organisations to increase the investment in outsourcing projects.
Even so, firms' experience with the outsourcing phenomenon is mixed as expectations to deliver higher levels of service grow and priorities change from simple cost reduction towards added value.
Regardless of the experience with individual projects, outsourcing is likely to remain a popular option. With over half of CIOs in Irish firms having had budgets cut in 2009, cost saving will remain a huge driver of outsourcing initiatives.


HP Video Podcast: Be "on the business" for strategic IT and outsourcing
Tim Hynes, IT Director Europe, Middle East & Africa, Microsoft

Why Global Sourcing?
I argue that the sourcing phenomenon is an intrinsic feature of human societies that is amplified by scientific advance, manufacturing innovation, technology more generally, and accelerated in the modern era of computer based infrastructures, high-tech products and services.

What organizational activities and products are amenable to sourcing beyond the traditional boundaries of organizations? And if activities and products can be sourced beyond the boundaries of the organisation what models or modes can be used?

Outsourcing isn't a business fad, it is a fundamental part of modern industrial production. Capital-based manufacturing and production of goods and services is predicated on the basic idea of a division of labour. Specialised stages of manufacture (in other words, a supply or value chain) exist when skilled work is applied to some material, good or activity to add value, up to the end point at which the good or service is consumed. All industrial and professional specialisation therefore represents a kind of outsourcing. No one organisation, firm or individual has within its power the totality of knowledge, skills, resources, effort and time to produce everything we need or desire. Sourcing has therefore been, and remains, an intrinsic aspect of work (labour and production) in society, from the most rural to the most metropolitan.

What therefore is sourcing? Consider the following definition:
“Sourcing is the act through which work is contracted or delegated to an external or internal entity that could be physically located anywhere. Sourcing encompasses various in-sourcing and outsourcing arrangements such as offshore outsourcing, captive offshoring, nearshoring and onshoring.” (Oshri et al., 2009)
In light of the prominence and pervasiveness of inter-firm sourcing what are the advantages and disadvantages of different sourcing modes and how are they justified and applied in historical and contemporary settings? The current situation is never completely estranged from its historical contexts. Historical trends in global sourcing lead in to current topics and help to explain how local conditions have evolved.

For one reason or another, various sourcing modes have proved more successful in particular industries and in particular locations. The relationship between technology trends and the emergence of an expanding array of options for sourcing product components and services offers one set of explanations, such as the irresistible imperative of technology-driven change or particular organisational structures. Other ways of understanding the success of sourcing look instead to uncertain contextual conditions and processes of emerging knowledge, adapting to and taking advantage of unique situations.

An interpretation of global sourcing discourse that managers can use effectively should be more than the straight application of technological recipes, formulas, methods, rules, and organisational templates. Reflective actors will always seek to identify the interests involved, to be aware of who benefits (or loses), in order to juxtapose and evaluate the various strategic decisions between in-house and outsourced delivery. Sourcing initiatives may proceed smoothly, but if they do not, what remedial measures can be employed to address the organizational and technological issues relating to global sourcing?

The reflective manager has a broad palette of concepts and frameworks for interpreting and deciding sourcing cases. However this area of organisational operations is constantly evolving and changing and so the manager must be adept at identifying emerging trends in sourcing relationships that are likely to be important in the future with implications for current situations. In this way involved actors can merge theory with context, against a historical backdrop, extrapolate and justify the implications of changing sourcing arrangements in complex inter-organizational relationships.

Case: Bank of Ireland Outsourcing 2000-2011
Irish banks have, in the past, outsourced considerable operational activities to third-party providers. Bank of Ireland's multi-year deal with HP followed by the switch to IBM exemplifies one particular case of the benefits and risks of adopting a deep outsourcing strategy in a digital 'information' industry.

(24 February 2003: article-link) BOI license desktop and server software from Microsoft.
(4 April 2003: article-link) BOI announce 7 year deal with HP for IT services worth ~500M, over 500 bank employees to be transferred to HP.
(2 July 2003: article-link) BOI announce multi-million deal for banking software products.
(3 November 2010: article-link) BOI announce 5 year deal with IBM for IT services worth ~500M.


References
Oshri, I., Kotlarsky, J. & Willcocks, L. P. (2009) The Handbook of Global Outsourcing and Offshoring, Palgrave Macmillan.