Laureate Online Education
© All rights reserved, 2000 – 2010, The Software Quality Assurance module, in all its parts: syllabus, guidelines, lectures, discussion questions, technical notes, images and any additional material is copyrighted by Laureate Online Education B.V.
SQA: Software Quality Assurance
Lecture 8: Maturity models and future SQA prospects
8.1 Capability Maturity Model (CMM)
In Seminar 3 we discussed the ISO 9000 family of standards, and in particular ISO 9001, where we saw that one of the fundamental principles of quality management is the process-oriented approach. A process was defined as a “set of interrelated or interacting activities, which transforms inputs into outputs” (ISO, 2008). To assess the suitability and effectiveness of a process, we need a structured approach that takes into account the various factors involved in process definition, management and optimisation. One such model is the Capability Maturity Model (CMM) developed at the Carnegie Mellon Software Engineering Institute (SEI). “The CMM was designed to help developers select process-improvement strategies by determining their current process maturity and identifying the issues most critical to improving their software quality and process.” (Paulk et al, 1993). The main difference between ISO 9001 and CMM is that ISO 9001 centres on the minimal acceptable quality level for software processes, while CMM provides a framework for assessing the maturity of software processes and for continuous process improvement.
The main concept in CMM is the dichotomy between immature and mature software organisations and organisation practices. An immature organisation is seen as one in which software is developed in an unstructured way, usually in the absence of set processes; if processes do exist, they are not defined properly and/or are not followed rigorously. The management style is usually based on reacting to a situation rather than controlling it, budgets are exceeded most of the time because they are not based on realistic estimates, and product functionality is compromised in the quest to meet unrealistic schedules. On the other hand, a mature organisation has well defined operational processes which are followed routinely, according to pre-defined plans of action and within a well established infrastructure. Management communicates effectively, and prices and schedules are based on successful past experience, are realistically set, and are usually met.
Definitions:
• “A software process is a set of activities, methods, practices, and transformations that people use to develop and maintain software and associated products (project plans, design documents, code, test cases, user manuals, and so on).”
• “Software process capability is the range of expected results that can be achieved by following a software process.”
• “Software process performance represents the actual results achieved by following a software process.“
• “Software process maturity is the extent to which a specific process is explicitly defined, managed, measured, controlled and effective.”
(Paulk et al, 1993)
Maturity levels:
Continuous process improvement is seen as an evolutionary series of tightly controlled steps, based on small increments rather than a single revolutionary change. CMM provides a framework for organising these continuous improvement increments into five maturity levels, as depicted below.
Figure 8.1: Maturity framework with five levels, each one the foundation of the next (Paulk et al, 1993)
Level 1: Initial
Organisations at the initial level do not provide a stable environment for developing and maintaining software. They do not have well defined processes, or, if they do, they revert to a ‘coding and testing’ approach when facing a crisis.
Their success is usually due to exceptionally good management and experienced developers. However, if any of the major contributors leave during the lifetime of a project, their departure can have a destabilising effect, with negative repercussions on the overall outcome of the project.
Level 2: Repeatable
Organisations at level two have established policies and procedures that can be repeated across a variety of projects, and the planning and management of new projects is based on previous experience. Costs, schedules and the evolution of product functionality are monitored continuously; managers are capable of identifying problems as they arise, and solutions follow predefined guidelines. The overall process can be described as ‘disciplined’ because it operates within a stable environment in which success is repeated.
Level 3: Defined
Level 3 organisations have well defined and well documented software processes, in terms of both software engineering and managerial procedures, and the processes are incorporated into a consistent, organisation-wide framework. A dedicated group is responsible for the software development process, and their effort is supported by organisation-led training programmes meant to ensure that their knowledge and skills are adequate for the needs of the project. Processes are defined in terms of inputs, readiness criteria, guidelines and procedures for carrying out activities, verification procedures (for example, team reviews and testing), completion criteria and required outputs. The software process is characterised as standard and consistent, costs and schedules are kept under control, and product functionality is monitored against well defined quality criteria.
Level 4: Managed
Level 4 organisations are capable of setting and meeting quantitative quality goals both for products and processes, and outcomes are measured with the aid of well defined instruments. The organisation uses a database for storing and analysing the data, and products and processes are constantly monitored and controlled in such a way that they fall within well defined boundary values. Risks are carefully identified and adequately managed. The software-process capability is defined as quantifiable and predictable because it operates within measurable limits. When exceptional circumstances occur, management is capable of controlling them by acting correctly and according to the situation.
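As an illustration of this kind of quantitative control, the sketch below flags a process metric that falls outside limits derived from historical data. The metric (defect density), the sample values and the three-sigma multiplier are illustrative assumptions of ours, not part of the CMM itself.

```python
import statistics

def within_control_limits(history, new_value, k=3.0):
    """Control-chart style check: return True if new_value lies within
    mean +/- k standard deviations of the historical samples."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(new_value - mean) <= k * sd

# Example: defect density (defects per KLOC) from past projects.
past_densities = [10, 11, 9, 10, 12]
print(within_control_limits(past_densities, 10))   # a typical value
print(within_control_limits(past_densities, 30))   # clearly out of control
```

A value outside the limits would trigger the kind of corrective management action described above.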
Level 5: Optimising
At this final level, the efforts of the whole organisation are centred on continuous process improvement. The organisation has the capability both to identify weaknesses and to improve the process accordingly, with the main aim of preventing defects in its software products. Data is used to perform cost-benefit analysis of new technologies and innovations that can be used to improve the process. Project teams focus their efforts on eliminating defects, analysing their causes and documenting their experience in order to prevent such defects from re-occurring in the future. The software-process capability can be defined as continuously improving because the organisation is consistent in its effort to improve its process capability, and in doing so it constantly improves its products and processes. The use of improvements in technology and processes is planned and incorporated into everyday activities.
Key process areas define the major matters of concern on which the organisation needs to focus its efforts in order to improve its software processes. Each level has a series of key process areas associated with it, which are organised into five sets of common features, defined by key practices, as shown in the figure below.
Figure 8.2: CMM architecture (Paulk et al, 1993)
Each key process area has a series of activities associated with it which, if performed together, lead towards achieving a set of goals that are essential for improving the process capability (Bamberger, 1997). Although the strategy for achieving these goals may vary from project to project, all the goals associated with a key area must be fulfilled in order to achieve an overall positive outcome.
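The ‘all goals must be fulfilled’ rule can be expressed as a small sketch. The two key process areas named below are genuine level 2 areas, but the goal wordings are shortened paraphrases of ours, not the official CMM text:

```python
# Each key process area maps to the set of goals that must ALL be met.
kpa_goals = {
    "Requirements Management": {
        "requirements baselined",
        "plans kept consistent with requirements",
    },
    "Software Project Planning": {
        "estimates documented",
        "commitments agreed and documented",
    },
}

def kpa_achieved(kpa, satisfied_goals):
    """A key process area counts as achieved only when every one of its
    goals is satisfied; a partial score does not count."""
    return kpa_goals[kpa].issubset(satisfied_goals)
```

For example, an organisation that has baselined its requirements but has not kept its plans consistent with them has not achieved Requirements Management, however good its other practices are.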
Dealing adequately with the key process areas is of vital importance for allowing the organisation to move up the maturity level scale. See Figure 23.2, ‘The CMM model levels and key process areas (KPAs)’, in Galin (2004, page 486).
Skipping levels is not an option because each one represents a necessary foundation for the next one up. Processes that are introduced without the required foundation usually fail when they are needed most, that is, when the system is under stress. Organisations should focus on the benefits that they can draw from improving their processes, and also on adapting the CMM framework to their actual needs. CMM represents a set of guidelines and should be regarded as an aid, not as a ‘silver bullet’.
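The no-skipping rule can be captured in a minimal sketch (the type and function names below are ours, purely for illustration):

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five CMM maturity levels in their fixed order."""
    INITIAL = 1
    REPEATABLE = 2
    DEFINED = 3
    MANAGED = 4
    OPTIMISING = 5

def valid_step(current: MaturityLevel, target: MaturityLevel) -> bool:
    """Levels cannot be skipped: the only valid improvement step is
    to the immediately following level."""
    return target == current + 1
```

So a move from Initial straight to Defined is rejected; the Repeatable foundation has to be laid first.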
8.2 Capability maturity model integration (CMMI)
The successful development and application of the software CMM (also known as CMM-SW) has been followed by other domain-specific CMMs such as:
• System Engineering CMM (SE-CMM), which deals with product development from analysis of requirements down to planning production lines and their operation.
• Trusted CMM (T-CMM) for software systems that require advanced SQA.
• System Security Engineering CMM (SSE-CMM) for security products that incorporate specialist security aspects and concerns.
• People CMM (P-CMM) which deals with human resource planning issues in software engineering projects, etc. (Galin, 2004).
All these specialist CMMs vary not only in their application discipline but also in their structure and definition of maturity. Since more than one CMM can be applied by the same organisation, the necessity for integrating them has emerged, and it has been embodied within the overall CMMI approach. The main CMMI model structure is given below.
Figure 8.3: CMMI model structure (Hefner and Draper, 2004)
Like CMM, CMMI includes five capability levels and an extended number of key process areas (25 instead of 18). Each process area includes specific objectives (goals), practices and procedures (Kasse, 2004). As it stands at the moment, CMMI is both a framework for integrating various CMM models and a ‘product suite’ consisting of the totality of models, training materials and appraisal methods associated with the framework. The components are organised into groups called constellations, for example CMMI for development (CMMI-DEV), CMMI for services (CMMI-SVC), etc. (Software Engineering Institute, 2007). Some of the main practical aspects of applying CMMI are covered in Goldenson (2003) and West (2004).
8.3 The Testing Maturity Model (TMM)
TMM is a historical model, in which testing is seen as evolving from being “Debugging-Oriented” (an activity to remove bugs) to “Demonstration-Oriented” (when it helps demonstrate that the software system meets the specification), later on “Evaluation-Oriented” (that is, integrated into the software life cycle) and finally “Prevention-Oriented” (the main goal is to prevent faults occurring, from requirements down to implementation; review activities are an integral part of test planning, test design and product evaluation) (Burnstein et al, 1996). TMM is based on the assumption that most organisations would go through these stages, and as such it is an assessment-based model rather than an improvement model. TMM has two major components:
- The Maturity Model
- The Assessment Model
Figure 8.4: Maturity levels and goals of the Testing Maturity Model TMM (Jacobs et al, 2000)
The maturity model is based on a structure of five levels and associated goals, like CMM. The maturity goals represent areas that need to be addressed in order to attain higher testing maturity levels. The similarity with CMM is based on the assumption that testing maturity evolves concomitantly with the overall software capability maturity.
TMM level 1 (Initial) is characterised by an ad-hoc testing approach, which follows coding and whose primary role is to help get rid of faults, and to demonstrate that the software system works.
TMM level 2 (Phase Definition) is characterized by a separation between testing and debugging. Testing is still performed on a post-execution basis but it becomes a planned activity whose goal is to demonstrate that the software system meets the specification.
TMM level 3 (Integration) has testing integrated throughout the software life cycle. The testing objectives are based on the user requirements specification, which is used for test case design and for defining the success criteria. At organisation level, testing is considered a professional activity which requires dedicated resources and specialist training programmes.
TMM level 4 (Management and Measurement) regards testing as a measured and quantified process. Review procedures are part of all the life cycle phases and are considered as testing and quality control activities. Defects are logged and testing components are being reused.
TMM level 5 (Optimisation, Defect Prevention and Quality Control) incorporates testing into the overall management structure within which testing costs and effectiveness are continuously monitored, and mechanisms are being developed to assist its continuous improvement. Defect prevention and quality control are part of the development life cycle and the testing process is based on statistical sampling and reliability (Burnstein, 1996).
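A minimal sketch of the defect-prevention idea at levels 4 and 5: defects are logged with their root causes, and the causes are ranked so that prevention effort targets the most frequently recurring ones first. The log fields and example entries below are invented for illustration:

```python
from collections import Counter

# Hypothetical defect log; the field names are our own.
defect_log = [
    {"id": 1, "phase": "design", "cause": "ambiguous requirement"},
    {"id": 2, "phase": "coding", "cause": "off-by-one error"},
    {"id": 3, "phase": "design", "cause": "ambiguous requirement"},
]

def causes_by_frequency(log):
    """Rank root causes so that prevention effort is aimed at the most
    frequently recurring ones first."""
    return Counter(entry["cause"] for entry in log).most_common()
```

Here the analysis would point the team at ambiguous requirements as the first prevention target, before any individual coding fault.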
TMM has many advantages, including a highly conceptual model that can be adapted to a wide spectrum of business environments and the fact that it closely follows CMM. Its main weakness is that, unlike CMM, it is much more succinctly documented, which can leave room for misunderstanding.
TMM has been used as the basis for a variety of improved testing models, among which some of the best known are the Test Process Improvement Model (Andersin, 2004) and the Software Testing Maturity Model (SW-TMM) (Huynh, 2002).
8.4 Future SQA prospects: challenges and capabilities
The future challenges envisaged for SQA follow some of the more general software trends, that is:
• “Growing complexity and size of software packages
• Growing integration and interface requirements
• Shorter project schedules
• Growing intolerance of defective software products.” (Galin, 2004)
Growing complexity and size of software packages is due to a variety of factors, not least the cheap availability of increased hardware power. This has led to the integration of complex algorithms, an increased number of inputs and outputs, as well as improved accuracy and decreased reaction times.
Growing integration and interface requirements are due to the increased collaboration of information systems and supporting services at organisation level (for example, Enterprise Resource Planning, Customer Relationship Management, etc.), as well as increased integration capability at software and hardware level.
Shorter project schedules are due to the increased use of COTS packages and integrated development environments, which support software re-use. At the same time, competition-driven development has led to tighter schedules for new version releases, with decreased time available for review and testing procedures.
Growing intolerance of defective software products is due to increased user awareness and user dependence on software products for everyday activities. As a consequence, the importance of SQA activities is constantly increasing, with quality requirements such as increased reliability and security gaining more predominant roles.
At the same time, there is plenty of scope for improving SQA as a consequence of general software engineering advances such as:
• “Expanded use of CASE tools
• Expanded use of professional standards
• Extended use of automated testing
• Expanded software reuse.” (Galin, 2004)
Expanded use of CASE tools and integrated software development environments helps support the effort of large software development teams, enables the automation of some of the software development activities, including some of the documentation process, and provides support for maintenance activities.
Expanded use of professional standards has had and continues to have a profound effect on the efficiency of the methods and techniques used for developing software products, as well as improved working procedures at organisation level.
Extended use of automated testing has led to improved performance due to increased accuracy, shorter integration and regression testing times and reusable testing components. Other benefits of automated testing include increased reliability, re-usability and a comprehensive approach to testing that covers every application component.
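As a small illustration of why automation shortens regression testing, the sketch below shows a reusable test case written once and then re-run unchanged on every release. The function under test and its expected values are hypothetical:

```python
import unittest

def discount(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - rate), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Once written, these checks re-run automatically on every
    release, which is what shortens regression-testing time."""

    def test_typical_rate(self):
        self.assertEqual(discount(100.0, 0.2), 80.0)

    def test_zero_rate(self):
        self.assertEqual(discount(59.99, 0.0), 59.99)

# Run with: python -m unittest <module name>
```

Adding the suite to an automated build means every change is regression-tested at no extra manual cost.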
Expanded software reuse offers the added benefit of incorporating software components that have been tried and tested, which reduces the development and testing efforts and increases standardization, leading to better system integration and a more modular approach.
Summary
In this final seminar we have analysed the main software capability maturity models (CMM, CMMI and TMM) and their role within the overall process of continuous quality improvement. We have concluded with a general appraisal of the main challenges envisaged for SQA as well as some of the major opportunities.
Reading Assignment: read Chapter 23, sections 23.4 to 23.6 (pages 485–498) and the Epilogue (pages 570–576) from the textbook. Total: 21 pages.
References:
Please note that many of the articles can be found in the UoL eLibrary.
Andersin, J., 2004, TPI – a model for Test Process Improvement, Seminar on Quality Models for Software Engineering, University of Helsinki, Department of Computer Science, 2004, [online] available from http://www.cs.helsinki.fi/u/paakki/Andersin.pdf
Bamberger, J., 1997, Essence of the Capability Maturity Model, IEEE Computer, Software Realities, 1997, vol. 30, issue 6, p. 112–113
Burnstein I., Suwanassart T. and Carlson C., 1996, Developing a Testing Maturity Model. Part 1. Crosstalk, Journal of Defense Software Engineering, 9, no. 8, 21–24, [online] available from http://www.stsc.hill.af.mil/crosstalk/1996/08/developi.asp
Burnstein I., Suwanassart T. and Carlson C., 1996, Developing a Testing Maturity Model. Part 2. Crosstalk, Journal of Defense Software Engineering, 9, no. 9, 19–26, [online] available from http://www.improveqs.nl/pdf/Crosstalk%20TMM%20part%202.pdf
Galin, D., 2004, Software Quality Assurance: From Theory to Implementation, Addison-Wesley, ISBN-13: 9780201709452
Goldenson, D.R. et al, 2003, Measurement and Analysis in Capability Maturity Model Integration Models and Software Process Improvement, CROSSTALK The Journal of Defense Software Engineering, July 2003, p. 20–24, [online] available from http://www.sei.cmu.edu/library/assets/goldenson-crosstalk.pdf
Hefner, R. and Draper, G., 2004, Applying CMMI® Generic Practices with Good Judgment, CMMI Technology Conference, 17-20 November 2003.
Huynh, D., 2002, Software Testing Maturity Model (SW-TMM), [online] available from http://www.cs.umd.edu/~atif/Teaching/Fall2002/StudentSlides/Duy.pdf
ISO, 2008, Introduction and support package: Guidance on the concept and use of the process approach for management systems, 2008, [online] http://www.iso.org/iso/iso_catalogue/management_standards/iso_9000_iso_14000/iso_9001_2008/concept_and_use_of_the_process_approach_for_management_systems.htm
Johnson, D.L. and Brodman, J.C, 2000, Applying CMM project planning practices to diverse environments, IEEE Software, Volume 17, Issue 4, July-Aug 2000, Page(s): 79-88
Kasse, T., 2004, Practical Insight into CMMI, Artech House Computing Library, 2004
Paulk, M.C.; Curtis, B.; Chrissis, M.B.; Weber, C.V., 1993, Capability Maturity Model, Version 1.1, IEEE Software, Volume 10, Issue 4, July 1993 Page(s):18 – 27
Software Engineering Institute, Carnegie Mellon, 2007, Capability Maturity Model Integration, Version 1.2 Overview, [online] available from http://www.sei.cmu.edu/library/assets/cmmi-overview071.pdf
Tian, J, 2005, Software Quality Engineering: Testing, Quality Assurance and Quantifiable Improvement, John Wiley and Sons, ISBN: 978-0-471-71345-6
West, M., 2004, Real Process Improvement Using the CMMI, Taylor & Francis, 2004