
A brief introduction to Software Development and Quality Assurance Management
Steven C. Shaffer
Copyright 2014. All rights reserved.
About the author
Steven C. Shaffer is an Associate Teaching Professor and former senior researcher at Penn State University Park, where he has taught computer science, AI, software engineering and database management systems for over a decade. Prior to joining the faculty at Penn State, Dr. Shaffer spent twenty years in industry as a software and quality assurance engineer and database administrator. Dr. Shaffer was one of the first people in the country to utilize a relational database for commercial applications. He achieved the designations of Certified Systems Specialist in 1989 and Certified Software Quality Engineer in 1995. He also founded Decision Associates Inc., which specialized in pharmaceutical software applications; he sold the company to pursue his doctorate and enter the world of academia.
Introduction
This brief introduction to software quality assurance is meant to fill the gap between having no information and the massive tomes that one will encounter when performing a search for the topic on Amazon. My goal with this short book is to give you some basic information that you can use to decide what other information you might need; it’s meant to be useful either as an executive summary or as an introductory text for students. Toward that end, I try to keep the explanations as brief as possible while still maintaining accuracy.
What is software engineering?
At its best, software engineering is a systematic approach to appropriately solving problems with computer software. It is systematic because, although there is some “art” or skill involved, we still need some structure to our approach in order to keep control over the results. It needs to be appropriate because we want to make sure that we don’t use an approach that is too big for the size of the problem (e.g., an Oracle database to store your friends’ phone numbers); however, we also want to make sure that we don’t underestimate or trivialize the problem. Software engineering is about solving problems, not (necessarily) using the “coolest” technology. Finally, we have to stay on track by constantly checking back to see if we are still solving the original problem (avoiding “scope creep”).
Keep in mind that not all problems are best solved through the use of computers (e.g., “Should I take a job with a big company or a start-up?”). Even if a problem can be aided with a computer, remember that computerizing inefficient processes just makes for faster inefficiencies. Sometimes a systems analysis step is included with software engineering, sometimes not. Systems analysis is a formal process of analyzing how an organization works and what process enhancements (computerized or not) can be applied to make the organization run more smoothly. Analysis is breaking something (a problem or task) down into its component parts, whereas synthesis is putting them back together again; once the analysis is complete, the next step is to put the pieces together into a cohesive whole. Even where the use of computers is appropriate, most problems can’t be solved by throwing computers at them; you have to think of a solution and then implement it with software. There is an old adage that, to the man with a hammer, everything starts to look like a nail. It’s important not to give in to this tendency with computerization.
“Bugs”
“Bug” is a euphemism for the result of an error on the part of the designer or coder; bugs are not autonomous entities which crawl into your code by themselves! Try only to use the word “bug” in quotes, unless you are in entomology class! The Institute of Electrical and Electronics Engineers (IEEE) differentiates three kinds of software mishaps: an error is the mistake the human makes, such as programming a loop improperly; a fault is the manifestation of that mistake in the software, such as code that attempts to divide by zero; a failure is any departure from required system behavior, such as having your interface “freeze” and thus needing to reboot your computer. In the end, every “bug” is simply a human mistake; we like to moderate that notion by couching it in less direct language, but in the end, if there is a “bug” in a system, someone put it there.
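To make the distinction concrete, here is a minimal sketch (the function and scenario are invented for illustration): the programmer’s error produces a fault in the code, which surfaces as a failure when the program runs.

    # Hypothetical sketch of the IEEE error/fault/failure distinction.
    def average(values):
        total = 0
        count = 0
        for v in values:
            total += v
        # Error: the programmer forgot to increment count inside the loop.
        return total / count  # Fault: the code attempts to divide by zero

    try:
        average([1, 2, 3])
    except ZeroDivisionError:
        # Failure: the system departs from its required behavior.
        print("failure: no average could be produced")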
Bad software has a number of negative consequences: at its most benign, bad software aggravates people, as when an online shopping cart makes you re-enter an entire screen of data because you mis-entered one part. Bad software also wastes time, causing a loss of productivity; how often have you been on the phone with customer service and had them tell you how slowly their systems are responding? This not only slows down the organizational staff (requiring more staff to be hired), but it also aggravates customers, possibly leading to lost sales. Even worse, if your bad software is mission-critical, it might result in a lawsuit for breach of contract. In a few cases, bad software has put companies out of business. In the worst case, bad software has been known to kill people, as in the famous case of the Therac-25 radiation dosing machine.
If you would like current examples of the consequences of bad software, do a search for “ACM risks forum” from your favorite search engine.
What is good software?
It’s not as easy to define “good” software as you might think. One possible measure is fitness for purpose, meaning that the software works effectively for what it was designed to do. Another way of measuring software might be market value, especially in a situation where someone is looking to invest in the company that makes the software. Some people look at the quality of the software as a reflection of the quality of the processes used to create it. Engineering-oriented people might look at software quality as conformance to specification. One might even begin to wonder if software quality is definable at all.
Although there may not be universal acceptance of a single definition of quality software, there are some measures of software quality that are generally accepted. Correctness is an example; if a program fails to add up a column of numbers correctly, then usually it is not considered to be high quality. Reliability is another generally accepted quality measure; for example, software that is highly accurate but only runs once in a while would usually not be considered good. Efficiency is a word which is often used, although sometimes used incorrectly; it is the measure of how many resources are used in order to complete a task. In the modern world of cheap computing power and cheap RAM, efficiency is not paid as much attention as it was in the past (except in processes which utilize big data or run thousands of times per second). Another measure, usability, is the ability of the target audience to obtain the value of the software through its interface design.
The measures used above are fairly common, and can be looked at as external to the process of software development. There are also internal measures of software quality; these are measures used by software engineers that may not be visible to the “user” public, and are often referred to as metrics. One such metric is integrity, which may sound strange as a measure of software; however, this usually refers to the internal consistency of a program’s design and/or method of coding. Programs with this sort of integrity are easier to maintain, which brings us to another internal metric: maintainability, which is a measure of how easy it is to open up the source code and make a change without causing some other part of the system to break. Portability is a measure of how easy it is to move the software from one platform to another (e.g., from Windows to Mac). One can also consider reusability, which is a measure of how well the program is coded with respect to re-using certain modules or pieces of code (thus reducing current or later development time). Another metric, interoperability, measures the ability of the program to “play nice” with the other programs running on the system. For example, if a word processing program used its own clipboard space and did not allow the data contained in it to be pasted to other programs on the computer, the program would be considered less interoperable.
Testability is a metric often used by software quality specialists and is a measure of how easy it is to test various parts of the system. As a contrast, imagine the computer Deep Thought from the book The Hitchhiker’s Guide to the Galaxy; this computer was designed to answer the Ultimate Question of Life, the Universe, and Everything. After millions of years of calculation, it spit out the number 42. Whether correct or not, this system is pretty hard to test. We will be returning to the notion of testability throughout this book.
Return on investment
For management, return on investment (ROI) may be the single most important aspect of the quality of a software system. Conceptually, this is the amount of value (money) either generated or saved divided by the cost of developing the software. If the result is above 1.0, then the product has paid for itself. However, there are as many ways to calculate ROI as there are consulting accountants, because ROI can include many aspects of the overall business environment. With regard to the numerator, this can include such items as increased sales, lower employee turnover, reduced time spent on routine tasks, totally new capabilities, and totally new markets. For the denominator, this might include cost of development, cost of new equipment, training, and lost time.
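As a rough sketch of the arithmetic (every figure below is invented for illustration; the real difficulty is deciding what belongs in each term):

    # Hypothetical ROI sketch; all figures are invented for illustration.
    value_generated = 250_000  # increased sales, time saved, new markets, ...
    cost_incurred = 180_000    # development, equipment, training, lost time, ...

    roi = value_generated / cost_incurred
    print(f"ROI = {roi:.2f}")  # above 1.0 means the product has paid for itself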
Types of software
Software can come in various shapes and sizes, and one way to categorize the differences is as a series of continuums; for example, custom software versus commercial off-the-shelf (COTS) software. Some software is custom built from nothing but ones and zeroes, but this is very rare; usually there is something upon which the development is based, including the operating system, the language environment, callable libraries, etc. On the opposite end of the spectrum, COTS software is distributed in the same state to everyone who wants it; some customization may be allowed through selections in a menu, etc., but generally these allow for small variations in how the software operates (e.g., displaying the time in military time or not). The broad spectrum of difference in this dimension opens up a market for developers called system integrators, who will take various systems and customize their use for a particular purpose.
Another dimension of software type is human-interactive versus embedded software. Embedded software is the type of software used to control components in, for example, military and environmental control systems. Since we have not yet (as far as we know) developed Skynet (an autonomous digital intelligence), in the end all software involves human interaction. For example, a video game will need to interact somehow with the video monitor, but this is so many layers deep that no one would consider a video game to be an embedded system. However, a video controller which interacts primarily with the operating system and the video hardware might be seen as an embedded system; it is primarily an aspect of how the humans break the project down, and what they consider the scope of the development.
Another category of software is single-user versus multi-user, although single-user systems are fast becoming an artifact of a bygone age. Consider a word processor: in its basic use, this is a single-user system; however, modern word processors allow for real-time interaction among several users connected online.
The changing landscape
As the old saying goes, nothing is permanent except change. This is clearly true of the software environment. Technologies and user expectations change quickly; however, at its core all software is the same (the proverbial ones and zeroes), and this means that there are general tools and techniques of software quality engineering that have been developed and can be utilized on diverse projects.
The Software Development Lifecycle
In this chapter we will cover the software development lifecycle (SDLC), which is the series of steps that software developers go through (or perhaps ought to go through) to develop software. Through the decades various approaches to the SDLC have been developed, and each has its positives and negatives.
Within the context of this discussion, a process is a series of steps involving activities, constraints, and resources. Having a process to do something is an advantage, especially if you want consistency of the output. For example, once you have mastered a certain recipe, you might always be sure to make the dish the same way in order to have the best, most consistent outcome. However, this only works if you already know how to make the dish. If you don’t know, typically you will have to experiment.
A similar thing is true with developing software. Some projects are just obvious variants of other software that we may already have developed; for example, creating a data entry screen is considered fairly straightforward (at least, once you have done it once or twice). The first time the United States sent men to the moon, that software had never before been written; however, other software for unmanned missions, satellites, etc. had been written, so it wasn’t the case that the programmers had to start from scratch. When cell phones were invented, new software had to be written for these devices, perhaps from scratch; as this technology became more mature, the basic aspects of the software became standardized, and developers could work on adding features.
Thus, within any particular domain (e.g., moon launches or cell phones), as experience is gained, certain aspects of development can be turned into a repeatable process. It’s important to capture prior experiences (both good and bad) in order to be able to eventually systematize most of the development process.
Software development as an abstraction
Software development involves abstraction, which is “the act of considering something as a general quality or characteristic, apart from concrete realities, specific objects, or actual instances” (http://dictionary.reference.com/browse/abstraction). Software systems are typically abstracted into an interaction between objects, events and activities. An object is a component of the system, which might be students, automobiles or cans of mushroom soup. An event is something that happens within the software, for example registration, receiving parts, or receiving an order. An activity is something that the system (or the user) is trying to do, such as printing a roster, assembling a car, or shipping a crate of soup.
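A minimal sketch of this abstraction in code (the names are invented, using the registration example above):

    # Hypothetical sketch: a system abstracted into objects, events, and activities.
    from dataclasses import dataclass

    @dataclass
    class Student:  # an object: a component of the system
        name: str
        registered: bool = False

    def register(student):  # an event: something that happens in the system
        student.registered = True

    def print_roster(students):  # an activity: something the system does
        for s in students:
            if s.registered:
                print(s.name)

    roster = [Student("Ada"), Student("Grace")]
    register(roster[0])
    print_roster(roster)  # prints: Ada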
When designing software, it’s important to define the environment and the boundaries. The environment includes aspects that can affect the system but are outside of the control of the system, e.g., a power outage, a larger than usual volume of orders, parts not being received on time, or other computer systems (e.g., the Internet). It’s important to identify those elements of the system that are on the boundary between the environment and the system itself, as these are the areas that are likely to cause problems in specification. For example, is it part of your software project to sense if the network goes down? By carefully specifying the boundaries of your project, you can significantly constrain your list of “things to do.”
There is no way for a human to keep all of the details of a large software project in his/her head at one time. The only way to manage the size and complexity of a large software project is to develop levels of abstraction: define at a high level how the pieces interact, then “drill down” to the details of each piece. This is the most common method of using abstraction in software development, allowing the process to be reproduced across multiple projects.
The big question is whether or not software development is a reproducible process. It’s common to use house building as an analogy for software engineering; however, unlike houses, each software project is unique, because if the project were the same as a previous project, then you would just copy the files (as with commercial off-the-shelf software) and not develop anything at all. So, each software project is unique, but the question becomes: how unique? In the simplest case, two projects can be unique only cosmetically, for example when two web sites are the same except for the colors, the name of the company, and the images used. At some point, people realize that it is possible to abstract out the text and images and turn these into data. A different type of reuse might be with something like a scheduling application; scheduling nurses, for example, may be different from scheduling police officers, but how different is it? Sometimes it’s easiest (at first) to simply copy the nurse scheduling system and modify it to make a police-scheduling system; but then what will you do when you need to schedule shift workers in a manufacturing plant? At some point a decision needs to be made regarding making the program generic (and data-driven) versus making a different version for each potential domain. Making the program generic requires making it more abstract, which can initially be harder to do, but in the long run may make the product useful to more people without customization.
Another version of abstraction used in software development is the use of design patterns, which are methods of solving certain common programming problems.
That software development involves abstraction was recognized early on, and when abstractions are employed, it is often possible to extract a reproducible process (as when manufacturing something like pencils). So, as far back as 1970, people tried to abstract out the steps of developing software; the first such model was called the waterfall model. This approach, which was adopted by the military, requires that each step (for example, detailed design) be completely finished before the next step (for example, development) begins. While this seems to make sense, because each step clearly feeds into the next, there are two main problems with this approach: (1) it takes too long to complete the development, and (2) by the time the development is completed, the requirements are likely to have changed.
Based on the experience with the waterfall model, various versions of spiral approaches were developed. In this approach, the steps (analysis, design, development, testing) are done in shorter bursts with the understanding that the team will cycle back and either fix issues or add more functionality. This approach allows the team to solve the “big hit” items first, then work to fine-tune the product on successive iterations. This approach is scalable in that the magnitude of the spiral (the length of time between iterations) can be scaled to the project; large projects might have 6-month iterations, small projects may have monthly iterations.
The success of the spiral approach led software engineers to extend the concept of iteration in development even further, resulting in several methods which together constitute lightweight development models. These models, which vary in the details, use an even smaller development iteration than spiral, sometimes releasing new versions weekly or even daily. The advantage of these approaches is fast time-to-market and sticking tightly to what the users (who act as consultants on the project) want. There is some danger that software built in this fashion may contain software errors (“bugs”) or be hard to maintain; some of the lightweight approaches (e.g., Extreme Programming) work hard to avoid these problems by requiring the developers to fix (refactor) problems with code as they work through the development cycles. It’s also probably not a good approach for high-risk projects, such as nuclear power plant operations or strategic defense systems. Also, these approaches tend to rely heavily on testing (which is good), but may also result in “coding to the test” (which is bad). Lastly, these approaches tend to devalue documentation, which may cause problems when staff turns over.
Using any of these development processes involves overhead, which is resources (usually time) spent not specifically on the software itself. This drain of resources usually manifests itself in meetings, which may be necessary sometimes, but are often poorly run. Alternatives to meetings are email and smaller meetings (perhaps one-on-one), depending on the purpose. As the number of people on a development team increases, the number of lines of communication between team members grows quadratically (with n people there are n(n-1)/2 possible pairs); this makes it very difficult to organize how information is disseminated in a project. The use of software contracts (discussed later in this book) can enable a large project to be handled more like a collection of smaller projects, thus reducing information overhead.
Stages in the software development life cycle
The SDLC tracks and manages the steps necessary to create and maintain software from initial “conception” until the software is “retired” (removed from production use). By studying past projects, practitioners try to identify the combination of factors that make good software projects so that those factors can be reused. However, what makes a “good” software project? Remember that there are many possible definitions of what good software is, so it’s not a surprise that there are many possible ways to see what makes a good software development project.
One positive aspect of a good software development project is reduced cycle time; this means trying to minimize the time between getting the user requirements and delivering the product for production use. Another valuable aspect of a project is reduced development cost (although within some level of quality control). We also like to reduce project risk, which is the chance that the project will fail, for example by not completing on time (or at all) or by resulting in some catastrophic failure (such as killing someone).
There are some standard steps or stages to the SDLC; however, depending on the development approach used, some of these may be abbreviated or skipped altogether. However, it’s instructive to know what each of the stages means. Requirements analysis is the stage of the SDLC where users and stakeholders are interviewed to obtain their view of the needs of the project. If there is more than one stakeholder, often there will be competing and even contradictory requirements, and thus this stage of the SDLC requires a tolerance for ambiguity. Next, the requirements definition stage combines the user requirements into a single, cohesive, document that should optimally be signed and agreed to by each stakeholder. This stage of the SDLC requires considerable “people skills” and perhaps strong political and negotiation abilities. The next stage is high-level design, the goal of which is to turn the requirements definition into a design which, when implemented, will satisfy the needs of the project. It is here where prototyping can be of use, both to test technical issues of the design and also to check with the stakeholders regarding workflow and user interface design. Detailed design is the step that takes the high-level design and “drills down” into the details, such as data table and field names, module or class naming conventions, etc. Only after all of this has been completed (at least in the “standard” SDLC model) does program development begin, where the actual production code will be developed.
Throughout all of the above steps, there should be sufficient documentation from each stage so that answers to questions, once resolved, are available to everyone on the project. The best way to manage this is online, using either one of the commercial document management systems or perhaps a homegrown wiki (see Wikipedia for more about wikis).
Once development has begun, testing can also begin. Unit testing is testing of relatively small pieces of code, and is usually performed by the programmer him- or herself. Integration testing is used to see how several smaller pieces of code (often written by more than one person) work together; a project leader or other representative of the development group often does this. At this point, a system walkthrough might also be performed, wherein several software engineers go step-by-step through the developed code, having the programmer narrate what happens in each section of the code. Either additionally to or instead of the walk-through, a code review might be undertaken; this is where another software engineer steps through the code him- or herself, doing a “silent walkthrough” so-to-speak. Any questions or issues are brought to the original developer for rectification. The next stage of testing is system testing, where the entire program is executed, usually by the end users, but this is a good place for professional testers to be utilized.
Once all of the testing is completed, it’s time for system delivery, which includes installation (if necessary), training, and support. After these are completed and the system is up and running, it enters maintenance where errors are fixed and enhancements are developed, as determined by the stakeholders.
Prototypes
A prototype is an incomplete version of a final product that is used either to test a design concept or to check with a user community regarding the details of a user interface. Creating a prototype before embarking on full-scale development can save a lot of time and money by avoiding unworkable designs and developmental blind alleys. Any aspect of the system that is high-risk or high-uncertainty is a good candidate for a prototype.
The SDLC and quality assurance
There’s an old saying that you can’t inspect quality into a process, which is a way to remember that quality assurance needs to be integrated within the SDLC from the beginning of the project; it’s pointless to postpone it until the software is about to be released (that is, QA is not just testing). Although more details will be given later, it’s a good idea to keep in mind the following two aspects of any step of the SDLC: verification asks: does this program do the job right? whereas validation asks: does it do the right job? For example, you may develop an absolutely perfectly operating accounts payable system, but what if an accounts receivable system is what is needed? In this example, the program would pass verification, but not validation. Although this is an egregious example, the problem is more common than you might think, especially if the people developing the software tests are the same people that developed the programs themselves. The reason for this is that the very misconceptions that were used to develop the code are the same ones that are used to develop the test cases. This is why it is never a good idea to allow the programmers to be the final testers of their code (more on this later).
Every step in the SDLC should have a well-defined, measurable deliverable. By making sure that the deliverables are measurable, we differentiate them from wishes. “The system should be fast” is a wish. “Switching from one page to another should never take more than 2 seconds” is measurable, and therefore useful.
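A measurable deliverable can even be checked automatically. Here is a minimal sketch (the page-loading function is a hypothetical stand-in) that turns the 2-second statement above into a test:

    # Hypothetical sketch: an automated check for a measurable requirement.
    import time

    def load_page(name):  # stand-in for the real page-switching code
        time.sleep(0.5)

    start = time.monotonic()
    load_page("orders")
    elapsed = time.monotonic() - start
    assert elapsed <= 2.0, f"page switch took {elapsed:.2f}s (limit: 2s)"
    print("requirement met")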
Like any good engineering discipline, software quality assurance professionals have attempted to come up with metrics, which are methods of measuring “goodness” or “badness” of a software system. The idea behind software metrics is to hypothesize a relationship between code and some external measurable attribute (e.g., total cost of development) and show that the aspect of the code is a good indicator of the external attribute. We also need to be able to rank things in order of their “goodness” in order to be able to determine when one program is “better” than another. This gives us a rational basis for progress in software development. Good software metrics are hard to come by because programming is essentially a human activity, and as such it is difficult to contrive repeatable experiments wherein the important variables are isolated. Many attributes of programs (e.g., understandability) have to do with what is essentially a subjective appraisal, and not every combination of desirable properties results in a desirable whole.
Some notably bad software metrics are the number of compiler runs and the number of lines of code developed. These are left over from a bygone era when computer time was more expensive than developer time, and when code reuse was very uncommon. In modern software development, it’s possible that a 100-line program which uses well-vetted library functions will be of higher quality than a 10,000-line program written from “scratch”. Of course, this is not always the case, as some library routines may be less optimal for a specific purpose than a custom-written solution. There is an intuitive sense to what makes one program more complex than another, and software complexity measures try to quantify that. However, we may be trying to measure something that we don’t really fully understand (i.e., the act of programming).
One software metric which has survived the test of time is cyclomatic complexity, which is the number of paths (due to conditionals, loops, etc.) through a piece of code. Program complexity usually resides within a few overly complex procedures. This is clear to any code jockey: a large proportion of most programs is scaffolding, code that is there just to navigate to the “meat” of the processing. (In a well-structured program, about half of the code is scaffolding.) Often, when an inordinately complex piece of code is encountered, it is because the programmer missed a level of abstraction, and it is a good candidate for remediation.
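As a minimal illustration (the functions are invented): each conditional below adds a path, so the first version has a cyclomatic complexity of four, while the second moves the branching into data and has only one path:

    # Hypothetical sketch: reducing cyclomatic complexity via a lookup table.
    def shipping_cost_v1(region, weight):
        if region == "US":  # each branch adds a path through the code
            rate = 5.0
        elif region == "EU":
            rate = 7.5
        elif region == "ASIA":
            rate = 9.0
        else:
            raise ValueError(region)
        return rate * weight

    RATES = {"US": 5.0, "EU": 7.5, "ASIA": 9.0}

    def shipping_cost_v2(region, weight):
        # One path: the variation now lives in data, not in control flow.
        return RATES[region] * weight

    print(shipping_cost_v2("EU", 2))  # prints: 15.0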
Requirements / Specifications
Requirements and specifications are similar and related. Both are lists of features or capabilities of the system being developed. The difference is that requirements are usually derived from the end users or stakeholders, whereas specifications are usually created by software professionals, based on the requirements of the user. Whether you have requirements from a user/stakeholder community or specifications written by a professional usually depends on the nature of the project you are on. For example, a Department of Defense project will usually have a formal process of requirements gathering, whereas in a startup company the “specs” might amount to “we ‘specs’ it to be done by Friday.”
However, it’s important to have requirements or specifications because if you don’t know what it’s supposed to do, how will you know when it’s working? And, perhaps even more important, how will you know when you’re finished? Some 10%-20% of projects fail due to incomplete or constantly changing requirements, and many of the other reasons projects fail can be traced back to problems with specifications. Also, it’s important to recognize that specifications and requirements are subject to verification and validation just like the other steps in the software development life cycle.
A good requirements document will contain more than just a few screen shots and a two-paragraph description of what the product is supposed to do. Instead, it should contain the following information. The features and functions of the system are what people usually think of when they think of requirements; that is, a list of what the software will do once it is implemented. The physical environment in which the system will run might be anything from a toaster to an iPhone to a warehouse-sized operations center with detailed climate control. Interfaces to other systems include modules plugged into other systems, use of a database management system, and of course, the Internet. Human-computer interfaces (HCI) usually need to be defined, unless the program will exist only as an embedded system. Security standards need to be identified and addressed. It’s also important to determine the expected inputs and outputs of the system, and to at least sketch out how those inputs will be turned into the outputs. For example, the inputs may be debits and credits, and the outputs might be a balance sheet and income statement.
In addition to the above, a good requirements document will also address a list of “softer” aspects of the project. What is the quality assurance plan? Who will work on the project (specific people or job classifications)? Where will they work? What development equipment will be necessary? If this system will replace an old one, how will the transition be managed?
Requirements are subject to verification and validation, just like any other aspect of a software development. You can verify requirements by doing the following: (1) Make sure that all references (for example, between data elements) are connected (no “broken links”); (2) Make sure that each statement is consistent with the others in the document (e.g., if something is called “X” somewhere, make sure it isn’t called “Y” or “x” somewhere else); (3) Make sure that each statement in the document is measurable (requirements, not wishes).
Requirements are often transmitted in written language (e.g., English), but might also be expressed using mathematical languages, diagrams, prototypes or pseudo-code, which looks something like a programming language but without the detailed syntactic aspects. Sometimes a formal specification approach is used; currently the most popular is the Unified Modeling Language (UML). The problem with such formal specification methods is that they are at least as hard to learn, and require the same attention to detail, as the programming languages themselves, so the person doing the specifying might as well just start coding!
One of the challenges of requirements gathering is to take diverse, even contradictory, desires from the end-users and turn these into a coherent list of specifications. Often this will require negotiations with the end-users and also consultation with other stakeholders, e.g., upper management of the organization. In the end, you want to make sure your requirements are complete, consistent, plausible and correct. If not, it’s unlikely that your users will be happy with the results of your project. However, it’s very difficult to obtain requirements from non-software professionals for several reasons: (1) many people try to avoid the responsibility for ever making any decisions at all, (2) the users are afraid they’ll forget something, so they postpone “finalizing” the document essentially forever, (3) the users don’t really know what they want, or (4) the users know that what they really want is impossible, but they don’t want to admit it, so they keep making statements that sound like requirements but aren’t. Software developers tend to be people who do not like ambiguity, and so the task of obtaining requirements is, for them, one of the definitions of hell. It is here that someone schooled in an MIS (Management Information Systems) or IST (Information Sciences and Technology) program may be of use. These people can act as a bridge or interpreter between the end-users and the software developers. This means, however, that the job requires a strong knowledge of both the technology and the domain itself.
System Design
Once requirements or specifications are available, the system design process begins. System design is a creative process of transforming the presented problem into a solution. Designs come in all flavors and levels, from a conceptual design to a detailed technical design of an algorithm. However, all designs have the following attributes. Boundaries: what is within the system, and what is outside it? Entities: what are the actors within the system? These could be people, multiple computers, and/or equipment. Attributes: these are what distinguish one entity from another; for example, two users might be differentiated by their user ids, or two servers by their IP addresses. Relationships: these are the (possibly dynamic) connections between entities in the system; for example, a user might have the relationship “admin” on one computer, but not on another.
The house-building metaphor is perhaps overused with regard to software engineering, but since it is useful and somewhat traditional, we will include it here. When building a house, the requirements or specifications might include a list such as 4 bedrooms, 3 baths, deck, etc. Having these, there are various levels of design that must take place. The rendering (drawing of the house) would be analogous to high-level design. Creating the floor plan would equate to a decomposition (see below) of the high-level design into a somewhat lower level design. Drawing the blueprints, including such things as where the plumbing connects to the sewer, is similar to detailed design in software development. Sometimes the types of design are bifurcated into conceptual design (“what will it do?”) and technical design (“how will it do it?”).
Design is an ill-defined task; we almost never know everything about a system until after we build it (and maybe not even then!). This is why many of the classic software development models fail, and why approaches such as iterative development and Extreme Programming have become more popular.
Decomposition is the breaking down of something into its constituent aspects, and is one of the most basic aspects of software development. Modular decomposition breaks the larger program into smaller modules that are pieces of code designed to implement one aspect of the overall system. Data-oriented decomposition looks at the entire system as a series of read/write steps that use and update a data source. Event-oriented decomposition sees the overall system as a sequence of events, sub-events, and sub-sub-events. Object-oriented decomposition sees the entire system as a series of interacting objects, each with defined tasks and duties.
You can take any system and break it down into arbitrarily coarse or fine sub-parts. These parts may be called systems, subsystems, modules, functions, components, etc. For now, we will refer to these generically as modules. A module is well defined if it has no unnecessary inputs.
Keep in mind, though, that in software development, if we keep breaking tasks down indefinitely, that’s how long it will take to deliver the software!
Program design
Somewhere in the netherworld between system design and coding lives the program design. Sometimes it is embedded in the detailed system design; sometimes it’s on its own. The program design often falls naturally out of the system design; that is, the system design pushes the program design in a certain direction. Thus you need a unified approach from system design, through program design, and into coding. It’s a good idea to avoid mixing your metaphors: if your system design is event-driven, your program design data-driven, and your coding object-oriented, you will find yourself between a rock and a hard place without a paddle. This will make communication difficult and development much harder than it has to be.
Most modern software would be impossibly complex if you had to write it using nothing but 1s and 0s. Thus, virtually all software is developed using the concept of layering, which is the building up of software from smaller parts (the complement of decomposition). What we need to develop is software with well-defined inputs, processes and outputs that we can use and rely on as the “building blocks” of the systems we develop. For example, almost all development relies on compilers or interpreters; where would our projects be if these were flawed? (Sometimes, of course, compilers do contain flaws, since they are nothing but software written by humans. Usually, though, these flaws are found and removed quickly.)
There is a well-known result from psychology that people have a working memory capacity of 7 +/- 2 “chunks” of information. Thus, if your designs (programs, data, menus, etc.) require the users (programmers, etc.) to juggle more than about seven things in their minds at once, your design will not be successful. In addition, we want to minimize the cyclomatic complexity of a program, defined earlier as the number of paths (due to conditionals, loops, etc.) through a piece of code.
It’s important to realize that all of the easy software has already been built, and no one gets paid to develop Programming 101 kinds of programs. All real-world software development involves interfacing with other software artifacts (operating systems, database management systems, print drivers, etc.), and almost always requires balancing such factors as hardware resources, speed, storage, usability, cost, and maintainability.
Two concepts that help to explain what good design is are coupling and cohesion. Coupling is a measure of how much one module is dependent on another; for example, if one module is changed, what are the chances that this will affect another part of the system? Cohesion is a measure of how much a system’s components all work toward the same overall goal. We want to minimize coupling and maximize cohesion; by doing so we develop systems with design integrity, which means that the system as designed will do everything that is needed and nothing that isn’t.
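A minimal sketch of the coupling distinction (the shopping-cart example is invented): the first pair of functions is tightly coupled through shared global state, so a change to one risks breaking the other; the second pair communicates only through parameters:

    # Hypothetical sketch: tight vs. loose coupling.
    cart = []  # shared global state couples the two functions below

    def add_item_bad(name, price):
        cart.append((name, price))

    def total_bad():
        return sum(price for _, price in cart)

    # Loosely coupled: each function depends only on its own inputs,
    # so either can change internally without affecting the other.
    def add_item(cart, name, price):
        return cart + [(name, price)]

    def total(cart):
        return sum(price for _, price in cart)

    print(total(add_item([], "apple", 1.50)))  # prints: 1.5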
One effective method for developing high-quality systems is to use design by contract, which looks at each interface between modules as a contractual arrangement. After the contract is set up, any developer should be able to write the code to conform to the contract. This forces rigor in the development process, and is a half-step toward program proving. The problem, as always, is trying to get people to be specific. Poorly worded design contracts are like poorly worded legal contracts: so full of potential interpretations as to be mostly useless.
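A minimal sketch of the idea using assertions (the stock-allocation contract is invented): the caller promises valid inputs (the precondition) and the module promises a valid result (the postcondition):

    # Hypothetical design-by-contract sketch using assertions.
    def allocate_stock(requested, on_hand):
        # Preconditions: the caller's side of the contract.
        assert requested > 0, "contract violation: requested must be positive"
        assert on_hand >= 0, "contract violation: on_hand cannot be negative"

        allocated = min(requested, on_hand)

        # Postcondition: this module's side of the contract.
        assert 0 <= allocated <= requested, "contract violation in result"
        return allocated

    print(allocate_stock(5, 3))  # prints: 3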
An aspect of program design that is often skipped is exception handling, which is a definition of how the software will react in the case of something going wrong. This could be something like a dropped Internet connection, a database integrity violation, or bad user input. Professional software checks every function for successful completion before continuing. Well-designed and well-coded systems will always handle errors effectively.
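A minimal sketch (the input-parsing function is invented) of anticipating one of the failure modes named above, bad user input, rather than letting the program crash:

    # Hypothetical exception-handling sketch: recover from bad input.
    def read_order_total(raw):
        try:
            total = float(raw)
        except ValueError:
            print(f"invalid amount {raw!r}; please re-enter")
            return None
        if total < 0:
            print("amounts cannot be negative; please re-enter")
            return None
        return total

    print(read_order_total("19.95"))  # prints: 19.95
    print(read_order_total("abc"))    # warns, then prints: None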
Testing
As with any field, there is specific terminology used within software quality assurance and testing. A production system is the system currently being run live by the end users. A development system is a mirror of the production system, with some current changes that may eventually be released into the next version of production. You always want to avoid testing software in production; it can cause data corruption and end user annoyance. A regression test is a test performed on the changed software to confirm that only the things you wanted to change have changed; another way of thinking of this is that a regression test helps to ensure that you didn’t break something while fixing something else. A delta is a change that happened between production version ‘n’ and production version ‘n+1’. Deltas are usually managed by source code control systems, which are software products that allow you to keep track of changes to the source code and manage the delivery of production versions.
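A minimal regression-test sketch (the discount function and its expected values are invented): results recorded from the last known-good version are re-checked after every change:

    # Hypothetical regression-test sketch: re-run known cases after each delta.
    def apply_discount(price, code):
        rates = {"SAVE10": 0.10, "SAVE20": 0.20}
        return round(price * (1 - rates.get(code, 0.0)), 2)

    # Expected outputs recorded from the previous production version.
    REGRESSION_CASES = [
        ((100.0, "SAVE10"), 90.0),
        ((100.0, "SAVE20"), 80.0),
        ((100.0, "BOGUS"), 100.0),
    ]

    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        assert actual == expected, f"regression: {args} gave {actual}"
    print("all regression cases pass")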
The quasi-humorous definitions of alpha, beta and release versions are: Alpha test: too buggy to be released to the paying public. Beta test: still too buggy. Release version: alternate pronunciation of “beta test.” The real meanings of these terms are that an alpha test is done in-house (in the organization that developed the software), whereas a beta test is testing by trustworthy customers (usually long-term customers or system integrators). The release version is the version sent out to all of your users. A pilot test is where a new system is used, but the results are not relied upon. This is closely related to a parallel test, where a new system is run alongside the old system and the results are compared.
Reliability is a measurement of how likely a user is to obtain an answer from a system; this is often confused with accuracy, which is a measure of the correctness of the answers. An old example of this is that a watch which is always five minutes slow is reliable, but a watch which has stopped is more accurate (because it’s right twice a day). Mean time to failure (MTTF) is the average amount of time a system runs until it stops running due to an error. Usually MTTF is used for hardware; for example, a disk drive might have an MTTF of 300,000 hours. When developing mission-critical systems, it’s important to identify the MTTF for the system as a whole.
There are several types of testing that are (or should be) performed on software products. Unit testing is testing a function (method, etc.) or a small program to be imported or included into a larger context; the developer usually does this test. Integration testing involves testing the interfaces between a module (function, class, etc.) and other pieces of the system; a project leader often does this, unless a separate testing team has been identified. Note that, with approaches such as data-centric design, each module is linked directly into the whole system, and there is no hierarchy. System testing is often performed by a separate testing department and/or testing specialist. Depending on the development life cycle approach, this could be done often or only once or twice a year, and is usually connected to release dates. Acceptance testing is usually done by the user community, and the idea is to give the users a “warm and fuzzy feeling” that the system does what they need it to do. It does not take the place of system testing!
The test set given above is almost universally accepted; however, there are other kinds of tests that are often called for. A stress test is a test of the system while pushing its design constraints; for example, a system designed for 100 concurrent users might be run with 200 concurrent users to see how it will perform. A volume test is similar to a stress test, but instead might test to see how the system responds if the amount of data grows significantly larger than expected; for example, what happens if the server disks start to fill up? These tests might be combined with a benchmark test, where the new software is tested against an older version or perhaps a competitor’s product.
Configuration tests are used to see how the system works given the documented configuration (e.g., operating system version). A compatibility test will show if the software is compatible with other installed software; for example, will your Java program run on an Android tablet? Timing tests are important if your software interacts in real time with other software systems. Human factors tests are meant to measure the usability of a system; for example, how hard (or easy) is it for a user to press control-alt-delete on a certain keyboard? Some kinds of systems (e.g., cloud server farms) will need to go through environmental tests to see how the system behaves when, for example, the air conditioning goes out.
Keep in mind that the goal of testing is not to prove that the system works, which is impossible. The goal of testing is instead to uncover new errors with thorough, brutal efficiency; this is done by executing test cases that are designed to make the software fail. It’s very easy to come up with test cases that show the software is working; however, this tells you almost nothing of value. Testing involves making sure that the internal processes work, which means that you have to invent situations that are likely to make the software act in strange ways. One example which makes the software break is worth thousands of examples where it works.
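A minimal sketch of failure-seeking test cases (the quantity parser is invented): instead of confirming the happy path, the cases probe boundaries and malformed input chosen to break the code:

    # Hypothetical sketch: test cases designed to make the software fail.
    def parse_quantity(text):
        qty = int(text.strip())
        if qty <= 0 or qty > 1000:
            raise ValueError(f"quantity out of range: {qty}")
        return qty

    hostile_cases = ["0", "-1", "1001", "  42  ", "3.5", "", "ten"]
    for case in hostile_cases:
        try:
            print(f"{case!r} -> {parse_quantity(case)}")
        except ValueError as exc:
            print(f"{case!r} -> rejected ({exc})")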
It’s also important to recognize the difference between critical, marginal and minor errors. For example, an error that causes entire databases to be deleted would probably be considered critical; an error that caused the most recent record to be deleted once every 100,000 entries might be considered marginal; and an error that caused the text to turn green on the screen for no good reason might be considered minor. Of course, what is minor to one person or organization might be considered catastrophic to another.
Many development organizations use statistical error prediction as the basis of their software quality approach. This is based on the fundamental assumption that there will always be software errors in any non-trivial computer program. In my opinion, assuming this is the same thing as condoning it. All of these models operate on the same basic question: How many errors do you want in your software, Mr. Customer? The answer, of course, is zero. Assuming that errors will always be there is unnecessary and unproductive. There are approaches to software development that reduce the opportunity for errors to zero; these are quite technical, and I discuss them in a forthcoming book.
Keeping track of the testing of a large software product is as complicated as the development itself. So, just as there are documentation steps for developing software, there are steps for documenting software tests. The test plan is just like any other project plan and would include PERT or Gantt charts, milestones, etc. (If you are unfamiliar with these, pick up any reference on project management.) A test specification describes what the point of each test is and how it will be structured. The test script is a step-by-step procedure to be performed; this is important because if any errors are found, it’s important to be able to recreate them. Finally, the test analysis document summarizes the results of the testing.
Security
System security is a specialty area all its own, but there are several key important concepts that everyone should know. Security is not just technological security (like running an anti-virus program); you must also keep in mind physical security (e.g., restricting access to the server room), and security policies (such as changing passwords frequently and shredding paperwork). Physical and policy security are crucial, since many system break-ins happen in a very low-tech manner. For example, an infiltrator can simply call someone from your company and, pretending to be from the IT department, ask for the user’s login credentials.
Within the realm of technological security, it’s important to differentiate between authentication and authorization. Authentication is demonstrating who you are, by means of knowing something (e.g., your mother’s maiden name), having something (such as a registration code), or being something (e.g., your thumbprint is in the database). Sometimes a system will use two-factor authentication, where the user must provide two authenticating elements (such as a password and the answer to a challenge question). Authorization involves what access each authenticated user is granted; given that I have authenticated myself, what am I allowed to access?
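A minimal sketch of the distinction (the users, passwords, and resources are invented; real systems store password hashes, never plaintext): authentication establishes who the caller is, and authorization decides what that identity may access:

    # Hypothetical sketch: authentication vs. authorization.
    USERS = {"alice": "s3cret", "bob": "hunter2"}  # real systems store hashes
    PERMISSIONS = {"alice": {"reports", "admin"}, "bob": {"reports"}}

    def authenticate(user, password):  # who are you?
        return USERS.get(user) == password

    def authorize(user, resource):     # what may you access?
        return resource in PERMISSIONS.get(user, set())

    if authenticate("bob", "hunter2"):
        print("admin access:", authorize("bob", "admin"))  # prints: False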
Confidentiality ensures that, once two people are authenticated, communications between them stay just between them; for example, you might use an encrypted messaging system. Data integrity means that the data that is transferred or stored can be read or retrieved in its original state, as opposed to coming back corrupted in some way. Accountability is enforced by making sure any system access is traceable to the source; this is often done by maintaining a system access log. This is somewhat related to non-repudiation, which is any method used to ensure that someone who does something (say, order an item) cannot later claim that she or he did not do so.
All of these aspects of security have to be considered from the first project meeting. Just like with any other system requirement, the level of security needs to be defined and built into the system from the beginning. It isn’t appropriate to try to add security to a system after the fact.
Productivity factors
Typically, software engineers are in short supply, so management will want to make sure that they are being used effectively. As with many fields, experience is a critical factor; however, we are all aware of the conundrum of “how does someone get experience if you only hire experienced people?” A good approach is to take your junior software developers and apprentice them to your senior people; off-load the less demanding aspects of the development to the junior folks, allowing them to gain experience, while saving the harder problems for the senior, experienced, people.
Another aspect of productivity is the percentage of the day spent on productive work. You should look for ways to reduce the time your staff spends in useless meetings and let them get to work.
Keep in mind that the more people there are on a team, the less productive it will be. This is due to the rapid increase in the number of lines of communication as the number of team members increases: with n team members there are n(n-1)/2 possible pairs. Consider breaking large teams into smaller teams which interact mostly on a design-by-contract basis.
Software reuse is the “Holy Grail” of software engineering. The idea is that we will develop each functional piece of software once, then never have to develop it again. This makes sense, but isn’t as easy as it sounds. Sometimes it’s faster to write the feature yourself than to look it up in an archive and figure out how it works. Also, making a component generic takes time away from completing the project at hand. Sometimes the component you’re trying to use isn’t well written. And what if you need it to do something it doesn’t do? Of course, without software reuse we wouldn’t have the Internet, Google, Netflix or iPhones; no one organization could develop all the required supporting software from scratch. However, you need to be aware of the quality of the software upon which you base your projects; if the underlying software contains errors, so will your product.
Because of the issues and complexity of software development, many in-house IT departments develop a “distract and delay” approach to user requests. As a result, stakeholders will start looking for ways to circumvent their IT people and solve the problem themselves (“There’s an app for that”). In the end, though, this approach leads to further trouble for the IT departments because they end up having to integrate the application into their systems, often resulting in security issues. For these reasons, it’s usually better to jump in and offer to help (or guide) a user community with their choices of software.
Conclusion
Software development and quality assurance management is a large subject, with many sub-specialties, and there is no substitute for experience. However, it’s useful and important for people in ancillary fields to understand the basics of the field, and that is the purpose of this short text. As stated at the outset, there are many reference books available with many, many details, and these details are always changing due to the nature of the field. For those who are interested in entering this field, I suggest that you consider obtaining the designation of Certified Software Quality Engineer from the American Society for Quality (ASQ) at http://www.asq.org.