Reliability is an important non-functional requirement for most software products, so a software requirements specification (SRS) should contain a reliability requirement, and most do. But one of the indicators of a ‘good’ requirement is that it is testable, so it is reasonable to ask whether the reliability requirements in an SRS are testable as written.
Tests for functional requirements are usually binary: the product either supports the requirement or it does not, and therefore either passes or fails the test. Testing for reliability is not so straightforward. It may be difficult to say, in a binary way, that the product does or does not meet its reliability requirements.
For the users of a system, it is the reliability of the system as a whole that matters, but for analysts and testers it is important to separate the software requirements from the hardware requirements because there are some significant differences between the two.
There is no question that any piece of hardware will eventually wear out and fail, so hardware reliability can be stated and measured in terms of time to failure or time between failures. The random component of the reliability forecast comes from the fact that identical pieces of hardware will operate for different lengths of time before failing.
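As a simple illustration (the exponential lifetime model and the 1,000-hour MTBF figure below are assumptions chosen for the example, not data from any real product), a batch of identical units with the same underlying failure rate will still fail at widely scattered times, and the expected reliability over a mission time follows directly from the MTBF:

import math
import random

# Assumed for illustration: identical hardware units whose lifetimes follow
# an exponential distribution with a mean time between failures of 1,000 hours.
MTBF_HOURS = 1000.0
UNITS = 10

random.seed(42)
lifetimes = [random.expovariate(1.0 / MTBF_HOURS) for _ in range(UNITS)]
print("Observed times to failure (hours):", sorted(round(t) for t in lifetimes))

# Under this model the probability of surviving a mission of t hours is exp(-t / MTBF).
mission_hours = 100.0
print("Probability a unit survives 100 hours:", round(math.exp(-mission_hours / MTBF_HOURS), 3))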
Software reliability can be a more difficult concept to grasp. A software product will fail under certain conditions, with certain inputs, and given the same inputs and conditions it will fail every time until the underlying fault is corrected. The reliability of a software product is therefore more about the random discovery of faults as the system is exercised with various inputs in various states. Although for a small, simple system it may be theoretically possible to test every combination of states and inputs, for a system of any size and complexity this is not feasible. The random nature of the fault discovery process means we must use probabilities when we state software reliability requirements and test against them.
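To get a feel for the scale involved (the interface described below is hypothetical and deliberately modest), counting the combinations for even a small system shows why exhaustive testing is not an option:

# Hypothetical interface, used only to illustrate combinatorial growth:
# 20 independent boolean configuration flags and 5 input fields that each
# accept 100 distinct values, ignoring internal system state entirely.
flag_combinations = 2 ** 20           # 1,048,576
input_combinations = 100 ** 5         # 10,000,000,000
print(flag_combinations * input_combinations)   # roughly 1.05e16 test cases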
Reliability test results should be stated in terms of measurements: during testing we collect and analyze data about the performance of the software, tracking the occurrence of failures. A reliability requirement, however, is a prediction or forecast of the performance of the product in the future. This means that while we can measure the number of failures per hour or per transaction in a system test environment, we can only estimate the actual performance of the system in a future production environment.
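For example (the test figures below are hypothetical), a system test log gives us an observed failure rate, but that number is only a point estimate of the rate we will see in production:

# Hypothetical system-test results, for illustration only.
test_hours = 500.0
failures_observed = 4

print("Observed failure rate:", failures_observed / test_hours, "failures per hour")
print("Observed MTBF:", test_hours / failures_observed, "hours")
# These numbers measure the test environment; the production failure rate is a
# forecast, and the point estimate alone says nothing about its uncertainty.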
Reliability is usually defined as the probability that a product will operate without failure for a specified number of uses (transactions) or for a specified period of time. To be truly testable, a requirement for software reliability should be stated as a forecast, and the test results should indicate the confidence level associated with the forecast that the product will meet the requirement.
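One way to express such a forecast, sketched below under the assumption of a constant failure rate and using the standard chi-squared bound for a time-terminated test (the test figures are the same hypothetical ones as above), is as a lower reliability bound at a stated confidence level:

import math
from scipy.stats import chi2

# Hypothetical test data: 4 failures observed in 500 hours of system test.
test_hours = 500.0
failures = 4
confidence = 0.90
mission_hours = 24.0   # period of operation covered by the requirement

# One-sided upper confidence bound on the failure rate for a time-terminated
# test: chi-squared quantile with 2 * failures + 2 degrees of freedom.
lambda_upper = chi2.ppf(confidence, 2 * failures + 2) / (2.0 * test_hours)

# Assuming a constant failure rate, reliability over the mission is exp(-lambda * t).
reliability_lower = math.exp(-lambda_upper * mission_hours)
print(f"With {confidence:.0%} confidence, the probability of failure-free operation "
      f"over {mission_hours:.0f} hours is at least {reliability_lower:.3f}")

A requirement written in this form, for instance "at least 0.95 probability of failure-free operation over 24 hours, demonstrated at 90% confidence," gives the tester something concrete to measure against.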