Research commissioned by Compuware has revealed that 83% of manufacturing organisations do not numerically quantify the risk of failure when going live with a new application. Such a widespread inability to quantify, and therefore mitigate, risk, or to gauge software quality, is concerning given that 79% of respondents put the cost of poor software quality to the business at between 100,000 and 500,000 per year, while a further 21% indicated that poorly performing applications can cost them up to 5 million a year. The research was conducted by Vanson Bourne, who interviewed 100 IT professionals across the
"Manufacturers are currently operating in what can only be described as a challenging environment. Their ability to compete would be significantly undermined by major IT failures, so it is surprising that so few manufacturers quantify the risk of application failure before going live," commented Sarah Saltzman, Technology Support Manager, Compuware. "Risk should be measured and monitored throughout the development of an application, so that organisations can make go-live decisions based on the amount of risk they are prepared to take. For example, with
48% of IT directors said that when testing an application, business sponsors gave them no clear guidance about which parts of the application were critical to the business. As a result, testing teams cannot develop a strategy focused on the higher-risk elements of the application. This was reflected in the results: 66% tried to test as much of the application as possible in an attempt to achieve 100% reliability, while only 34% prioritised their testing on the business-critical parts of the application.
"It is understandable that such a large proportion of companies are testing the whole application. In today's manufacturing environment, machines are largely run by IT applications. If they grind to a halt due to an application defect, you can't produce goods, so it's critical to iron out defects. However, time and budget constraints mean it is not always practical to test the whole application in order to eliminate every element of risk," continued Saltzman. "What organisations need to do is prioritise their testing efforts by thoroughly testing the parts of the application that carry the biggest risk. For example, if part of an application is central to the activities of workers on the shop floor, it is critical to ensure it does not fail, as you risk huge productivity losses, whereas other areas of the application will not be so critical. Adopting a risk-based approach enables testing teams to identify the less critical parts of the application where defects will be more acceptable, rather than taking a blanket approach and saying that a certain number of defects can be tolerated without knowing which part of the application, and more importantly which business processes, they might affect."
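The prioritisation Saltzman describes is often expressed as a simple risk score, likelihood of failure multiplied by business impact, with test effort directed at the highest-scoring areas first. The sketch below is illustrative only: the component names, likelihood figures, and impact weights are hypothetical, not drawn from the research.

```python
# Hypothetical sketch of risk-based test prioritisation.
# risk score = likelihood of failure x business impact;
# the highest-scoring areas are tested first and most thoroughly.

def prioritise(components):
    """Return components sorted by risk score, highest risk first."""
    return sorted(components,
                  key=lambda c: c["likelihood"] * c["impact"],
                  reverse=True)

# Illustrative application areas (names and numbers are assumptions).
areas = [
    {"name": "shop-floor control",  "likelihood": 0.3, "impact": 9},
    {"name": "reporting dashboard", "likelihood": 0.5, "impact": 2},
    {"name": "order intake",        "likelihood": 0.2, "impact": 8},
]

for area in prioritise(areas):
    print(area["name"], round(area["likelihood"] * area["impact"], 2))
```

On these made-up figures, shop-floor control (score 2.7) comes ahead of order intake (1.6) and the reporting dashboard (1.0), mirroring the article's point that a rarely failing but business-critical component can outrank a flakier, low-impact one.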
Risk-based testing enables the IT department to accurately assess the risk of an application failing and the impact such a failure would have on the business. This assessment can then be fed back to senior management, enabling them to make an informed decision on whether an application should go live at a specified time, based on the information the testing team has presented.