What is the difference between a cloud risk analysis and a product risk analysis? I’ve created this list to show what the differences are:
- The result of a cloud risk analysis is a 3D model of the risks. It gives insight into the damage and chance of failure per characteristic, object part and layer.
- The larger number of stakeholders: on the IT side the enterprise architects, the owner of the cloud layer and third-party service suppliers; on the business side marketing and the end users of the services.
- Within clouds, a service is the relevant object part, as part of a business process. Functionality, for example, is no longer formed by a number of subsystems but by services. The characteristic functionality can be subdivided into the various services, and the totality of the object parts forms the business process. The same reasoning applies to the remaining characteristics. To get a complete overview of all services and business processes that fall within the scope of the cloud project, the object parts are arranged by characteristic in a table.
- Agreements on what are and what aren’t standard services (step 3). These standard services are not tested separately, but only in the end-to-end test. The third-party service supplier can be required to comply with a Statement of Work (SoW) in which the expected quality of the service is agreed upon. The use of quality gates can help make the quality of the service transparent.
- Functional testing is of lesser importance. Because the supplier has already approved the functional requirements of the standard services, functionality carries less risk. Non-functional requirements, however, are usually not sufficiently covered by the supplier’s tests. Testing the integration of the standard services in the cloud therefore takes priority: for example performance, security and integration testing. Non-functional requirements should get a higher risk class than functional requirements.
- Chain risks are always determined in a cloud project; because a cloud consists of multiple layers, they should always be tested at least once in an end-to-end test.
- Because of the greater complexity of, and dependence on, standard services, the risk classes High, Medium and Low are not always sufficient. A more fine-grained method of assigning risk classes, for example with numbers, is preferred.
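To make the last point concrete: a numeric risk class can simply be the product of a numeric chance of failure and a numeric damage score. This is a minimal sketch in Python; the 1–5 scales and the product formula are my own illustrative assumptions, not a prescribed standard:

```python
# Sketch of numeric risk classes as an alternative to High/Medium/Low.
# The 1-5 scales and the product formula are illustrative assumptions.

def risk_score(chance_of_failure: int, damage: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale; the score ranges 1-25."""
    return chance_of_failure * damage

# Two services that a three-class scheme would both label "High",
# but that a numeric scale still tells apart:
service_a = risk_score(chance_of_failure=5, damage=4)  # 20
service_b = risk_score(chance_of_failure=4, damage=4)  # 16
```

The point of the numbers is exactly this extra resolution: two services that would both end up in the same High bucket can still be ranked against each other when prioritising tests.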
This post is a continuation of my first post about a lean MTP. Last week I spent my second two days filling in the test strategy and finalising my MTP. But first, the response I got from my project leader about the first draft I sent him: what he missed most was the DSDM approach. When I reread the document last week I realised he was right. I hadn’t looked closely enough at the DSDM products I had to produce instead of the normal test products. I realised it was all in the document, but not in DSDM terms (I did have iterations stated in the testplan, but I don’t know which yet).
Test products that are different from ‘normal’ are:
- Business and Technical Testing Strategy part of the mastertestplan;
- Test cases, scripts and charters for the business process are part of the Business Testing Suite;
- Test cases, scripts and charters for the technical flow are part of the Technical Testing Suite; and
- The test manager is a Technical Coordinator QA (not a product, but it fits better here).
Once I had adjusted this in the document, I held some interview sessions with users, functional service management, an analyst, the project leaders ICT and Content, and application management. I had to hold separate interviews because of time constraints and conflicting agendas. I asked everybody to think negatively about where the application could fail (where they saw risks). For each answer I asked what could go wrong (the chance of failure) and how bad that would be (the damage). And which parts of the application or process should be tested and why (stating test goals).
The users, functional service management and the project leader Content were most interested in the process within the application and its performance. They also wanted the application to show the correct content. The performance part was odd, because performance wasn’t stated in the requirements. The project leader ICT and the analyst saw the greatest risks in the type of viewer that was to be integrated with the software. One viewer created higher risks for the program because nobody could work with it yet (it was new technology to them). Application management didn’t have that many risks; their main priority was a normal development cycle across the different environments (Development, Test, Acceptance and Production).
As of now I haven’t had much time to integrate the vision of the development team. Their team leader’s agenda hasn’t allowed us to talk, let alone summarise the risks.
After these interviews I tried to sort the results into a product risk analysis (PRA). The steps I took:
- First I stated the test goals (including priority and quality characteristics per test goal) and checked them with everybody;
- Then I sorted the results into the different processes and object parts of the product (these were stated in the documentation);
- After this I matched the test goals with the different processes and their characteristics;
- Next I matched the object parts with the characteristics;
- Per characteristic I assessed the damage for the processes;
- Per characteristic I assessed the chance of failure for the object parts;
- Finally I combined these last two to determine the risk classes.
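The last three steps can be sketched as a small table calculation: per characteristic you have a damage score per process and a chance-of-failure score per object part, and combining the two gives the risk class. This is a hypothetical Python sketch of that combination, not my actual spreadsheet; all names, scales and thresholds are illustrative assumptions:

```python
# Sketch of the PRA combination step: per characteristic, combine the
# damage assessed for a process with the chance of failure assessed for
# an object part into a risk class. Names, scales and thresholds are
# illustrative assumptions.

damage = {              # per (characteristic, process), scale 1-3
    ("functionality", "publishing process"): 3,
    ("performance", "publishing process"): 2,
}

chance_of_failure = {   # per (characteristic, object part), scale 1-3
    ("functionality", "viewer"): 3,   # new technology, nobody knows it yet
    ("performance", "viewer"): 3,
}

def risk_class(dmg: int, chance: int) -> str:
    """Map the product of damage and chance (1-9) onto three classes."""
    score = dmg * chance
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# Combine the two tables per characteristic:
for (char, process), dmg in damage.items():
    for (char2, part), chance in chance_of_failure.items():
        if char == char2:
            print(f"{char}: {process} / {part} -> {risk_class(dmg, chance)}")
```

In a workshop this combination is done by discussion rather than by formula, but the structure is the same: every (process, object part) cell per characteristic ends up with a risk class.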
Normally I run a workshop to do this PRA and we take these steps as a group. That makes it easier to reach consensus, but due to lack of time I had to improvise. After I had processed the PRA I mailed it to everybody involved, with a manual on how to read it. Some people responded and I needed to re-assess some values.
With this I could create my test strategy! And finalise my ‘lean mastertestplan’!