The final list includes many items added by Kobi Halperin, as they appeared here.
Preparations and format:
01. Were all the requirements identified and listed while preparing the TD?
02. Does the traceability matrix exist as a part of the document or is it referenced from the TD?
03. Is every requirement covered by at least one test group in the TD? (A traceability sketch appears after this section.)
04. Was the TD written according to the template?
05. Was the TD written according to the guidelines in the Test Approach document?
06. Are all the test groups and test cases uniquely identified?
07. Is the naming convention used consistent throughout the document?
08. Was the Doc Revision Change Control filled in properly?
09. Is the Testing Gear properly identified?
a. Tool specifications.
b. Scripts and debug tools.
c. Information that should be found in logs.
10. Are some of the tests defined to be run in a non-clean environment, in order to identify the influence of integration and other parts of the system?
11. Are all related documents mentioned and linked?
12. Are there Definitions & Abbreviations for all the terminology in the document?
13. Are the topics that will and will not be tested defined in the document?
14. Is there any reference to documentation tests (Help Windows, User Manual & Installation Manual)?
15. Are there any TBD issues?
16. Was the spell checker run? Are all auto-generated fields updated (e.g. the TOC)?
17. Does the document include diversity (users, ports, slots, etc.)?
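For item 03, a minimal traceability check can make the requirement-to-test-group mapping concrete. The sketch below is in Python; the requirement IDs and test-group names are hypothetical placeholders, not taken from any real TD.

# Minimal traceability check: every requirement must map to at least one test group.
# Requirement IDs and test-group names are hypothetical placeholders.

requirements = ["REQ-001", "REQ-002", "REQ-003"]

# Traceability matrix: requirement -> test groups that cover it.
traceability = {
    "REQ-001": ["TG-Login-Positive", "TG-Login-Negative"],
    "REQ-002": ["TG-Quota-Boundary"],
    "REQ-003": [],  # not yet covered -- should be flagged by the review
}

uncovered = [req for req in requirements if not traceability.get(req)]

if uncovered:
    print("Requirements with no covering test group:", uncovered)
else:
    print("All requirements are covered by at least one test group.")

Whether the matrix lives inside the TD or in a referenced spreadsheet, the review question stays the same: can every requirement be traced to at least one test group?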
Test groups:
18. Were all the test groups and test cases defined in accordance with system impacts (Installability, Integrity, Security, Performance, Maintainability, Serviceability, Update/Migration, Documentation, Usability, Standards, Reliability, Requirements, Capability)?
19. Were all the test groups and test cases defined in accordance with triggers (Functional, Startup/Restart, Recovery/Exception, Redundancy Failover, Hardware configuration, Software configuration)?
20. Does every test group contain at least one test case?
21. Does each test group test one specific requirement?
22. Is every test group verified under different conditions/inputs/scenarios?
23. Is the testing method described clearly for each test group (to a level that enables the reviewer to understand how the test cases will be executed and how the results will be evaluated)? (A sketch appears after this section.)
24. Does the test method for each test group include criteria for evaluating test results?
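For items 20-24, the intent can be illustrated with a small, self-contained test group in which each test case exercises the same requirement under a different condition and the assertion states the evaluation criterion explicitly. This is only a Python sketch in pytest-style naming; the message-quota feature, the deposit_message function, and the quota value are hypothetical.

# One test group = one requirement, exercised under several conditions,
# each with an explicit pass/fail criterion (the assertion).
# deposit_message and QUOTA are hypothetical stand-ins for the feature under test.

QUOTA = 10

def deposit_message(mailbox, count):
    """Toy model of the feature under test: returns the number of messages stored."""
    return min(mailbox + count, QUOTA)

class TestMessageQuotaGroup:
    def test_deposit_below_quota(self):
        assert deposit_message(mailbox=0, count=1) == 1

    def test_deposit_exactly_at_quota(self):
        assert deposit_message(mailbox=QUOTA - 1, count=1) == QUOTA

    def test_deposit_over_quota_is_rejected(self):
        # The expected result is specific: the quota is never exceeded.
        assert deposit_message(mailbox=QUOTA, count=1) == QUOTA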
Test Cases:
25. Are the test cases classified according to applicable test types (Positive Test Cases, Negative Test Cases - about 30%, Boundary Test Cases, End Cases)?
26. Are there test cases designed to detect problems in the interaction between the tested feature and other features/sub-systems of the system?
27. Does each test case include the purpose of the test?
28. Are the test cases for the same test group testing the same requirement under different conditions?
29. Are the test groups and test cases designed with a bug-revealing orientation (we are testing to find bugs, not to show that the system works)?
30. Do the test cases have priority based on the product (e.g. complexity) and version (e.g. feature X as a milestone for a customer)?
31. Do the test groups include separate test cases for positive, negative, and boundary checks (see the parameterized sketch after this section)?
32. Do the test groups include separate test cases for end cases?
a. Were test cases with special input values (empty file, empty message, etc.) defined?
b. Were test cases defined for testing functionality under abnormal conditions (no communication between two components, temporarily no connection to the database, etc.)?
c. Were test cases with illegal input values defined?
33. Were complicated and not only trivial scenarios defined for test case execution (for example, several users performing actions, or contradictory actions, such as one user depositing a message while the subscriber is deleted, or depositing a message while another message is being deposited, causing the message quota to be exceeded)?
34. Are the test objectives described clearly for each test group?
35. Was test-result evaluation defined (or at least considered) by more than one method (for example, visual observation of the user's handset, an activity log record, and a relevant log file record)?
36. Does every test case have preparations and setup (either stated in the test case itself or referencing the test group or whole-TD preparations and setup)?
37. Are issues of error handling addressed?
38. Were test cases with default input values defined?
39. Were test case dependencies eliminated (can each test case run independently, without depending on the execution of other test cases)?
40. For test cases that have specific preparations and setup, was a return to initial conditions defined upon the end of execution (to eliminate dependency)?
41. Were test cases defined in order to check data integrity (e.g. data loss in case of errors or failures)?
42. Were test cases defined in order to check data consistency (in case of flow errors, for example)?
43. Were test cases defined to check rollback in case of a flow error?
44. Were test groups and test cases that are relevant only for specific versions identified and marked as such (the idea is to have one TD per sub-system/service/feature for all versions, so that test cases that are not relevant for a given version can easily be identified and skipped)?
45. Are the expected test results for each test case specific and unambiguous (you understand exactly what to expect from the test results, and if everything works correctly you will always expect the same results)?
46. Where the same set of steps is used (e.g. checking text fields), is it written only once and referenced?
47. Where the same set of parameters is used (e.g. checking configuration files), is it written only once and referenced?
48. Do all test cases include running time estimation (before the first run it is a rough estimation, which will be updated if needed after the first run)?
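For items 31-32 and 39-40, the following Python/pytest sketch shows separate positive, negative, boundary, and end-case test cases, each runnable independently, with a fixture that restores the initial state after every case. The validate_pin function and the 4-digit PIN rule are invented purely for illustration.

# Separate, independent test cases for positive, negative, boundary, and end-case
# inputs, with a fixture that restores the initial state so no case depends on another.
import pytest

def validate_pin(pin: str) -> bool:
    """Toy validator: a PIN must be exactly 4 digits (hypothetical rule)."""
    return pin.isdigit() and len(pin) == 4

@pytest.fixture
def clean_state():
    state = {"attempts": 0}       # fresh setup for every test case
    yield state
    state.clear()                 # return to initial conditions afterwards

@pytest.mark.parametrize("pin,expected", [
    ("1234", True),    # positive case
    ("0000", True),    # boundary: lowest legal value
    ("9999", True),    # boundary: highest legal value
    ("123", False),    # boundary: one digit too short
    ("12345", False),  # boundary: one digit too long
    ("", False),       # end case: empty input
    ("12a4", False),   # negative: illegal character
])
def test_pin_validation(clean_state, pin, expected):
    clean_state["attempts"] += 1  # each case starts from, and cleans up, its own state
    assert validate_pin(pin) is expected

Because every case carries its own setup and teardown, the cases can be reordered or run individually without changing the results.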
Test Steps:
49. Are the actions (steps) in the test procedure specific and unambiguous?
50. Do the steps include only actions relevant to test execution, not mixed with test preparations and setup?
51. Are the test procedures brief (5-10 steps; if not, you are probably testing several things at once)? A sample procedure sketch follows this list.
52. Wherever a range of values can be used (including in the test gear), is it included in the tests?
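For items 49-51, a brief procedure might look like the Python sketch below: preparations are kept out of the test steps, each step is a single concrete action or check, and the whole procedure stays well under ten steps. All names here (setup_subscriber, the mailbox structure, the 30-second message) are hypothetical.

# A brief, unambiguous procedure: preparations live in setup, the test body
# contains only execution steps, and each step names a concrete action and
# expected observation. All names are hypothetical.

def setup_subscriber():
    """Preparation only: create a subscriber with an empty mailbox."""
    return {"mailbox": [], "quota": 10}

def test_deposit_single_message():
    subscriber = setup_subscriber()          # preparation, not a test step

    # Step 1: deposit one message of 30 seconds.
    subscriber["mailbox"].append({"length_sec": 30})

    # Step 2: verify exactly one message is stored.
    assert len(subscriber["mailbox"]) == 1

    # Step 3: verify the stored message length matches the deposit.
    assert subscriber["mailbox"][0]["length_sec"] == 30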