Tuesday, 13 December 2011

Merry Christmas to all of YOU

Fundamentals of the Test Process

The fundamental test process consists of the following main activities:

o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

Test Planning and Control
Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test Analysis and Design
Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.
The test analysis and design activity has the following major tasks:
  1. Reviewing the test basis (such as requirements, software integrity level (risk level), risk analysis reports, architecture, design, interface specifications)
  2. Evaluating testability of the test basis and test objects
  3. Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure of the software
  4. Designing and prioritizing high level test cases
  5. Identifying necessary test data to support the test conditions and test cases
  6. Designing the test environment setup and identifying any required infrastructure and tools
  7. Creating bi-directional traceability between test basis and test cases.
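The bi-directional traceability from task 7 can be sketched as a pair of look-up maps. This is a minimal illustration, not a real traceability tool; the requirement and test-case IDs are hypothetical.

```python
# Minimal sketch of bi-directional traceability between the test basis
# (requirements) and test cases. All IDs are hypothetical examples.

# Forward traceability: each requirement maps to the test cases that cover it.
req_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

# Backward traceability derived from the forward map:
# each test case maps back to the requirements it verifies.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

# A requirement with no covering test cases indicates a coverage gap.
uncovered = [req for req, tests in req_to_tests.items() if not tests]

print(test_to_reqs)
print(uncovered)
```

Keeping both directions makes two questions cheap to answer: "which tests cover this requirement?" and "which requirement does this failing test trace back to?".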

Test Implementation and Execution
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution; the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:


  1. Finalizing, implementing and prioritizing test cases (including the identification of test data)
  2. Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
  3. Creating test suites from the test procedures for efficient test execution
  4. Verifying that the test environment has been set up correctly
  5. Verifying and updating bi-directional traceability between the test basis and test cases
  6. Executing test procedures either manually or by using test execution tools, according to the planned sequence
  7. Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and test ware
  8. Comparing actual results with expected results
  9. Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in the specified test data, in the test document, or a mistake in the way the test was executed)
  10. Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
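Tasks 6 to 9 above boil down to a simple loop: run each case in the planned order, compare actual with expected results, and log the outcome. A minimal sketch, with a deliberately trivial function standing in for the software under test and hypothetical test-case IDs:

```python
# Minimal sketch of test execution and outcome logging.
# The function and the test data are hypothetical illustrations.

def add(a, b):
    # Stand-in for the software under test.
    return a + b

# Each test case: identifier, inputs, expected result.
test_suite = [
    ("TC-101", (2, 3), 5),
    ("TC-102", (-1, 1), 0),
]

log = []
for tc_id, inputs, expected in test_suite:
    actual = add(*inputs)
    status = "pass" if actual == expected else "fail"
    # A "fail" entry would be reported as an incident and analyzed
    # to establish its cause before any re-run.
    log.append({"id": tc_id, "actual": actual,
                "expected": expected, "status": status})

print(log)
```

Real test execution tools add versioning of the software under test and the testware to each log entry, so that any discrepancy can be reproduced later.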

Evaluating Exit Criteria and Reporting
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:


  1. Checking test logs against the exit criteria specified in test planning
  2. Assessing if more tests are needed or if the exit criteria specified should be changed
  3. Writing a test summary report for stakeholders
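Task 1 above is essentially a comparison of measured figures against planned thresholds. A minimal sketch, where the log figures and the criteria values are hypothetical examples of what a test plan might specify:

```python
# Minimal sketch of evaluating exit criteria: check the test log
# against thresholds from test planning. All numbers are hypothetical.

test_log = {"executed": 48, "planned": 50,
            "passed": 47, "open_critical_defects": 0}

exit_criteria = {
    "min_execution_rate": 0.95,   # at least 95% of planned tests run
    "min_pass_rate": 0.90,        # at least 90% of executed tests pass
    "max_open_critical": 0,       # no critical defects left open
}

execution_rate = test_log["executed"] / test_log["planned"]
pass_rate = test_log["passed"] / test_log["executed"]

met = (execution_rate >= exit_criteria["min_execution_rate"]
       and pass_rate >= exit_criteria["min_pass_rate"]
       and test_log["open_critical_defects"] <= exit_criteria["max_open_critical"])

print(f"execution rate {execution_rate:.0%}, "
      f"pass rate {pass_rate:.0%}, exit criteria met: {met}")
```

If the criteria are not met, task 2 applies: either run more tests or justify changing the criteria, and record the decision in the test summary report.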

Test Closure Activities
Test closure activities collect data from completed test activities to consolidate experience, test-ware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.
Test closure has the following major tasks:


  1. Checking which planned deliverables have been delivered
  2. Closing incident reports or raising change records for any that remain open
  3. Documenting the acceptance of the system
  4. Finalizing and archiving test-ware, the test environment and the test infrastructure for later reuse
  5. Handing over the test-ware to the maintenance organization
  6. Analyzing lessons learned to determine changes needed for future releases and projects
  7. Using the information gathered to improve test maturity

Monday, 12 December 2011

Seven Software Testing Principles

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.


Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial
cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
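To put a number on this, consider a hypothetical input form with just four independent fields; the field sizes below are made up for illustration, yet the combination count is already in the millions:

```python
# Rough illustration of why exhaustive testing is infeasible:
# count the input combinations for a hypothetical four-field form.
import math

field_values = {
    "country": 195,        # one value per country
    "age": 120,            # ages 0-119
    "payment_method": 5,
    "currency": 150,
}

# Independent fields multiply: 195 * 120 * 5 * 150
combinations = math.prod(field_values.values())
print(combinations)
```

Over 17 million combinations for four fields, before considering preconditions, sequences, or timing, which is why risk analysis and prioritization replace exhaustive testing in practice.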


Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.


Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.


Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.


Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.


Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfil the users’ needs and expectations.