Monday, 17 September 2012

How to Export Data (Test Cases) from an Excel Sheet to HP Quality Center 11

PART 1: How to Upload Data (Test Cases) from an Excel Sheet to HP Quality Center 11.0
Solution: To upload the data successfully from an Excel sheet to HP QC, follow the steps below.
1.  Invoke the HP QC URL (e.g. http://test001:8080/qcbin/) installed on your server.

2.  Click the Add-Ins link displayed on the page above, then click the “HP Quality Center Connectivity” link displayed in the new window (see below).

3.  After clicking the link above, a new window pops up.

4.  Click the “Download Add-in” link, then download and install the HP Quality Center Connectivity Add-in.

5.  Now install the MS Excel Add-in by clicking the “More HP ALM Add-ins” link shown below.

6.  Install the MS Excel Add-in from that page.

7.  After installing the MS Excel Add-in, reopen MS Excel if it is already open; otherwise open the Excel workbook containing the test data (test cases) you want to export to HP QC.

8.  You should now see a new “Export to HP ALM” option under the Add-Ins tab.

9.  Click “Export to HP ALM”; a new window pops up asking you to enter the HP ALM server URL.

10.  Enter valid user credentials (user ID and password) and click Next.

11.  Select a valid Domain and Project name and click Next.

12.  Select the type of data to upload (e.g. Tests) and click Next.

13.  Type a new map name and click Next.

14.  Map each Excel column to the corresponding HP ALM field (a typical mapping is sketched below) and click Export.

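The exact fields depend on how your HP ALM project is configured, but for a Test Plan upload the mapping often looks something like this (the column letters below are only an assumed example layout):

A -> Subject (test folder path)
B -> Test Name
C -> Description
D -> Step Name
E -> Step Description
F -> Expected Result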
15.  The system displays a success message, and the uploaded test cases appear in HP QC under the Test Plan module.

Tuesday, 6 March 2012

Complete process to install Oracle 11g

Step 1: First find out which operating system your machine runs (32-bit or 64-bit Windows).
Step 2: If you already have the relevant Oracle 11g setup, start the installation; otherwise download it from the Oracle site.
If your system is 64-bit, check in regedit that the following values under both
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC\MTxOCI and
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSDTC\MTxOCI
are correct:
a. OracleOciLib = oci.dll
b. OracleSqlLib = orasql11.dll (old: SQLLib80.dll)
c. OracleXaLib = oraclient11.dll (old: xa80.dll)
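If you prefer to script these values, a minimal .reg file for the Wow6432Node key could look like the sketch below (the key path is taken from above; verify it on your machine before merging, and repeat for the non-Wow6432Node key):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSDTC\MTxOCI]
"OracleOciLib"="oci.dll"
"OracleSqlLib"="orasql11.dll"
"OracleXaLib"="oraclient11.dll"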

Step 3: Update the tnsnames.ora file according to your database configuration.
By default, the file is under the location below (the path varies with your install user and Oracle home):
C:\app\vinod\product\11.2.0\client_1\Network\Admin\

Setting up environment variables
Set the TNS_ADMIN environment variable to point to wherever you store your tnsnames.ora file.
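For reference, a typical tnsnames.ora entry has the shape below; MYDB, dbserver01, 1521 and mydb are placeholders for your own alias, host, port and service name:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb)
    )
  )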


Add a DSN under the following tabs in the Microsoft ODBC Administrator wizard:
                - User DSN
                - System DSN

After adding or editing entries in the tnsnames.ora file, open the ODBC settings; the TNS names you edited automatically populate the TNS Service Name drop-down list.

That's the whole installation process. Hope it works for you.
Cheers.

Tuesday, 13 December 2011

Merry Christmas to all of YOU

Fundamentals of the Test Process

The fundamental test process consists of the following main activities:

o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

Test Planning and Control
Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test Analysis and Design
Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.
The test analysis and design activity has the following major tasks:
  1. Reviewing the test basis (such as requirements, software integrity level (risk level), risk analysis reports, architecture, design, interface specifications)
  2. Evaluating testability of the test basis and test objects
  3. Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure of the software
  4. Designing and prioritizing high level test cases
  5. Identifying necessary test data to support the test conditions and test cases
  6. Designing the test environment setup and identifying any required infrastructure and tools
  7. Creating bi-directional traceability between test basis and test cases.

Test Implementation and Execution
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:


  1. Finalizing, implementing and prioritizing test cases (including the identification of test data)
  2. Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
  3. Creating test suites from the test procedures for efficient test execution
  4. Verifying that the test environment has been set up correctly
  5. Verifying and updating bi-directional traceability between the test basis and test cases
  6. Executing test procedures either manually or by using test execution tools, according to the planned sequence
  7. Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
  8. Comparing actual results with expected results
  9. Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
  10. Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)

Evaluating Exit Criteria and Reporting
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:


  1. Checking test logs against the exit criteria specified in test planning
  2. Assessing if more tests are needed or if the exit criteria specified should be changed
  3. Writing a test summary report for stakeholders

Test Closure Activities
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.


  1. Checking which planned deliverables have been delivered
  2. Closing incident reports or raising change records for any that remain open
  3. Documenting the acceptance of the system
  4. Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
  5. Handing over the testware to the maintenance organization
  6. Analyzing lessons learned to determine changes needed for future releases and projects
  7. Using the information gathered to improve test maturity

Monday, 12 December 2011

Seven Software Testing Principles

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.


Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial
cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.


Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.


Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.


Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.


Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.


Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfil the users’ needs and expectations.

Wednesday, 30 November 2011

VBScript Keywords

empty:
Used to indicate an uninitialized variable value. A variable value is uninitialized when it is first created and no value is assigned to it, or when a variable value is explicitly set to empty.
Example:
Dim x      ' the variable x is uninitialized!
x = "ff"   ' the variable x is NOT uninitialized anymore
x = Empty  ' the variable x is uninitialized again!
Note: This is not the same as Null!!
isEmpty:
Used to test if a variable is uninitialized.
Example: If IsEmpty(x) Then ' is x uninitialized?
nothing:
Used to indicate an uninitialized object value, or to disassociate an object variable from an object to release system resources.
Example: Set myObject = Nothing
is nothing:
Used to test if a value is an initialized object.
Example: If (myObject Is Nothing) Then ' is it unset?
Note: If you compare a value to Nothing with the = operator, you will not get the right result! Example: If (myObject = Nothing) ' wrong: use Is, not =
null:
Used to indicate that a variable contains no valid data.
One way to think of Null is that someone has explicitly set the value to “invalid”, unlike Empty where the value is “not set”.
Note: This is not the same as Empty or Nothing!!
Example: x = Null ' x contains no valid data
isNull:
Used to test if a value contains invalid data.
Example: If IsNull(x) Then ' is x invalid?
true:
Used to indicate a Boolean condition that is correct (true has a value of -1)
false:
Used to indicate a Boolean condition that is not correct (false has a value of 0)
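A minimal sketch tying these keywords together (the variable names are just for illustration; run it with cscript or wscript):

Dim v, obj

' Empty: declared but never assigned
If IsEmpty(v) Then WScript.Echo "v is Empty"

' Null: explicitly set to contain no valid data
v = Null
If IsNull(v) Then WScript.Echo "v is Null"

' Nothing: an object variable with no object attached
Set obj = Nothing
If obj Is Nothing Then WScript.Echo "obj is Nothing"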

Dictionary Object

An object that stores data as key/item pairs.
A Dictionary object is the equivalent of a Perl associative array. Items can be any form of data, and are stored in the array. Each item is associated with a unique key. The key is used to retrieve an individual item and is usually an integer or a string, but can be anything except an array. The following code illustrates how to create a Dictionary object:

[VBScript]
Dim d ' Create a variable.
Set d = CreateObject("Scripting.Dictionary")
d.Add "a", "Athens" ' Add some keys and items.
d.Add "b", "Belgrade"
d.Add "c", "Cairo"
...
Methods
Add Method (Dictionary) | Exists Method | Items Method | Keys Method | Remove Method | RemoveAll Method
Add: Adds a key and item pair to a Dictionary object.
Exists: Returns True if a specified key exists in the Dictionary object, False if it does not.
Items: Returns an array containing all the items of the dictionary object.
Keys: Returns an array containing all the keys of the dictionary object.
Remove: Removes a key/item pair from the Dictionary object.
RemoveAll: Removes all key/item pairs from the Dictionary object.
Count: Returns the number of items in a collection or Dictionary object. Read-only.
Item: Sets or returns the item for a specified key in a Dictionary object.
Key: Sets a key in a Dictionary object.
Example:
Function KeyExistsDemo
    Dim d, msg ' Create some variables.
    Set d = CreateObject("Scripting.Dictionary")
    d.Add "a", "Athens" ' Add some keys and items.
    d.Add "b", "Belgrade"
    d.Add "c", "Cairo"
    If d.Exists("c") Then
        msg = "Specified key exists."
    Else
        msg = "Specified key doesn't exist."
    End If
    ' d.Item("a") would return the associated item ("Athens").
    ' d.Remove "c" removes one key/item pair; d.RemoveAll removes them all.
    KeyExistsDemo = msg
End Function
items (as a complete function, taking the Dictionary created above as a parameter):

Function ItemsDemo(d)
    Dim a, i, s
    a = d.Items ' Get the items.
    For i = 0 To d.Count - 1 ' Iterate the array.
        s = s & a(i) & "<BR>" ' Create return string.
    Next
    ItemsDemo = s
End Function

Keys (same pattern):

Function KeysDemo(d)
    Dim a, i, s
    a = d.Keys ' Get the keys.
    For i = 0 To d.Count - 1 ' Iterate the array.
        s = s & a(i) & "<BR>" ' Return results.
    Next
    KeysDemo = s
End Function