Monday, June 22, 2009

VB

What is VBScript?

VBScript is a subset of the Visual Basic 4.0 language. It was developed by Microsoft to provide more processing power to Web pages. VBScript can be used for both server-side and client-side scripting. (If you already know Visual Basic or Visual Basic for Applications (VBA), VBScript will be very familiar. Even if you do not know Visual Basic, once you learn VBScript, you are on your way to programming with the whole family of Visual Basic languages.)

Data types

VBScript supports only one data type called 'Variant'. The variant data type is a special kind of data type that can contain different kinds of information. It is the default data type returned by all functions in VBScript. A variant behaves as a number when it is used in a numeric context and as a string when used in a string context. It is possible to make numbers behave as strings by enclosing them within quotes.
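A quick illustration (variable names are made up for the example) of how a variant switches between numeric and string behavior:

```vb
Dim x, y
x = 5          ' a variant holding a number
y = "5"        ' a variant holding a string
MsgBox x + 5   ' numeric context: displays 10
MsgBox y & "5" ' string context (& concatenates): displays 55
```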

Variables

A variable is a placeholder that refers to a memory location that stores program information that may change at run time. A variable is referred to by its name for accessing the value stored or to modify its value.

Variable Declaration

Variables in VBScript can be declared in three ways:

  1. Dim Statement
  2. Public Statement
  3. Private Statement

For example:
Dim No_Passenger
Multiple variables can be declared by separating each variable name with a comma. For example:
Dim Top, Left, Bottom, Right
You can also declare a variable implicitly by simply using its name in your script.
That is not generally a good practice because you could misspell the variable name in one or more places, causing unexpected results when your script is run.
For that reason, the Option Explicit statement is available to require explicit declaration of all variables. The Option Explicit statement should be the first statement in your script.
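As a sketch, Option Explicit turns a misspelled, undeclared name into an explicit run-time error instead of silently creating a new variable:

```vb
Option Explicit

Dim Counter
Counter = 1    ' fine: Counter is declared
Countr = 2     ' run-time error "Variable is undefined": the misspelling is caught
```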
Note:

Variables declared with Dim at the script level are available to all procedures within the script. At the procedure level, variables are available only within the procedure.
Public statement variables are available to all procedures in all scripts.
Private statement variables are available only to the script in which they are declared.

Naming Convention

There are standard rules for naming variables in VBScript. A variable name:

  1. Must begin with an alphabetic character.
  2. Cannot contain an embedded period.
  3. Must not exceed 255 characters.
  4. Must be unique in the scope in which it is declared.

Assigning Values to Variables

Values are assigned to variables by creating an expression as follows: the variable is on the left side of the expression and the value you want to assign to the variable is on the right. For example:

B = 200


Scalar Variables and Array Variables

Much of the time, you only want to assign a single value to a variable you have declared. A variable containing a single value is a scalar variable. Other times, it is convenient to assign more than one related value to a single variable. Then you can create a variable that can contain a series of values. This is called an array variable. Array variables and scalar variables are declared in the same way, except that the declaration of an array variable uses parentheses ( ) following the variable name. In the following example, a single-dimension array containing 11 elements is declared:

Dim A(10)

Although the number shown in the parentheses is 10, all arrays in VBScript are zero-based, so this array actually contains 11 elements. In a zero-based array, the number of array elements is always the number shown in parentheses plus one. This kind of array is called a fixed-size array.
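A short sketch (using the array name from the example above) showing the zero-based bounds with the built-in UBound function:

```vb
Dim A(10), i
For i = 0 To UBound(A)        ' UBound(A) returns 10, the highest index
    A(i) = i * 2
Next
MsgBox "Elements: " & (UBound(A) + 1)   ' displays "Elements: 11"
```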

Constants

A constant is a meaningful name that takes the place of a number or a string, and never changes. VBScript in itself has a number of defined intrinsic constants like vbOK, vbCancel, vbTrue, vbFalse and so on.
You create user-defined constants in VBScript using the Const statement. Using the Const statement, you can create string or numeric constants with meaningful names and assign them literal values. For example:

Const MyString = "This is my string."
Const MyAge = 49

Note that the string literal is enclosed in quotation marks (" "). Also note that constants are public by default.
Within procedures, constants are always private; their visibility can't be changed.
Next post we will deal with constructs and arrays.

This is in continuation from
VB Script and QTP - Part1 on our series of posts on VB Script. Here, we will dwell upon conditional constructs, iterative constructs and arrays.
Conditional Constructs
Conditional Constructs execute statements or repeat certain set of statements based on conditions.
The following conditional constructs are available in VBScript
· If – Then – Else
· Select Case

If – Then – Else Construct

The If – Then- Else Construct is used to evaluate whether a condition is true or false and depending on the result, to specify one or more statements to execute. Usually the condition is an expression that uses a comparison operator to compare one value or variable with another. The If- Then – Else statements can be nested to as many levels as needed.
For example:

Sub ReportValue(value)
    If value = 0 Then
        MsgBox value
    ElseIf value = 1 Then
        MsgBox value
    ElseIf value = 2 Then
        MsgBox value
    Else
        MsgBox "Value out of range!"
    End If
End Sub


You can add as many ElseIf clauses as you need to provide alternative choices. Extensive use of the ElseIf clauses often becomes cumbersome. A better way to choose between several alternatives is the Select Case statement.

Select Case Construct

The Select-Case structure is an alternative to If Then Else for selectively executing one block of statements from among multiple blocks of statements. The Select Case Construct makes code more efficient and readable.

A Select Case structure works with a single test expression that is evaluated once, at the top of the structure. The result of the expression is then compared with the values for each Case in the structure. If there is a match, the block of statements associated with that Case is executed.

For example:

Select Case Document.Form1.CardType.Options(SelectedIndex).Text
    Case "MasterCard"
        DisplayMCLogo
        ValidateMCAccount
    Case "Visa"
        DisplayVisaLogo
        ValidateVisaAccount
    Case "American Express"
        DisplayAMEXCOLogo
        ValidateAMEXCOAccount
    Case Else
        DisplayUnknownImage
        PromptAgain
End Select

Iterative Constructs
Looping allows you to run a group of statements repeatedly. The loop is repeated based on a condition and runs as long as the condition is true. The following looping constructs are available in VBScript.
· Do – Loop

· While – Wend

· For – Next

Do – Loop

Do – Loop statements are used to execute a block of statements based on a condition. The statements are repeated either while a condition is true or until a condition becomes true. While Keyword can be used to check a condition in a Do – Loop construct. The condition can be checked before entering into the loop or after the loop has run at least once.
The basic difference between a "Do While – Loop" and a "Do – Loop While" is that the former executes its body only if the condition in the While clause holds true, whereas a "Do – Loop While" executes its body at least once, because the condition in the While clause is checked at the end of the first iteration.
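A minimal sketch of the difference (values are illustrative):

```vb
Dim n
n = 10

Do While n < 5    ' condition checked first: body never runs
    n = n + 1
Loop

Do                ' body runs once before the condition is checked
    n = n + 1
Loop While n < 5

MsgBox n          ' displays 11
```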
While – Wend

The While...Wend statement is provided in VBScript for those who are familiar with its usage. However, because of the lack of flexibility in While...Wend, it is recommended that you use Do...Loop instead.
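A small While...Wend sketch (the counter name is illustrative):

```vb
Dim counter
counter = 0
While counter < 3
    counter = counter + 1
Wend
MsgBox counter    ' displays 3
```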

For..Next

The For-Next loop can be used to run a block of statements a specific number of times. For loops use a counter variable whose value is increased or decreased with each repetition of the loop. The Step Keyword is used to increase or decrease the counter variable by the value that is specified along with it. The For-Next statement can be terminated before the counter reaches its end value by using the Exit For statement.
For example:
Dim j, total
For j = 2 To 10 Step 2

total = total + j

Next
MsgBox "The total is " & total
Arrays

An array is a contiguous area in the memory referred to by a common name. It is a series of variables having the same data type. Arrays are used to store related data values. VBScript allows you to store a group of common values together in the same location. These values can be accessed with their reference numbers.

An array is made up of two parts, the array name and the array subscript. The subscript indicates the highest index value for the elements within the array. Each element of an array has a unique identifying index number by which it can be referenced. VBScript creates zero based arrays where the first element of the array has an index value of zero.

Declaring Arrays

An array must be declared before it can be used. Depending upon the accessibility, arrays are of two types:
· Local Arrays

A local array is available only within the function or procedure, where it is declared.

· Global Arrays

A global array is an array that can be used by all functions and procedures. It is declared at the beginning of the VBScript Code.

The Dim statement is used to declare arrays. The syntax for declaring an array is as follows:

Dim ArrayName(subscriptvalue)

Where ArrayName is the unique name for the array and subscriptvalue is a numeric value indicating the highest index of the elements in that array dimension (so the array holds subscriptvalue + 1 elements).

Example:

Dim No_Passengers(3)

No_Passengers can store 4 values (indices 0 through 3).

Assigning values to the array

No_Passengers(0) = 1
No_Passengers(1) = 2

No_Passengers(2) = 3

No_Passengers(3) = 4
Static and Dynamic Arrays:
VBScript provides flexibility for declaring arrays as static or dynamic.

A static array has a specific number of elements. The size of a static array cannot be altered at run time.

A dynamic array can be resized at any time. Dynamic arrays are useful when size of the array cannot be determined. The array size can be changed at run time.
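A brief sketch of a dynamic array, using the hypothetical name Passengers; ReDim sizes it and ReDim Preserve resizes it while keeping existing values:

```vb
Dim Passengers()               ' dynamic array: no size yet
ReDim Passengers(2)            ' size it to hold 3 elements
Passengers(0) = "Anil"
ReDim Preserve Passengers(5)   ' grow to 6 elements, keeping existing values
MsgBox Passengers(0)           ' displays Anil
MsgBox UBound(Passengers)      ' displays 5
```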

Next we will deal with user defined procedures, functions and subroutines.

===================================================

In VBScript, there are two types of procedures:

  1. Sub Procedures
  2. Function Procedures

Sub Procedures

A Sub procedure is a series of VBScript statements, enclosed by Sub and End Sub statements, which performs actions but does not return a value. A Sub procedure can take arguments. If a Sub procedure receives no arguments, its Sub statement must include an empty set of parentheses ().

The following Sub procedure uses two intrinsic, or built-in, VBScript functions, MsgBox and InputBox , to prompt a user for information. It then displays the results of a calculation based on that information. The calculation is performed in a Function procedure created using VBScript. The Function procedure is shown after the following discussion.

Sub ConvertTemp()

temp = InputBox("Please enter the temperature in degrees F.", 1)

MsgBox "The temperature is " & Celsius(temp) & " degrees C."

End Sub

Function Procedures

A Function procedure is a series of VBScript statements enclosed by the Function and End Function statements. A Function procedure is similar to a Sub procedure, but it can return a value to the calling procedure. A Function procedure can take arguments (constants, variables, or expressions that are passed to it by a calling procedure). If a Function procedure has no arguments, its Function statement must include an empty set of parentheses. A function returns a value by assigning a value to its name in one or more statements of the procedure. Since VBScript has only one base data type, a function always returns a variant.

In the following example, the Celsius function calculates degrees Celsius from degrees Fahrenheit. When the function is called from the ConvertTemp Sub procedure, a variable containing the argument value is passed to the function. The result of the calculation is returned to the calling procedure and displayed in a message box.

Sub ConvertTemp()

temp = InputBox("Please enter the temperature in degrees F.", 1)

MsgBox "The temperature is " & Celsius(temp) & " degrees C."

End Sub


Function Celsius(fDegrees)

Celsius = (fDegrees - 32) * 5 / 9

End Function

Tips:

  • To get data out of a procedure, you must use a Function. Remember, a Function procedure can return a value; a Sub procedure can't.
  • A Function in your code must always be used on the right side of a variable assignment or in an expression.
  • To call a Sub procedure from another procedure, type the name of the procedure along with values for any required arguments, each separated by a comma. The Call statement is not required, but if you do use it, you must enclose any arguments in parentheses.
  • The following example shows two calls to the MyProc procedure. One uses the Call statement in the code; the other doesn't. Both do exactly the same thing.

Call MyProc(firstarg, secondarg)

MyProc firstarg, secondarg

Notice that the parentheses are omitted in the call when the Call statement isn't used.

Sunday, June 21, 2009

QC-2

Q. What is the use of Test Director software?

Test Director is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With Test Director you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

Q. In the defect life cycle, if you find a defect in QC and think that it is not a valid defect. what will you do? Just close it or store it in any document for future review, if yes what type of document and where?

In the above scenario, the defect can be rejected.

Q. Can you install Quality center in Vista?

No, we cannot install QC on Vista.

QC supports the platforms below:

    - Windows 2000 with Service Pack 4

    - Windows XP with Service Pack 2

Q. What are the tabs in Test Director? Explain each tab.

1. Requirements

2. Test Plan

3. Test Lab

4. Defects

Q. How to switch between two projects?

Suppose there are 2 projects P1 and P2.

You are already logged in to project P1. On the right-hand side of the application there is an option called TOOLS; click on it and you will get an option CHANGE PROJECT. Click on that, then click Select, and you can move to project P2.

Q. In which tab are test cases stored in Test director?

Test cases are written in the Test Plan tab for the requirements recorded in the Requirements part. These test cases are executed in the Test Lab. If a test fails, we can directly link that defect in the Defects tab with a specific ID.

Q. Can we add user defined fields to Test Director?

Yes, we can add the user defined fields in QC 9.0, but not in TD 8.2.

Q. How to map requirements with test cases in Test Director?

There are separate tabs for requirements and test cases (Test plan). For each requirement, there is an option using which we can select the corresponding test case and map to it.

Q. What is the main purpose of storing requirements in TD?

To map the test cases against the specs so that one can find whether there is any missing coverage.

Q. How can we save the tests Executed in test lab?

Select the test you want to save, right-click on it, click Save As, and save it in any format such as Excel, Word, or XML.

Q. What is the Purpose of Creating a Child Requirement?

To add low level requirements for the parent requirement.

Q. How many types of reports can be generated?

Requirements Coverage Report

Planning Report

Execution Report

Defects Report

Q. What is RTM in test director?

RTM is known as Requirement Traceability Matrix.

By RTM, the Requirement of the user can be identified with the exact software workings and the tested action and bugs reported for that particular requirement can be matched and identified in detail.

Q. What is the connection between the Test Plan tab and the Test Lab tab? How can you access your test cases in the Test Lab tab?

The Test Plan tab has all the test cases written or uploaded for a particular project's requirements. The Test Lab tab is where test case execution is done, by creating the same folder structure as in the Test Plan tab and pulling the respective test cases into the grid.

Q. Explain how to link Quality Center with QTP. Please provide detailed steps for connectivity.

There is an option in the QTP toolbar to connect to QC. You can also connect through the File menu: click File -> Quality Center Connection; this opens the Quality Center Server Connection window with a default URL.

Click Connect -> enter your username and password.
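As a sketch, the same connection can also be automated through the QTP Automation Object Model; the server URL, domain, project, and credentials below are placeholders:

```vb
' Sketch only: server URL, domain, project and credentials are placeholder values
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
' Connect(ServerURL, Domain, Project, UserName, Password, PasswordEncrypted)
qtApp.TDConnection.Connect "http://qcserver/qcbin", _
    "MY_DOMAIN", "MY_PROJECT", "username", "password", False
```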


Q. Can you write test cases without mentioning the requirements?

Yes, you can write test cases without using requirements, but it is not recommended.

Q. What are the extra tabs in QC?

Test Director has the Requirements, Test Plan, Test Lab, and Defects tabs. In addition to these, QC has the Business Components and Dashboard tabs.

Q. What is the connection between the Test Plan tab and the Test Lab tab? How can you access your test cases in the Test Lab tab?

The Test Plan tab is used to create the test cases, where we can also write a step-by-step description of the test and the expected results. The Test Lab tab is used to execute the test cases.

You can access the test cases you created by logging in with your username.

Q. How do you create Test script from Quality centre Using QTP?

In QC, test scripts are generated in the Test Plan module. Create the test and first write all the steps needed for the testing manually; then click the script-generate button in the Design Steps tab, and QC prepares a skeleton script in the Test Script tab. Open the Test Script tab and click the Quality Center button; the tool opens automatically. Write the script, then save and close; the same script is reflected in the QC Test Script tab. That is how QC generates the test scripts.

Q. What is the difference between Quality Center and Test Director?

Quality Center is the advanced version of Test Director. In QC there are two extra tabs, Business Components and Dashboard, the latter used to prepare graphs. Given below is the complete difference between the two in detail:

Technology

  • Test Director 8.0: C++, IIS, COM
  • Quality Center 9.0: Java-based back end; runs on application servers

Operating Systems

  • Test Director 8.0: Microsoft Windows
  • Quality Center 9.0: Microsoft Windows, Red Hat Linux, Solaris

Clustering

  • Test Director 8.0: Single server only
  • Quality Center 9.0: Full clustering support

Database Connectivity

  • Test Director 8.0: Requires database client installation; ADO interface
  • Quality Center 9.0: Does not require database client installation; direct access to a database server using a JDBC type 4 driver

Repository

  • Test Director 8.0: Domain repository (TD_Dir)
  • Quality Center 9.0: Repository divided into two subdirectories: a QC directory for default and user-defined domains, and an SA directory for Site Administrator data

Virtual Directory

  • Test Director 8.0: Virtual directory name is tdbin
  • Quality Center 9.0: Quality Center server virtual directory name is qcbin; Site Administrator server virtual directory name is sabin

Supported Databases

  • Test Director 8.0: Microsoft Access, Microsoft SQL Server, Oracle, Sybase
  • Quality Center 9.0: Microsoft SQL Server, Oracle

Site Administrator Data (domains, projects, and users)

  • Test Director 8.0: Data stored in the doms.mdb file
  • Quality Center 9.0: Data stored in the Site Administrator schema on a database server

Common Settings

  • Test Director 8.0: Data stored in the file system
  • Quality Center 9.0: Data stored in the database

User Authentication

  • Test Director 8.0: Windows authentication
  • Quality Center 9.0: LDAP authentication

Q. What is the last version of TD to support QTP?

The last version of TD is 8.0. From version 9.0 onwards it is called QC (Quality Center).

Q. What are the tabs in TD?

Test Director mainly has four tabs:

1. Requirements

2. Test plan

3. Test lab

4. Defects

Q. Is TD web based or client/server?

It is a client/server tool, not web based. It maintains a client/server architecture where the server is the database that stores the test information and the client is the Quality Center machine through which the stored data is retrieved; the server and client communicate using ActiveX controls.

 Q. What is the latest version in TD?

The latest version of QC is 9.5. Before that, the latest version of TD was 8.0.

Q. How to map test cases with requirements in test lab?

First go to the Test Lab and select the test case that needs to be mapped. On the right-hand side of the Test Lab select the Requirement Coverage tab and click the Select Requirement tab; the requirement tree is displayed on the right side. Search for the requirement to be attached, or find it by requirement name in the Find box. Then click Add to Coverage (shown in the requirement tree with the arrow). This adds the requirement to the test case. Now you can close the requirement tree.

Q. How do you prepare Bug Reports? What all do you include in Bug Report?

Bug report preparation:

1. Click on the Defects tab (mandatory).

2. Click on Add.

3. Enter all the details as below:

Summary: what test you are performing, e.g. a GUI test on a particular window

Detected by: tester who found the bug

Version:

Date:

Assigned to: developer who will be fixing it.

Status:

Test set:

Project id:

Severity:

Priority

In the Status field there are six options:

New: when the bug is detected for the first time.
Open: after the bug report is sent to the developer.
Rejected: when the developer does not accept the reported bug as a valid defect.
Re-open: when the bug is found again in the modified version and the report is sent to the developer a subsequent time.
Fixed: when the developer has modified the application so that it is free from the bug.
Closed: the status selected by the QA lead after verifying the fix to confirm that the test passes.

Q. How can we connect a defect report from the Test Lab & Test Plan tabs?

When you link a defect to a test, it is indirectly linked to the requirement as well. First way: in the Test Plan module, select the test to which you want to link the defect and go to the Linked Defects tab; a list of all defects is shown, from which you select the defect to be linked. Second way: in the Defects tab, open the defect you want to link, click Linked Entities, and select the test that needs to be linked to the current defect.

 Q.  How to upload test cases from excel sheet to Quality center test plan section. How to upload data (written test case in excel) to Quality center?

First install the Excel add-in available in QC/TD; you will then get the option Upload to TD for your Excel sheet under the Tools menu. Click on Tools; the rest is self-explanatory as you follow the steps.

 Q. What is the purpose of Dashboard in Quality Center? Mention the advantages of dashboard?

The Dashboard functionality is introduced in Quality Center as an add-in. If users want to analyze their test measurements in a more explanatory way, they can use the Dashboard; with it you can see the test measurements graphically and in charts with different filters.

Q. How do we find that there are duplicate bugs in Test Director or quality center?

Open the Defects tab and click the "Find Similar Defects" button; enter a short defect description in the Find Similar Defects panel that opens, and you will get the similar defects.

Q. Can we also write test cases in test director instead of excel or word if yes, how?

Yes, we can write test cases in TD as well. In TD you have the Test Plan tab; first you map/create a requirement and create a test, and using the Design Steps sub-tab in the Test Plan tree you can write the test steps.

Q. In automation testing (QTP), how do we find the bugs?

In QTP, after recording your application you run the test to check whether it passed or failed. Once the test is executed, the Test Results window displays the result of the run, showing where it failed along with a detailed description of the failure.

Q. How we do the smoke testing?

Smoke testing is conducted to ensure that the most crucial functions of the software work, without bothering with finer details. It is conducted by the testing team on receipt of each build, to check whether the build/release is stable and can be considered for further testing.

 Q. What do you mean by Requirement Coverage?

Requirement coverage comes under Functional Coverage Criteria - Test Development and Optimization

a.  Where to Find Requirements

b.  Traceability

c.  Testability

d.  Attributes of Testable Requirements

e.  Test Matrix

Q. How to write test cases if we have, given requirement and template?

In the test cases we write the test case number, name, date of the test, objective, and the test steps.

Q. Can we have dependency between to bugs in test director? Like Bug #10 is dependent on Bug #1?

Here is the scenario: I raise Bug#1 and another person raises Bug#10, which is dependent on Bug#1. When a new build is delivered after a fix, Bug#1 needs to be tested again, followed by Bug#10. This is the procedure that needs to be followed.

Q. Can we maintain test data in test director?

Yes, we can maintain test data in TD, whether you are uploading an Excel sheet or writing the test cases directly in TD; when you write a test case in TD you can see a separate column for test data.

Q. How do we load the test script into Test Director?

TestDirector has an ADD-INS option. By connecting the WinRunner add-in to TestDirector we can load the test scripts as follows:

Step 1: Create test scripts in WinRunner as usual.

Step 2: Save them to the TestDirector project:

Tools -> TestDirector Connection -> enter the TD URL and connect.

File -> Save -> save the test to the TestDirector project.

Then save the GUI map: GUI Map Editor -> save the GUI file to the TestDirector project.

 Q. Have you integrated your automated scripts from Test Director?

When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector we can specify whether the script is automated or manual; if it is an automated script, TestDirector builds a skeleton for the script that can later be modified into one used to test the AUT (Application Under Test).

Q. How to run several tests at a time using test director, with out using automation tool? (for example if i have given 20 tests to test director for running purpose in today's night i want to check the test results next day) how to do this ?

Click on Run Test Set under the Test Lab; there you have the option of running the tests automatically or manually, and you can either select all tests to run or just one test. The run can be done locally or remotely by creating a host or adding a new host to the list. Please see the user guide for further details.

Q. Where and how do you write test cases in TD? How do you change the status of a bug in TD?

All test cases are written in the Test Plan and then pulled from the Test Plan into the Test Lab. As for defects, the status of a bug is changed using the defect's Status field.

Q. How do you generate test case IDs in Test Director (e.g. Testcase ID 1, Testcase ID 2, Testcase ID 3, Testcase ID 4)?

1. Go to the Requirements tab.

2. Go to View.

3. Click Numeration.

This generates IDs or numbers for the test cases and requirements.

Q. How can we connect Test Director to WinRunner and a database?

An add-in for WinRunner or QTP should be downloaded and installed from Test Director. This integrates Test Director with WinRunner, allowing you to design and run WinRunner tests and to view the results in Test Director.

Q. How can I post the results once I run the test set in Test Director?

While testing each step in the Test Lab there is an Actual field, and a drop-down box contains the status options, for example Passed, Failed, and No Run; select an option and enter your comments in the Actual field.

Q. How many Reports can be generated from Test Director?

Four types of reports can be generated in TD:

1. Test requirements

2. Plan Test

3. Run Test

4. Track Defect

Q. What are the disadvantages of Test Director?

TD does not support formatting, so the test cases need to be saved as plain text.

 Q. What is the role of snap shot in Test Director?

Using this feature we can log a defect with a corresponding screen shot, which gives a better understanding of the defect.

 Q. If you delete a test from Test Lab will it be updated in Requirements/Test Plan tabs also?

No, it will not be reflected in Requirements/Test Plan. For each build/release we create a new test set in the Test Lab.

Q. What is test set?

A test set is a group of test cases stored in the Test Set Builder, from which the tests can be executed in chronological order.

Q. Last night you gave some scripts to run and went home. This morning you come to the office and want to check the results in Test Director (TD). How will you check this?

By checking the Status column against each test case script in the Test Lab tab in TD, which shows whether the test case passed or failed.

Q. How to fetch data from excel sheet using Test Director?
 
Go to Add In menu in Test Director. Find the Excel Add in and Install it in you machine.
Now open Excel. Click on Tools > export to TD and follow the steps.
 
Enter
1. URL of Test director.
2. Domain name and Project Name
3. User name and Password
4. Select any one of these 3 options: requirement or test case or defects
5. Select a map option: a. Select a map; b. Select a new map name; c. Create a temporary map.
6. Map the TD fields to the corresponding Excel columns, i.e. the fields you mentioned in Excel.
Steps 7 & 8 require no input; TD shows the progress and reports that the export was successful.

These are the required steps to export excel into TD.
Q. How to execute test case in TD?

 There are two ways:

1. Manual Runner Tool for manual execution and updating of test status.

2. Automated test case execution by specifying Host name and other automation pertaining details.

Q. Where do you maintain the test cases for the project for automation testing using Test director?

We can maintain the test cases in Test Director or in the QTP tool. We can save project-related documents in Visual SourceSafe (VSS).

 Q. How do you configure test cases in test director for automation Testing?

To configure test scripts you need to connect to the TD URL and run the scripts from there. You can connect by using Connect -> enter the URL where TD is hosted and give the username and password of that particular project in TD.
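For illustration, such a connection can also be scripted outside QTP through the QC Open Test Architecture (OTA) COM API; the ProgID, server URL, domain, project, and credentials below are placeholder assumptions:

```vb
' Sketch using the QC Open Test Architecture (OTA) COM API.
' The ProgID, server URL and credentials are placeholder assumptions.
Dim tdc
Set tdc = CreateObject("TDApiOle80.TDConnection")
tdc.InitConnectionEx "http://qcserver/qcbin"   ' point at the QC server
tdc.Login "username", "password"               ' authenticate the user
tdc.Connect "MY_DOMAIN", "MY_PROJECT"          ' open the domain/project
MsgBox "Connected: " & tdc.Connected
tdc.Disconnect
tdc.Logout
tdc.ReleaseConnection
```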

Q. How do you execute the test cases in Test Director?

Execute test cases in the TEST LAB tab -> Execution Grid -> use RUN or RUN TEST SET.

Then click Execute in the next window; you can mark each step as Passed/Failed, enter the expected and actual results, then close and come back to the Test Lab.

Q. Which are the latest versions of WinRunner, QTP, and Test Director that companies are now using in real time?

 QTP 9.5

Win Runner 9.2

Load Runner 9.2

Quality Center 9.2

 Q. What is Test Bed?

A test bed is the collection of test data and test guidelines. Test beds are the environments in which the standard tasks may be implemented. The purpose of a test bed is to provide metrics for evaluation (objective comparison) and to give the experimenter fine-grained control in testing agents.

 Q. How do you upload test cases into TD?

 Note: The Excel add-in for exporting test cases to TD must be installed.

 
 

In an Excel sheet type the test cases in the following format in each column:

Path - Test Case ID - Description_1 - Step Name- Description - Expected Result

In the Excel sheet select Tools -> Export to TD. A wizard appears; fill in the wizard and map the columns, then click Finish. Your test cases will be uploaded to TD.

Q. What is Test case coverage?

Test case coverage refers to the test cases written to cover a particular test scenario, i.e., the coverage recorded in the traceability matrix.

QC

1. What is TestDirector used for?

TestDirector is a test management tool. Completely web-enabled, TestDirector supports a high level of communication and collaboration among various testing teams, driving a more effective and efficient global application-testing process. One can also create reports and graphs to help review the progress of planning tests, executing tests, and tracking defects before a software release.

2. Why are the requirements linked to the test cases?

TestDirector connects requirements directly to test cases, ensuring that all the requirements have been covered by the test cases.

3. What are the benefits and features of TestDirector?

TestDirector incorporates all aspects of the testing process, i.e. requirement management, test planning, test case management, scheduling, test execution, and defect management, into a single browser-based application. It maps requirements directly to the test cases, ensuring that all the requirements have been covered by the test cases. It can import requirements and test plans from an Excel sheet, accelerating the testing process. It executes both manual and automated tests.

4. What is the use of filters in TD?


Filters in TestDirector are used to narrow results down to what is required; they help to customize and categorize the results. For example, filters can be used to quickly view passed and failed tests separately.

5. What is Test Lab?


In the Test Lab the test cases are executed. Test Lab will always be linked to the test plan. Usually both are given the same name for easy recognition.

6. How to customize the defect management cycle in Quality Center?

First, one should collect all the attributes that have to be part of defect management, such as version, defect origin, and defect details. Then, using the modify options in QC, one can change the defect module accordingly.

7. What is the advantage of writing test cases in Quality Center than writing in excel sheet?

Although creating test cases in an Excel sheet is faster than doing it in QC (Excel is more user friendly), the test cases must then be uploaded to QC, and this process may cause delays for various reasons. Also, QC provides links to other tests, which in turn are mapped to the requirements.

8. What is the difference between TestDirector and Quality Center?

The main difference is that QC is more secure than TestDirector. In Quality Center the login page shows only the projects associated with the logged-in user, unlike TestDirector, where one can see all the available projects. Test management is also much improved in QC, and the defect linkage functionality in QC is more flexible than in TD.

9. What is meant by Instance?

A test instance is an instance of a test case in the Test Lab; basically, it is the test case that you have imported into the Test Lab for execution.

10. What is the use of requirement option in TestDirector?

Requirement module in TD is used for writing the requirements and preparing the traceability matrix.

11. Is it possible to maintain test data in TestDirector?

Yes, one can attach the test data to the corresponding test cases or create a separate folder in the test plan to store it.

12. If one tries to upgrade from TestDirector 7.2 to QC 8.0 then is there risk of losing any data?

No, there is no risk of losing data during the migration process, provided the proper steps for a successful migration are followed.

13. How is a bug closed in TestDirector?

Once the test cases are executed in the Test Lab and bugs are detected, each bug is logged as a defect using the Defect Report tab and sent to the developer. A bug can have 5 different statuses, namely New, Open, Rejected, Deferred, and Closed. Once the bug has been fixed and verified, its status is changed to Closed. This is how the bug lifecycle ends.

14. In TD how are the test cases divided into different groups?

In the test plan of TestDirector one can create separate folders for the various modules of the project: a main module can be created in the test plan and sub-modules added under it.

15. What is the difference between TD and Bugzilla?

TestDirector is a test management tool. In TD one can write manual and automated test cases, add requirements, map requirements to the test cases and log defects. Bugzilla is used only for logging and tracking the defects.

16. Are TestDirector and QC one and the same?

Yes, TestDirector and Quality Center are the same product. From version 8.2 onwards, TD has been known as Quality Center. The latest version of Quality Center is 9.2, and QC is much more advanced than TD.

17. What is the instance of the test case inside the Test Set?

A test set is a container for test cases; many test cases can be stored inside a test set. A test instance is a test case that has been imported into a test set. If another test case shares the same steps part of the way through, you can create another instance of the same test case.

18. What are the various types of reports in TestDirector?

In TD reports are available for requirements, test cases, test execution, defects, etc. The reports give various details like summary, progress, coverage, etc. Reports can be generated from each TestDirector module using the default settings or it can be customized. When customizing a report, filters and sort conditions can be applied and the required layout of the fields in the report can be specified. Sub-reports can also be added to the main report. The settings of the reports can be saved as favorite views and reloaded as required.

19. How can one map a single defect to more than one test script?

Using the 'associate defect' option in TestDirector one can map the same defect to a number of test cases.

20. Is it possible to create custom defect template in TestDirector?

It is not possible to create one's own template for defect reporting in TestDirector, but one can customize the template that is already available in TestDirector as required.

21. Can a script in TD be created before recording script in Winrunner or QTP?

Any automation script can be created directly in TD. Open the tool (WinRunner or QTP) and connect to TD by specifying the URL, domain, project, user ID, and password. Then record the script as you always do; when you save it, you can save it in TD instead of on your local system.

22. How to ensure that there is no duplication of bugs in TestDirector?

In the defect tracking window of TD there is a "Find Similar Defects" icon. Clicking it after writing a defect points out any matching defect that anybody else has already entered.

23. How is the Defect ID generated in TestDirector?

The Defect ID is automatically generated once the defect is submitted in TD.

24. What does the test grid contain?

The test grid displays all the tests related to a project in TD. It contains some key elements: the test grid toolbar, with buttons for commands commonly used when creating and modifying tests; the grid filter, which displays the filter currently applied to a column; the Description tab, which displays a description of the test selected in the grid; and the History tab, which displays the changes made to a test.

25. What are the 3 views in TD?

The three views in TD are Plan Test which is used to prepare a set of test cases as per the requirements, Run Test which is used for executing the prepared test scripts with respect to the test cases and finally Track Defects which is used by the test engineers for logging the defects.

26. How to upload data from an excel sheet to TestDirector?

To upload data from an Excel sheet to TD, the Excel add-in must first be installed. Then select the rows in the Excel sheet to be imported into TD, and finally choose the Export to TD option in the Tools menu of Excel.

27. How many types of tabs are available in TestDirector?

There are 4 types of tabs available in TestDirector. They are Requirement, Test Plan, Test Lab and Defect. It is possible to customize the names of these tabs as desired.

28. Are the 'Not Covered' and 'Not Run' statuses the same?

No. Not Covered refers to requirements for which test cases have not yet been written, whereas Not Run refers to requirements whose test cases are written but have not yet been run.

29. How does TestDirector store data?

In TD data is stored on the server.

30. Why should we create an Instance?

A test instance is used to run a test case in the Test Lab: you run the instance, because a test case cannot be run directly; it must first be placed in a test set.


====================================================================


1. What is meant by test lab in Quality Centre?
The Test Lab is the part of Quality Centre where we execute our tests over different cycles, creating a test tree for each one. We add tests to these test trees from the tests placed under the test plan of the project; internally, Quality Centre refers to those tests while running them in the Test Lab.

2. Can you map the defects directly to the requirements(Not through the test cases) in the Quality Centre?
The following method is most likely to be used in this case:
Create your requirements structure.
Create the test case structure and the test cases.
Map the test cases to the application requirements.
Run and report bugs from your test cases in the Test Lab module.

The database structure in Quality Centre maps test cases to defects only if you have created the bug from a test case run. It may be possible to update the mapping with some code in the bug script module (from the Customize Project function), but as far as I know it is not possible to map defects directly to requirements.

3. How do you run reports from Quality Centre? Does anyone have a good white paper or articles?
This is how you do it:
1. Open the Quality Centre project.
2. Display the Requirements module.
3. Choose the report:
Analysis > Reports > Standard Requirements Report

4. Can we upload test cases from an excel sheet into Quality Centre?
Yes. Go to the Add-ins page of Quality Centre, find the Excel add-in, and install it on your machine.
Now open Excel and you will find a new menu option, Export to Quality Centre. The rest of the procedure is self-explanatory.

5. Can we export the file from Quality Centre to excel sheet. If yes then how?
Requirements tab: right-click on the main requirement, click Export, and save as a Word, Excel, or other template. This saves all the child requirements as well.

Test Plan tab: only individual tests can be exported; no parent-child export is possible. Select a test script, click the Design Steps tab, right-click anywhere in the open window, click Export, and save as.

Test Lab tab: select a child group and click the Execution Grid if it is not already selected. Right-click anywhere; the default save option is Excel, but the data can also be saved in document and other formats. Choose the All or Selected option.

Defects tab: right-click anywhere in the window, export all or selected defects, and save as an Excel sheet or document.

6. How many types of tabs are there in Quality Centre. Explain?
There are four tabs available:

1. Requirements: to track the customer requirements.
2. Test Plan: to design the test cases and to store the test scripts.
3. Test Lab: to execute the test cases and track the results.
4. Defects: to log defects and to track the logged defects.

7. How to map the requirements with test cases in Quality Centre?
1. In the Requirements tab, select Coverage View.
2. Select a requirement by clicking on a parent, child, or grandchild.
3. On the right-hand side (in the coverage view window) another window will appear. It has two tabs:
a) Tests Coverage
b) Details
The Tests Coverage tab is selected by default, or you can click on it.
4. Click the Select Tests button; a new window will appear on the right-hand side listing all tests. You can select any test case you want to map to your requirement.

8. How to use Quality Centre in real time project?
Once the preparation of test cases is complete:
1. Export the test cases into Quality Centre (the export wizard has a total of 8 steps).
2. The test cases are loaded into the Test Plan module.
3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.
4. In the Test Lab, we execute the test cases and mark them as passed, failed, or incomplete. We generate graphs in the Test Lab for the daily report and send it to the onsite team (or wherever it needs to be delivered).
5. If we find any defects, we raise them in the Defects module, attaching a screenshot to each defect.

9. Difference between Web Inspect-QA Inspect?
QA Inspect finds and prioritizes security vulnerabilities in an entire web application or in specific usage scenarios during testing, and presents detailed information and remediation advice about each vulnerability.
Web Inspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the web application. With web Inspect, auditors, compliance officers and security experts can perform security assessments on a web enabled application. Web inspect enables users to perform security assessments for any web application or web service, including the industry leading application platforms.

10. How can we add requirements to test cases in Quality Centre?
Just use the Add Requirements option.
Two kinds of requirements are available in TD:
1. Parent requirements
2. Child requirements
A parent requirement is essentially the title of a requirement; it covers the high-level functions of the requirement. A child requirement is a subtitle; it covers the low-level functions.

Perf_websphere

WebSphere : Performance Testing and Analysis

This article provides advice and methods for finding and resolving common performance problems with IBM® WebSphere® Portal. The WebSphere Portal SEAL team engages with high-profile customers experiencing significant problems with their WebSphere Portal deployments. In roughly 50 percent of these engagements, the major complaint is performance related. The task usually becomes one of finding and resolving these performance issues, often after the system has already been put into production.



Finding and fixing performance problems in a production environment is challenging on a number of levels. Optimally, most bottlenecks in the system should be found and fixed before the system is allowed into production. This article explains a tested process that can ensure that, with high probability, most of the significant performance issues are found and addressed before you promote a system to production. Performance testing and analysis have three main objectives:

1. Determining the load level at which a system under test fails
2. Finding bottlenecks in a system that throttle throughput and removing them as soon as possible and practical
3. Capacity planning, for example, predicting the amount of horsepower needed to sustain defined user loads within agreed-upon service level agreements (SLAs)

The system is defined as the complete end-to-end set of components required to deliver the requested Web page to the requesting user's browser. The most visible and often the most troublesome components tend to be WebSphere Portal itself, the WebSphere Portal database, the Lightweight Directory Access Protocol (LDAP) directory, and the back-end systems (databases, application servers, and so forth) that supply content to the portlets.

The back-end systems in many systems tend to present the most risk in WebSphere Portal deployments because they are frequently maintained by separate organizations. This separation dilutes the communication channel between the WebSphere Portal deployment team and the back-end teams with respect to performance objectives.

The methodology presented in this article is a holistic approach that meets all three of the objectives when it is executed successfully.

The environment

To meet the performance test objectives outlined previously, the performance test environment needs to be either the production environment itself or a mirror of it that has, as far as practically possible, the same hardware, the same topology, and the same back-end systems. If any piece of this complex test topology is different from its production counterpart, you must extrapolate the results in the test environment to predict its effect in the production environment. These extrapolations generally require detailed implementation knowledge of the portal and the deployed applications, which generally are not available to the testing organization. By making the test environment equivalent to the production environment, your confidence in the test results as they relate to what actually happens in production becomes acceptable.

An important goal of the test environment is the repeatability of results. As slight changes are made in the system, repeatability ensures that you can accurately measure the effect of these changes. For that reason, it is optimal to have the system on an isolated network during the performance testing. Running the performance test on the production network introduces variability (for example, user traffic) that can skew such metrics as page render response time. There is also a more pragmatic reason to isolate the performance test network. Putting WebSphere Portal under stress likely puts the corporate network under stress. This stress is often problematic during normal business hours.

If placing the performance test on an isolated network is not feasible, you should at least try to ensure that the components of the test are all collocated on the same subnet of a network router. Normal WebSphere Portal best practice recommends using a gigabit Ethernet connection between the portal and its database. Optimally, this connection extends to the LDAP servers, the Web servers, and important back-end services. It is crucial that the load generator also be on a local segment to the Web server and/or the portal itself.

A common customer concern involves the load generators being on the same local LAN segment as the portal itself. In this case, "This test does not get a true picture of the performance of the system as it excludes the network from the data center to the users." The answer to this concern is often difficult for customers to accept. The process described here is for tuning and resolving issues with the portal and its surrounding components. Trying to tune the network between the users (or the load generators) and the portals makes the analysis and problem resolution needlessly complex. We therefore remove it from the test. There are far better tools and processes for network tuning than the processes used here.

Portal infrastructure baseline

In contrast to the mirrored production environment, it is strongly advisable to also conduct an incremental set of baseline tests that exercise the infrastructure. At that point, subsequent tests should gradually augment the portal with customer-written code. The test plan should thus move from a simple topology to the final production topology to make it easier to isolate problematic components.

The first test is the complete WebSphere Portal infrastructure using an out-of-the-box portal. Transfer the database, and enable security. Make sure that all front-end Web servers, firewalls and load balancers are in place. Security managers (for example, Computer Associates SiteMinder or IBM Tivoli® Access Manager) should also be in place and correctly configured. Create a simple home page with a couple of portlets that do not access any back-end systems (for example, the World Clock portlet). Create a simple load testing script that accesses the unauthenticated home page and then logs in (authenticates) and idles without logging out. From this point, you want to add simulated users (Vusers) until the system is saturated. Using the bottleneck analysis techniques described below, find and fix any bottlenecks in the infrastructure. Note the performance baseline of this system.

Now, add to the system any customized themes and skins, and repeat the previous test. Find and fix any important bottlenecks in the revised system. Finally, as described below, add the actual portlets to be used on the home page and perform bottleneck analysis.

This baseline environment can be very effective in finding bottlenecks in the infrastructure that are independent of the application. Further, it can provide a reference when analyzing the extent to which the applications place additional load above and beyond the basic WebSphere Portal infrastructure.

Your strategy is to conduct the same tests listed below for bottleneck analysis in this baseline environment, optimize the environment, and then perform bottleneck analysis with the actual applications.

Application of the Portal Tuning Guide recommendations

Apply the recommendations outlined in the WebSphere Portal Tuning Guide to all systems before you embark on any performance testing. The guide provides a good starting point because it fixes known performance inhibitors in a default WebSphere Portal installation. Although bottleneck analysis would likely find the same problems, it is better to remove them from the beginning.

Load generation

A proper performance test also requires the use of a load generator that produces simulated user requests for Web pages. It is important that this tool produce such metrics as response time and page views per second. These metrics allow you to determine when the system fails its SLA contract or is saturated to the point that injecting more page requests per unit time does not result in higher page production. Saturation is discussed later in this article. The generator's ability to aggregate data such as CPU utilization on the portal and HTTP servers as well as mod_status data from the HTTP server aids in problem determination.

A number of tools are commonly used to create Web traffic (also known as drive load) in the test system. Some of the more commonly used tools include Mercury LoadRunner, Borland SilkPerformer, and IBM Rational® Performance Tester.

It is important that the load generator have sufficient virtual users (vUsers) to drive the system to saturation. Note that virtual users do not map directly to actual users. A virtual user represents an active channel on the load generator. A virtual user may simulate multiple actual users; however, only one actual user can be active for each virtual user.

It is also important, especially in a WebSphere Portal context, that if the system requires authenticated access to the WebSphere Portal applications under test, sufficient unique test user IDs exist in the LDAP directory and scripts ensure that only a reasonable number of duplicated logins occur during the test. (A reasonable number in this context accounts for the fact that some users might have a couple of instances of the browser open, each with the same WebSphere Portal login ID.) WebSphere Portal has a large caching infrastructure for portal artifacts. These artifacts are generally cached on a per-user basis. If the load simulation uses the same user ID for all tests, performance appears artificially high because the artifacts do not need to be loaded from the LDAP directory and the database.
The general methodology

The sections below describe the iterative process that is used to do the actual analysis of the system. The sections that precede "The process" define concepts that are important to understand during the execution of that process.
User scenarios

To tune the WebSphere Portal system to handle large numbers of users and to accurately predict its ability to handle specific numbers of users correctly, it is important to determine the most probable scenarios for users of the system. The test must then accurately simulate those user scenarios using the load generator. One effective way to do this step is to list the most likely use cases. Write a script for each of these use cases or as many as are practical. Now, assign a probability of likelihood that a percentage of the whole user population will execute that scenario. As the test is run, assign use cases to Vusers in the same proportion as the expected general population. As the number of Vusers is ramped up (discussed later), try to maintain this proportion.

NOTE: "Vuser" is a LoadRunner term. It represents one active channel over which requests are made and returned.
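The proportional assignment of use cases to Vusers described above can be sketched in a few lines. This is only an illustration, not part of any load tool; the scenario names and percentages below are hypothetical:

```python
import random

# Hypothetical use-case mix; replace with your own expected probabilities.
USE_CASE_MIX = {
    "browse_home_page": 0.50,
    "search_content": 0.30,
    "update_profile": 0.20,
}

def assign_use_cases(n_vusers, mix, seed=42):
    """Assign a use case to each Vuser in proportion to the expected mix."""
    rng = random.Random(seed)
    scenarios = list(mix)
    weights = [mix[s] for s in scenarios]
    return [rng.choices(scenarios, weights=weights)[0] for _ in range(n_vusers)]

assignments = assign_use_cases(100, USE_CASE_MIX)
```

As Vusers are ramped up, reassigning with the same mix keeps the simulated population in roughly the expected proportion.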
Think time

Think time is the average amount of time that a normal user pauses during individual mouse clicks or key presses during the course of using WebSphere Portal. In the load generation tools, this time is usually programmable, yielding a random time within a predefined range.

As think time is reduced, the number of requests per second increases, which in turn increases the load on the system. Reducing think time generally increases the average response time for WebSphere Portal login and page-to-page navigation. Therefore, accurately estimating real user think time is important for producing an accurate model of the system in production, particularly for capacity planning.

In most use cases, a think time of 10 seconds plus or minus 50 percent is reasonable for a portal having experienced users. A figure closer to 30 seconds is more reasonable for a portal with inexperienced users.
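The think-time figures above can be expressed as a small helper. This is a minimal sketch assuming a uniform distribution around the mean, which is one common way load tools randomize think time:

```python
import random

def think_time(mean_seconds=10.0, variation=0.5):
    """Random think time uniformly within mean +/- variation.

    The defaults model an experienced-user portal: 10 s +/- 50%,
    i.e. a value between 5 and 15 seconds.
    """
    low = mean_seconds * (1 - variation)
    high = mean_seconds * (1 + variation)
    return random.uniform(low, high)
```

For a portal with inexperienced users, call it as think_time(30.0, 0.5) instead.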
Cookies and sessions

Generally, most real users log into the portal and execute the task that needs to be done; however, they rarely log out by using the logout button. Rather, they let the browser sit idle until their session times out. Typically, a lot of sessions in memory are waiting for cleanup pending the WebSphere Application Server session timeout. This behavior increases the Java™ Virtual Machine (JVM) heap working set, which increases the probability of heap exhaustion in the JVM. Heap exhaustion can be both a performance bottleneck and a cause for a JVM failure.

Effective simulations must model this behavior of users who do not explicitly log out. As each individual simulation executes a particular use case, it should end the use case by going idle as opposed to logging out. As the script cycles back around to log in a new user on this particular Vuser, the cookies for the old session (typically JSESSIONID) and Lightweight Third-Party Authentication (LTPA), along with any application-specific cookies, need to be cleaned up appropriately before logging in the next user using that script. This model also implies that sufficient test IDs need to exist so that a test ID can sit idle for the length of the WebSphere Application Server session timeout without risk of being reused until the previous session times out.
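A back-of-the-envelope bound on the size of the test ID pool follows from this: an ID is unavailable from login until its idle session expires, so the pool must cover every login that occurs in that window. A sketch with assumed numbers:

```python
import math

def min_test_ids(logins_per_minute, use_case_minutes, session_timeout_minutes):
    """Lower bound on unique test IDs so that no ID is reused before
    its previous (idle) session has timed out."""
    busy_window = use_case_minutes + session_timeout_minutes
    return math.ceil(logins_per_minute * busy_window)

# e.g. 10 logins/minute, a 5-minute use case, and a 30-minute session timeout
pool_size = min_test_ids(10, 5, 30)  # 350 unique test IDs
```

In practice you would add headroom on top of this bound, since login rates fluctuate during a ramp.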
Metrics

It is important that the scripts be instrumented for metrics. The most important metrics are login response time along with page-to-page response times. Most of the load generators already provide aggregate Page View per second (PV/s) metrics. PV/s is the most important metric for determining the saturation point of the system.

At the conclusion of each test, a graph of Vusers ramp rate versus the three metrics is required for doing analysis.

In addition to the metrics gathered by the load generation tool, a system monitoring tool such as IBM Tivoli Composite Application Manager for WebSphere or the Computer Associates Wily IntroScope product should be employed. These tools run on the WebSphere Portal instance and instrument the JVM directly. They are useful in both detection and resolution of system bottlenecks.
Vusers and think time

A common misconception is that to accurately simulate a large population that generates requests at a certain rate, a smaller number of users that generate requests with a smaller think time will suffice.

It's important to note that the effects of running with a small number of users and a low think time result in unrealistically high cache hit rates. It also means that too few sessions are created. Because session size is often a serious problem for many portlet applications, this approach gives an unrealistically good view of the system performance and leads to surprises in production.

Another poor practice is running a small set of vUsers with no think time.
Repeatability principle

In a large population, it is easy to assume that most user actions appear to be random as users navigate through the portal. Experienced users, though, typically use the same patterns over and over. Furthermore, from a test engineering perspective, the user scenarios need to be reasonably static so that system changes can be effectively measured from run to run.

Therefore, the definition of the repeatability principle is that for all runs of a particular scenario, the metrics (average response time, PV/s, saturation point, and so on) produced by the runs all converge to the same results if the runs are sufficiently long. Note that with more variation (that is, unique scenarios) in the test scripts, longer times are required to converge, on average.

The simulation scripts written for the performance tests should adhere to the repeatability principle.

Driving to saturation

Saturation is defined as the number of active Vusers at which point adding more Vusers does not result in an increase in the number of PV/s. Note that this saturation point is for a given simulation; each different simulation likely has a different saturation point. The saturation point varies depending on the usage pattern.

To effectively drive a system to saturation, add Vusers a few at a time, let the system stabilize, observe whether PV/s increases, and add more Vusers as needed. ("Stabilize," in this context, means that the response times are steady within a window of several minutes.) In LoadRunner, if you plot Vusers against throughput (PV/s), the PV/s initially rises linearly with the number of Vusers, then reaches a maximum and actually decreases slightly from that point. The saturation point is the number of Vusers at which the PV/s is at its maximum.
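The saturation point can be read mechanically off the (Vusers, PV/s) curve just described. A minimal sketch, with made-up measurements for illustration:

```python
def saturation_point(samples):
    """samples: (vusers, pv_per_sec) pairs ordered by increasing Vusers.

    The saturation point is the Vuser count at which throughput peaks;
    beyond it, adding Vusers no longer increases (and may decrease) PV/s.
    """
    vusers, pvps = max(samples, key=lambda s: s[1])
    return vusers, pvps

# Illustrative ramp data: throughput rises, peaks, then dips past saturation.
data = [(10, 20.0), (20, 39.0), (30, 55.0), (40, 62.0), (50, 61.0), (60, 58.0)]
peak = saturation_point(data)  # (40, 62.0)
```

Real runs are noisier than this, so each (Vusers, PV/s) point should be an average taken after the system has stabilized at that Vuser level.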




Bottleneck analysis

The goal of bottleneck analysis is to remove impediments to driving the system to a higher load. The metric defined for higher load is a higher number of PV/s at saturation. Therefore, bottleneck analysis removes impediments to improve the saturation point.

Bottlenecks in a WebSphere Portal environment under load are generally the result of contention for shared resources. This contention can be the result of synchronized Java classes, methods, or data structures, contention for serial resources (for example SystemOut.log) or excessive response times in back-end databases or Web servers. You must also be mindful of bottlenecks such as the network itself. Components such as routers and firewalls can impose congestion control or can be poorly tuned.

As load increases, contention for these resources increases, making contention locks easier to detect and correct. This detail is why effective load testing is a requirement for bottleneck analysis.

A common mistake is to focus only on page response times. Many performance testers prefer to optimize render response times because this delay is the most obvious user requirement. This type of performance analysis requires path length reduction in the customer portlet applications. Response time optimization is generally more appropriately done in a non-loaded system and with tooling specific to the task (for example, JProbe).

The process

The process of performing bottleneck analysis is straightforward. For a particular performance analysis (for example, LoadRunner) simulation, follow these steps:

1. Ramp a single WebSphere Portal JVM to saturation.
2. Determine the bottlenecks that exist at saturation.
3. Resolve the bottlenecks.
4. Unless satisfied with system capacity, go to step 1 and find the next bottleneck.

Note that this process is iterative. The key concept is that you fix one bottleneck to find the next bottleneck.

A single JVM is used in this case because detection of the bottleneck is much simpler. Finding and resolving cross-JVM contention can be quite complex. After a single JVM has been tuned as much as desired, you move on to the capacity planning analysis for multiple nodes as described later in this article.

Note on ramp rates

A common question in performance testing is the rate at which Vusers should be ramped into the system.

Do not ramp in several hundred users as quickly as possible until the system collapses. This approach is not representative of reality, and it does not provide repeatable results.

You should model reality. Predict or measure the actual highest ramp rate that you would expect your portal to endure. This rate might typically occur during the hours that your users most often log into your portal, such as first thing in the morning when they arrive at the office. We recommend that you ramp a small fixed number (for example, two Vusers per minute) for a set period of time (for example, five minutes), do not add any users for a time to let the system stabilize (for example, five minutes), then loop back and add another batch of Vusers in the same fashion.
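The stepped ramp described above can be sketched as a schedule generator; the class and parameter names are hypothetical, and the output is simply the cumulative Vuser count per minute:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the stepped ramp: add a small fixed number of
// Vusers per minute for a ramp window, then hold steady for a
// stabilization window, and repeat.
public class RampSchedule {

    // Returns the cumulative Vuser count for each minute of the run.
    public static List<Integer> schedule(int perMinute, int rampMinutes,
                                         int holdMinutes, int totalMinutes) {
        List<Integer> counts = new ArrayList<>();
        int vusers = 0;
        int minuteInCycle = 0;
        for (int t = 0; t < totalMinutes; t++) {
            if (minuteInCycle < rampMinutes) {
                vusers += perMinute;           // ramp phase: add Vusers
            }                                  // else hold phase: stabilize
            counts.add(vusers);
            minuteInCycle = (minuteInCycle + 1) % (rampMinutes + holdMinutes);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Two Vusers per minute for five minutes, then hold five minutes.
        System.out.println(schedule(2, 5, 5, 12));
        // prints [2, 4, 6, 8, 10, 10, 10, 10, 10, 10, 12, 14]
    }
}
```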

This technique gives the portal time to fill the various caches in an orderly fashion and provides for the ability to more accurately detect saturation points.

Priming the portal

After a portal restart, a short script should be executed prior to the main test to preload certain caches (for example, WebSphere Portal access control and the anonymous page cache) before the real test starts. Failure to do so can skew the initial response times inordinately.

Analysis techniques

After you have a portal at saturation, you can determine the cause of a bottleneck in the system by taking a Java thread dump (using a kill -3 command) against the portal Java process under test. A thread dump shows the state of all threads in the JVM. The general procedure is to look for threads that are all blocked by the same condition or are all waiting in the same method of the same class.

In general, search for threads that are blocked or in a wait state. By ascertaining why certain classes statistically show up blocked, you can then proceed to remove that reason and thus remove the bottleneck. The next section discusses some common bottleneck problems. Diagnosing problems beyond these common cases is really the art of WebSphere Portal bottleneck analysis and takes time and experience to master.
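As an illustration of the grouping idea, a minimal sketch can tally blocked threads by the monitor class they wait on. This assumes HotSpot-style "waiting to lock" lines; real dumps (for example, IBM javacores) differ in format, and production analysis should group by full stack traces:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: tally blocked threads in a HotSpot-style thread
// dump by the monitor class they are waiting on. This simplified
// version only looks at "waiting to lock <...> (a com.example.Foo)"
// lines rather than full stack traces.
public class ThreadDumpTally {

    private static final Pattern WAITING_TO_LOCK =
        Pattern.compile("waiting to lock <[^>]+> \\(a ([\\w.$]+)\\)");

    // Returns a map from monitor class name to the number of threads
    // blocked on an instance of that class.
    public static Map<String, Integer> blockedByClass(String dump) {
        Map<String, Integer> counts = new TreeMap<>();
        Matcher m = WAITING_TO_LOCK.matcher(dump);
        while (m.find()) {
            counts.merge(m.group(1), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String dump = """
            "WebContainer : 1" ... BLOCKED
               - waiting to lock <0x1> (a com.example.LogWriter)
            "WebContainer : 2" ... BLOCKED
               - waiting to lock <0x1> (a com.example.LogWriter)
            "WebContainer : 3" ... BLOCKED
               - waiting to lock <0x2> (a java.util.HashMap)
            """;
        System.out.println(blockedByClass(dump));
        // prints {com.example.LogWriter=2, java.util.HashMap=1}
    }
}
```

Two threads blocked on the same LogWriter instance would point you at the logging serialization problem discussed below.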

If the bottleneck is not the WebSphere Portal JVM itself, detection and resolution techniques are varied and outside the scope of this article.

Common problems

This section lists common problems that many customers have seen during their performance testing.

Logging

Logging using direct writes to SystemOut.log or using a logging class such as log4j causes serialization between running threads and significantly degrades portal performance. In production portal systems, log only what is absolutely needed. When using log4j, log only errors; do not log warnings or informational messages. If logging is required for audit purposes, consider using a portal service or a different service running in a separate JVM.

Turn off all logging and remove all debug code that writes to files before doing performance testing.

Java class and variable synchronization

Method-level synchronization can be problematic when many threads sit in a monitor wait (MW) state while a single thread holds the lock. In this case, you have Java code that is synchronized and is causing serialization in the system.

Use of synchronized class variables or synchronized HashMaps can also cause this problem.

In both cases (method or variable synchronization), the problem can be exacerbated by arbitrarily increasing the number of WebSphere Application Server transport threads in which the portal runs. By increasing the number of threads, you increase the probability of hitting portal code that is synchronized in this fashion, which ultimately serializes all the threads.

Database contention

If the thread dump indicates numerous threads waiting in Java Database Connectivity (JDBC) classes in Socket.read() methods, then there are likely response time issues in the database itself.

When threads are waiting on JDBC pool resources in WebSphere Application Server, you see the threads in a condition wait (CW) state in the WebSphere Application Server connection pool (J2C) classes. In this case, you might need to increase the pool size for this data source, or you might need to increase the number of connections that the database server can handle concurrently.

LDAP responsiveness

If several threads are in the Socket.read() method of the Java Naming and Directory Interface (JNDI) classes, they are likely waiting on results from the LDAP directory.

Excessive session sizes

If customer-written portlets are storing too much data in the session, that condition invariably leads to memory and performance issues.

Exceptions being thrown

Even though this problem might seem obvious, in many customer situations performance analysis and bottleneck reduction are attempted in systems that are repeatedly throwing exceptions in the logs. Handling unchecked exceptions slows the JVM down and causes serial I/O (printing) to the SystemOut.log print stream, which serializes the WebSphere Application Server transport threads.

A more general issue involves trying to characterize and tune a system that is inherently flawed. All results that are generated in such an environment must be labeled as non-repeatable and subject to change (potentially in a significant way) as the flaws are eliminated.

Finally, it should be your policy that the WebSphere Portal system is not allowed to enter a high-load production environment with any errors in the logs.

Dynacache concerns

DRS replication modes

WebSphere Portal requires that the WebSphere Application Server Dynamic Cache Service be enabled. The dynamic cache (or "dynacache" as it is commonly known) is a data structure that is commonly used to provide caching of data from back-end services (for example, database results) in WebSphere Portal. Dynacaches can ensure cache synchronization across a cluster of WebSphere Portal members. For proper operation in a cluster, WebSphere Portal requires that cache replication be enabled. The default mode of replication, PUSH, can cause performance problems, though, in the WebSphere Portal environment. This setting is contained on the deployment manager on a per portal cluster member (JVM) basis.

The use of NOT SHARED is strongly recommended for the vast majority of WebSphere Portal configurations. Three actions are needed to ensure that the node is fully enabled for WebSphere Portal. The first is to set the replication mode to NOT SHARED using the WebSphere Application Server console for each cluster member. The second is to install PK64925. The third is to install PK62457. Depending on which WebSphere Portal service level is installed, these two APARs might already be installed.

WebSphere Content Manager's dynacaches also should be set to NOT SHARED. To complete this task, in the Deployment Manager console, navigate to Resources - Cache Instances - Object Cache Instances and change each of the individual cache instances. As of the time of this writing, there are 11 instances for WebSphere Content Manager.

Dynacache eviction concerns

Since WebSphere Portal version 5.1.0.2, the size of the WebSphere Portal dynacaches has been increased to a default that is appropriate for most customers' WebSphere Portal applications. There are situations, though, in which these defaults are inadequate and can cause significant performance problems. For example, if a portal has a large number of derived pages with a common parent in WebSphere Portal V5.1.0.x, the portal access control (PAC) caches can be small enough to cause cache thrashing. Similarly, if the portal objectID cache is too small, thrashing occurs.

Customers need to install and use the advanced dynacache monitor and monitor all the caches. If one or more of the caches seem to have large amounts of least recently used (LRU) evictions, the size of that cache might need to be increased. The sizes of the WebSphere Portal caches are mostly located in the CacheManagerService.properties file.

Customer portlets

Some common problems noted from past customer engagements include the following:

1. Use of synchronized class variables.
2. Excessive database calls. Consider using DB caching layers or dynacache to reduce the load on application databases or back-end services.
3. Unsynchronized use of HashMaps. There are timing scenarios in which these classes get into infinite loops if separate threads hit the same HashMap without being synchronized.
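A minimal sketch of the fix for the third point: replace a shared, unsynchronized HashMap with a ConcurrentHashMap. The cache and lookup names here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a plain HashMap is unsafe when portlet request threads read
// and write it concurrently (on older JVMs a corrupted bucket chain
// could even loop forever). A ConcurrentHashMap provides thread-safe
// access without a single global lock.
public class PortletCacheExample {

    // Hypothetical per-application cache shared across request threads.
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    public static String lookup(String key) {
        // computeIfAbsent is atomic: only one thread populates each key.
        return CACHE.computeIfAbsent(key, k -> expensiveBackendCall(k));
    }

    private static String expensiveBackendCall(String key) {
        return "value-for-" + key; // stand-in for a database or service call
    }

    public static void main(String[] args) {
        System.out.println(lookup("user:42")); // prints value-for-user:42
        System.out.println(lookup("user:42")); // second call hits the cache
    }
}
```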





Capacity planning

The goal of capacity planning is to estimate, prior to entering production, the total number of WebSphere Portal JVMs required to satisfy a certain user population within predetermined SLA metrics.

Typical metrics include these:

1. Portal login response time (typically around four seconds)
2. Page-to-page response times after being already logged in (typically around two seconds)

The process

The process for running the load test looks very much like the one for running the test for bottleneck analysis except that there is now a second criterion for stopping the test. One criterion is saturation, as previously defined. The second criterion is failure of any of the SLA metrics.

If the test reaches saturation before any of the SLA metrics are exceeded and if it has already been determined that there are no bottlenecks that can or will be excised, then you can immediately calculate the number of nodes required.

If the SLA metrics are exceeded before reaching saturation, then you must analyze the failure to determine the next course of action. If you determine that you do not need to resolve the response time issues, then proceed directly to calculating the number of nodes, as discussed in the next section of this article.

Extrapolating results

In general, if a single WebSphere Portal node can sustain n users within given SLA metrics, then 2 nodes can sustain 1.95 * n users. The accepted horizontal scaling factor for a portal is .95. Thus, if a single WebSphere Portal node can sustain n users within given SLA metrics, then m nodes can sustain:

n × (1 + 0.95 + 0.95^2 + 0.95^3 + … + 0.95^(m-1))

Thus, the horizontal scaling factor is slightly less than linear.
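The formula can be sketched in code; the 0.95 scaling factor follows the discussion above, and the example numbers are hypothetical:

```java
// Sketch of the horizontal scaling formula: with a scaling factor of
// 0.95, m nodes sustain n * (1 + 0.95 + ... + 0.95^(m-1)) users,
// where n is the capacity of a single node.
public class HorizontalScaling {

    public static double sustainedUsers(double singleNodeUsers, int nodes) {
        double factor = 0.0;
        for (int i = 0; i < nodes; i++) {
            factor += Math.pow(0.95, i);       // each node adds a bit less
        }
        return singleNodeUsers * factor;
    }

    public static void main(String[] args) {
        // Two nodes sustain 1.95x the single-node capacity, matching
        // the 2-node case stated above.
        System.out.println(sustainedUsers(1000, 2));
    }
}
```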

This scaling factor assumes that the database capacity does not bottleneck the system. In fact, this scaling factor is primarily a metric of the degradation of the WebSphere Portal database for logging in users.

Vertically cloning (scaling) is somewhat different. Vertical cloning is indicated when a single JVM saturates a node at a processor utilization around 80 percent or less. Note that in most cases, bottleneck analysis usually provides relief. In the absence of Java heap issues, a single JVM can usually be tuned to saturate a node at 85 to 90 percent processor utilization.

Vertical scaling is discussed more fully later in this article.

Testing with the full cluster

If sufficient load generation capacity exists (including test IDs), it is wise to do a final series of tests in which the whole user community is simulated against the full cluster to ensure viability of the entire system.

Failover testing

If there is a system requirement for full performance during a failover, this scenario should also be scripted and tested.

Before running this scenario, review the plugin-cfg.xml file at the HTTP server to ensure that the cluster definitions are correct. Consider adding the parameter ServerIOTimeOut to the cluster members. This parameter augments the ConnectIOTimeout parameter. ConnectIOTimeout is the amount of time before a cluster member is marked as down in the event that the remote server fails to open a socket connection upon request. The parameter is normally present in the plugin-cfg.xml file and defaults to 0, which means that it relies on the operating system to return timeout status to the plug-in instead of the plug-in explicitly timing the connection itself.

The parameter ServerIOTimeout is, by default, not included in plugin-cfg.xml. This parameter sets a time-out on the actual HTTP requests. If the portal does not answer in the allotted time, the server is marked down. This step is useful because there are certain classes of failures whereby the WebSphere Portal cluster member opens a socket upon request, but the JVM has hung and will not respond to HTTP requests. Without ServerIOTimeout, the plug-in does not mark the cluster member as down; however, it is not able to handle requests. This situation results in requests being routed to a hung server.

During this test, start with the cluster fully operational. Enable Vusers in your simulation to the maximum number that your SLA mandates. Then, stop one or more cluster members. You can do this step gracefully by stopping the cluster members from the deployment manager or by simulating a network failure by removing the Ethernet cable from a cluster node. Many other failure modes might be worth investigating (for example, database failures, Web service failures, and so on). After the simulated cluster member outage, ensure that the surviving cluster members handle the remaining load according to your system requirements. Then, restart the offline cluster members to ensure that the load returns to a balanced state over time.

Ongoing capacity planning

If a system is already in production and is meeting its current SLA goals, you also want to plan for future growth in the number of users of the system. Assuming that the applications on the WebSphere Portal do not significantly change, you can derive the necessary measurements and calculations from a running production system. You need proper tooling, though, to take the measurements.

In short, if n JVMs together support x users, then a single JVM supports x / (1 + 0.95 + 0.95^2 + … + 0.95^(n-1)) users. Using the formula explained previously, you can then easily plan for future growth.
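A sketch of this back-solve, using the same 0.95 scaling factor; the measured numbers here are hypothetical:

```java
// Sketch of ongoing capacity planning: back out the single-JVM
// capacity from a measured production cluster, then project the
// capacity of a larger cluster with the same 0.95 scaling factor.
public class GrowthPlanning {

    static double scalingSum(int nodes) {
        double sum = 0.0;
        for (int i = 0; i < nodes; i++) sum += Math.pow(0.95, i);
        return sum;
    }

    // n JVMs support x users; one JVM supports x / (1 + 0.95 + ...).
    public static double singleJvmUsers(double totalUsers, int jvms) {
        return totalUsers / scalingSum(jvms);
    }

    public static double projectedUsers(double totalUsers, int jvms,
                                        int futureJvms) {
        return singleJvmUsers(totalUsers, jvms) * scalingSum(futureJvms);
    }

    public static void main(String[] args) {
        // A two-JVM cluster measured at 1950 users implies a 1000-user
        // single JVM; project the capacity of a three-JVM cluster.
        System.out.println(projectedUsers(1950, 2, 3));
    }
}
```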




Vertical clustering considerations

A common technique for improving performance is to vertically clone the WebSphere Portal JVM on the same physical system. Engineers initially assume that if one JVM is good, two must be better.

The ultimate goal of vertical cloning is to increase the net aggregate throughput in transactions per second of the sum of the cluster members (clones) on a single node. This goal is usually possible only if, when running under the load, a single, well-tuned cluster member does not consume most of the CPU available in that node. In fact, in a well-tuned WebSphere Portal, vertical cloning always carries a cost. Vertical cloning is indicated when the benefits outweigh the costs.

WebSphere Application Server clustering comes in two flavors. The first is the horizontal type. In this arrangement, a functionally equivalent duplicate of an application server is created on another node. This duplication is done with a WebSphere component known as the deployment manager. The resulting set of equivalent nodes is known as a cluster. The result is that a front-end HTTP server can forward a request from a client to either of the cluster members (clones), and the result is identical.

Similarly, you can also create cluster members vertically, which means that multiple JVMs are created on the same node. Each cluster member can serve the same content just as in the horizontal cluster member case.

In the WebSphere Portal case, each cluster member shares the one (and only one) WebSphere Portal database. This statement changes slightly in WebSphere Portal V6, but it is true for V5.x. Therefore, as the number of cluster members increases, the WebSphere Portal database has a higher likelihood of becoming a bottleneck due to the dilution of its capacity.

Costs of vertical clustering

When additional cluster members are active on the same physical node, costs are associated with it. First, there is process context switching. The operating system must now manage additional processes (JVMs).

Second, there is more contention for processor resources. Vertical clustering is generally a bad choice if the number of active cluster members exceeds the number of processors in the node minus one. You should never have three cluster members on a three-processor node, for example. Two cluster members on a three-processor node might be acceptable under certain conditions.

Indications for vertical clustering

This section describes some of the situations in which vertical cluster members provide value.

Reliability

Apart from performance concerns, having additional cluster members might make sense strictly for reliability reasons. If a WebSphere Portal installation is a single node, then in the event of a software failure that crashes one JVM (without crashing the operating system), you can mitigate the effect of the crash by adding vertical cluster members. The assumption is that most software failures are localized to a single JVM and do not affect the others on the same node. Therefore, the cluster continues serving requests while the failing JVM is restarted.

Memory utilization

In a 32-bit operating system, process address spaces are limited to 4 gigabytes of memory. Most operating systems split this space as 2 gigabytes of user space and 2 gigabytes of kernel space. There are exceptions whereby the user space can be increased to 3 gigabytes and the kernel reduced to 1 gigabyte (Solaris, AIX®, and Microsoft® Windows® 2003 Enterprise, for example).

If the address space available to the JVM is 2 gigabytes, then the JVM can allocate approximately a 1.5-gigabyte heap space.

There are cases when the combination of the WebSphere Portal base memory working set, along with the total memory required for all the portlets running during stress, could approach and exhaust the 1.5-gigabyte heap. When this happens, and if there is still a significant amount of processor resource available (20 to 30 percent or more), then vertical cloning could increase the total throughput of the box by effectively creating 3 gigabytes of JVM heap and dividing the workload evenly between the two 1.5-gigabyte heap JVMs.

Java synchronized methods and class variables

If your WebSphere Portal application (and the portal itself) uses enough synchronized methods or class variables, you can, under load, end up with a high and frequent number of blocked threads in the application server. You can identify this situation by taking thread dumps under load and noticing that there are lots of Web container threads sitting in MW state waiting for these synchronized artifacts.

In this case, reducing the maximum number of Web container threads on a per-cluster-member basis reduces these stalls. If, after that change, the processor is not consumed as described previously, then vertical cloning can increase the aggregate throughput for the whole node.




Conclusion

With proper testing before putting WebSphere Portal into production, you can remove many common performance problems, thereby providing for a much smoother user experience. This article provided a framework for building the test plan and execution processes needed to ensure that performance is acceptable and predictable as the system is deployed to production.


Resources

* Participate in the discussion forum.

* Learn more about IBM WebSphere Extended Cache Monitor.

* Refer to the IBM WebSphere Portal Information Center tips on performance tuning.

* Refer to the IBM WebSphere Portal Performance and Tuning Guide.



About the author



Alex Lang joined IBM in 1982. Since that time he has had various technical and management assignments in networking, digital signal processing, Java advocacy, and IBM WebSphere. He is currently the technical team lead for the WebSphere Portal SEAL team. His primary focus is resolving critical customer situations with the architecture, deployment, and operation of WebSphere Portal.