Friday, July 24, 2009

QTP Q&A1

What is Quick Test Pro? What is Quick Test Professional?


Mercury QuickTest Professional™ provides the industry's best solution for functional test and regression test automation, addressing every major software application and environment. This next-generation automated testing solution deploys the concept of keyword-driven testing to radically simplify test creation and maintenance. Unique to QuickTest Professional's keyword-driven approach, test automation experts have full access to the underlying test and object properties via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.

QuickTest Professional satisfies the needs of both technical and non-technical users. It enables you to deploy higher-quality applications faster, cheaper, and with less risk. It works hand-in-hand with Mercury Business Process Testing™ to bring non-technical subject matter experts into the quality process in a meaningful way. Plus, it empowers the entire testing team to create sophisticated test suites with minimal training.

The deployment of Mercury QuickTest Professional is optimized through the use of Mercury best practices. Mercury best practices cover all aspects of deployment, including product installation and operation, organizational design, process implementation, continual process improvement and measurement of return on investment (ROI). Throughout your implementation, Mercury applies these best practices to your specific situation, creating world-class procedures for you that drive long-term success.

What's New in QuickTest Professional 8.2?

  • Keyword View: Lets you easily build and maintain tests without writing VBScripts.
  • Auto-Documentation: Provides improved test clarity and the ability to view test steps in plain English.
  • Step Generator: Allows you to quickly insert custom-built functions into your tests.
  • Mercury Business Process Testing: Enhanced integration with BPT -- Business Components, Scripted Components, and Application Areas.
  • Enhanced Expert View: Provides greater efficiency when generalizing test components.
  • Action Parameters: Allows you to generalize testing actions for greater reusability.
  • Data Parameters: You can now specify test or action parameters to pass values into and from your test, and between actions in your test.
  • Open XML Report Format for Test Results: Test results are now stored in an open XML format, enabling you to easily customize the reports according to your own requirements, and to integrate the test result information with other applications.
  • Unicode Support: Lets you test global deployments of your enterprise applications.
  • Function Definition Generator: You can use the new Function Definition Generator to generate definitions for user-defined functions, add header information to them, and register functions to a test object.
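
As a rough illustration of the last point, a user-defined function can be registered as a method of a test object class with RegisterUserFunc; the function body and the object names in the commented usage line are illustrative assumptions, not from an actual repository:

' Hypothetical user-defined function that sets a value and logs it to the results
Function MySet(obj, value)
    obj.Set value
    Reporter.ReportEvent micDone, "MySet", "Value entered: " & value
End Function

' Register MySet so it can be called as a method of any WebEdit test object
RegisterUserFunc "WebEdit", "MySet", "MySet"

' Example usage: Browser("Welcome").Page("Login").WebEdit("userName").MySet "mercury"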

Explain QTP Testing process?


The QuickTest testing process consists of 7 main phases:
Create your test plan
Prior to automating there should be a detailed description of the test including the exact steps to follow, data to be input, and all items to be verified by the test. The verification information should include both data validations and existence or state verifications of objects in the application.

Recording a session on your application
As you navigate through your application, QuickTest graphically displays each step you perform in the form of a collapsible icon-based test tree. A step is any user action that causes or makes a change in your site, such as clicking a link or image, or entering data in a form.


Enhancing your test
Inserting checkpoints into your test lets you search for a specific value of a page, object or text string, which helps you identify whether or not your application is functioning correctly.
NOTE: Checkpoints can be added to a test as you record it or after the fact via the Active Screen. It is much easier and faster to add the checkpoints during the recording process.
Broadening the scope of your test by replacing fixed values with parameters lets you check how your application performs the same operations with multiple sets of data.
Adding logic and conditional statements to your test enables you to add sophisticated checks to your test.
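
As a minimal sketch of such enhancements, the lines below verify a checkpoint and report the result through a conditional statement; the object and checkpoint names are illustrative, not from an actual test:

' Verify a checkpoint inserted during recording and report the outcome
If Browser("Mercury Tours").Page("Flight Confirmation").Check(CheckPoint("Confirmation Text")) Then
    Reporter.ReportEvent micPass, "Confirmation", "Checkpoint passed"
Else
    Reporter.ReportEvent micFail, "Confirmation", "Checkpoint failed"
End If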

Debugging your test
If changes were made to the script, you need to debug it to check that it operates smoothly and without interruption.


Running your test on a new version of your application
You run a test to check the behavior of your application. While running, QuickTest connects to your application and performs each step in your test.

Analyzing the test results
You examine the test results to pinpoint defects in your application.

Reporting defects
As you encounter failures in the application when analyzing test results, you will create defect reports in Defect Reporting Tool.


How is run-time data (parameterization) handled in QTP?

You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
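
A small sketch of driving a step from the Data Table; the file path, column name, and object names are illustrative:

' Import an external spreadsheet into the run-time Data Table
DataTable.Import "C:\TestData\logins.xls"

' Read the current row of the Global sheet and use the value in a step
userName = DataTable("UserName", dtGlobalSheet)
Browser("Welcome").Page("Login").WebEdit("userName").Set userName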


What is keyword view and Expert view in QTP?


With QuickTest's keyword-driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View. Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that QuickTest Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.


Explain about the Test Fusion Report of QTP?


Once a tester has run a test, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with QuickTest Professional, you can share reports across an entire QA and development team.


Which environments does QTP support?


QuickTest Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.




What is QTP or Quick Test Pro?


QuickTest is a graphical-interface record-playback automation tool. It can work with any web, Java, or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QuickTest Professional also enables you to test Java applets and applications and multimedia objects in applications, as well as standard Windows applications, Visual Basic 6 applications, and .NET framework applications.

Explain the QTP Tool interface?


QTP Tool interface contains the following key elements:

Title bar
displaying the name of the currently open test

Menu bar
displaying menus of QuickTest commands

File toolbar
containing buttons to assist you in managing tests

Test toolbar
containing buttons used while creating and maintaining tests

Debug toolbar
containing buttons used while debugging tests.

Note: The Debug toolbar is not displayed when you open QuickTest for the first time. You can display the Debug toolbar by choosing View > Toolbars > Debug. For additional information, refer to the QuickTest Professional User's Guide.

Action toolbar
containing buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow.

Note: The Action toolbar is not displayed when you open QuickTest for the first time. You can display the Action toolbar by choosing View > Toolbars > Action. If you insert a reusable or external action in a test, the Action toolbar is displayed automatically. For additional information, refer to the QuickTest Professional User's Guide.

Test pane
containing two tabs to view your test: the Tree View and the Expert View

Test Details pane
containing the Active Screen

Data Table
containing two tabs, Global and Action, to assist you in parameterizing your test

Debug Viewer pane
containing three tabs to assist you in debugging your test: Watch Expressions, Variables, and Command. (The Debug Viewer pane can be opened only when a test run pauses at a breakpoint.)

Status bar
displaying the status of the test

How does QTP recognize objects in the AUT?


QuickTest stores the definitions for application objects in a file called the Object Repository. As you record your test, QuickTest will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by QuickTest), and will contain a set of properties (type, name, etc.) that uniquely identify each object. Each line in the QuickTest script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.
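
For example, a recorded step might look like the lines below, where "userName" and "Sign-In" are logical names stored in the Object Repository (the names are illustrative):

Browser("Welcome: Mercury Tours").Page("Welcome: Mercury Tours").WebEdit("userName").Set "mercury"
Browser("Welcome: Mercury Tours").Page("Welcome: Mercury Tours").WebButton("Sign-In").Click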


What are the types of Object Repositories in QTP?


QuickTest has two types of object repositories for storing object information: shared object repositories and action object repositories. You can choose which type of object repository you want to use as the default type for new tests, and you can change the default as necessary for each new test.
The object repository per-action mode is the default setting. In this mode, QuickTest automatically creates an object repository file for each action in your test so that you can create and run tests without creating, choosing, or modifying object repository files. However, if you do modify values in an action object repository, your changes do not have any effect on other actions. Therefore, if the same test object exists in more than one action and you modify an object's property values in one action, you may need to make the same change in every action (and any test) containing the object.

Explain the check points in QTP?


A checkpoint verifies that expected information is displayed in the application while the test is running. You can add eight types of checkpoints to your test for standard web objects using QTP.

  • A page checkpoint checks the characteristics of a web page.
  • A text checkpoint checks that a text string is displayed in the appropriate place in the application.
  • An object checkpoint (Standard) checks the values of an object in the application.
  • An image checkpoint checks the values of an image in the application.
  • A table checkpoint checks information within a table in the application.
  • An accessibility checkpoint checks the web page for Section 508 compliance.
  • An XML checkpoint checks the contents of individual XML data files or XML documents that are part of your Web application.
  • A database checkpoint checks the contents of databases accessed by your web site.


In how many ways can we add checkpoints to an application using QTP?


We can add checkpoints while recording the application, or we can add them after recording is completed using the Active Screen. (Note: for the second method, the Active Screen must be enabled while recording.)


How does QTP identify objects in the application?


QTP identifies an object in the application by its Logical Name and Class.
For example:
The Edit box is identified by

  • Logical Name : PSOPTIONS_BSE_TIME20
  • Class: WebEdit

If an application's name changes frequently, how does QTP handle it?

For example, while recording the window has the name "Window1" and while running it is "Window2"; in this case, how does QTP handle it?
QTP handles such situations using regular expressions.
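
As a hedged sketch of the idea, the recorded object's name property can be marked as a regular expression, or a regular expression can be used directly in a programmatic description; the title pattern below is illustrative:

' Match "Window1", "Window2", ... with a regular expression in a programmatic description
Window("regexpwndtitle:=Window[0-9]+").Activate

' Or update the recorded test object's description at run time
Window("Window1").SetTOProperty "regexpwndtitle", "Window[0-9]+"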

What are the Features & Benefits of Quick Test Pro (QTP 8.0)?

  • Operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center.
  • Introduces next-generation zero-configuration Keyword Driven testing technology in Quick Test Professional 8.0, allowing for fast test creation, easier maintenance, and more powerful data-driving capability.
  • Identifies objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution.
  • Collapses test documentation and test creation to a single step with Auto-documentation technology.
  • Enables thorough validation of applications through a full complement of checkpoints.

How to handle the exceptions using recovery scenario manager in QTP?

There are four trigger events during which a recovery scenario should be activated:
  • A pop-up window appears in an opened application during the test run
  • A property of an object changes its state or value
  • A step in the test does not run successfully
  • An open application fails during the test run
These triggers are considered exceptions. You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps: 1. Triggered Events, 2. Recovery Steps, 3. Post-Recovery Test Run.

What is the use of Text output value in QTP?

Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values that the application takes for each run and output them to the Data Table.
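
A rough script-level equivalent, capturing a run-time value into the Data Table; the object, property, and column names are illustrative:

' Capture the confirmation number displayed at run time and store it in the Data Table
confNumber = Browser("Mercury Tours").Page("Flight Confirmation").WebElement("ConfNumber").GetROProperty("innertext")
DataTable("ConfirmationNumber", dtGlobalSheet) = confNumber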

How to use the Object spy in QTP 8.0 version?

There are two ways to spy on objects in QTP: 1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat). 2) Through the Object Repository dialog: click the Object Spy button, and in the Object Spy dialog click the button showing a hand symbol. The pointer then changes into a hand symbol, and we point it at the object to spy on its state. If the object is not visible, or the window is minimized, hold the Ctrl button, activate the required window, and then release the Ctrl button.


What is Parameterizing Tests?

When you test your application, you may want to check how it performs the same operations with multiple sets of data. For example, suppose you want to check how your application responds to ten separate sets of data. You could record ten separate tests, each with its own set of data. Alternatively, you can create a parameterized test that runs ten times: each time the test runs, it uses a different set of data.

What is test object model in QTP?

The test object model is a large set of object types or classes that QuickTest uses to represent the objects in your application. Each test object class has a list of properties that can uniquely identify objects of that class and a set of relevant methods that QuickTest can record for it. A test object is an object that QuickTest creates in the test or component to represent the actual object in your application. QuickTest stores information about the object that will help it identify and check the object during the run session.

What is Object Spy in QTP?

Using the Object Spy, you can view the properties of any object in an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the selected object's hierarchy tree and its properties and values in the Properties tab of the Object Spy dialog box.

What is the difference between an image checkpoint and a bitmap checkpoint?

Image checkpoints enable you to check the properties of a Web image. You can check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. QuickTest captures the specified object as a bitmap and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space. For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly. You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded). Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.

In how many ways can we parameterize data in QTP?

There are four types of parameters: Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test. Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, Quick Test uses a different value from the Data Table. Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that Quick Test generates for you based on conditions and options you choose. Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have Quick Test generate a random number and insert it in a number of tickets edit field.
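
A brief sketch of three of these parameter types in script form; the column, variable, and environment names are illustrative:

' Data Table parameter: a different value is used on each iteration
departCity = DataTable("DepartCity", dtLocalSheet)

' Environment variable parameter: a value shared across actions during the run session
appURL = Environment("TestURL")

' Random number generated in script
Randomize
ticketCount = Int(Rnd * 100) + 1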

How do you do batch testing in WinRunner, and is it possible in QTP? If so, explain.

Batch testing in WinRunner is nothing but running the whole test set by selecting Run Test Set from the Execution Grid. The same is possible with QTP also. If our test cases are automated, then by selecting Run Test Set all the test scripts can be executed. In this process the scripts are executed one by one, while all the remaining scripts wait in the queue.

If you are given a few thousand tests to execute in two days, what do you do?

Ad hoc testing is done. It covers at least the basic functionality to verify that the system is working fine.

What does it mean when a checkpoint is in red color? What do you do?

A red color indicates failure. Here we analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue.

What is the file extension of the code file & object repository file in QTP?

The code file extension is .vbs and the object repository file extension is .tsr.

Explain the concept of object repository & how QTP recognizes objects?

Object Repository: displays a tree of all objects in the current component, or in the current action or entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository. QuickTest learns the default property values and determines which test object class the object fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.

What are the properties you would use for identifying a browser & page when using descriptive programming?

Name would be another property apart from title that we can use.
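
For instance, a browser and page can be identified directly by those properties in a descriptive-programming step; the property values are illustrative patterns:

' Identify the browser by name and the page by title, bypassing the Object Repository
Browser("name:=Mercury Tours").Page("title:=Find a Flight.*").WebButton("name:=Continue").Click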

Give me an example where you have used a COM interface in your QTP project?

A COM interface appears in the scenario of a front end and a back end. For example, if you are using Oracle as the back end and VB (or any other language) as the front end, then for better compatibility we go for an interface, of which COM is one. CreateObject creates a handle to an instance of the specified object so that the program can use the methods of that object. It is used for implementing Automation (as defined by Microsoft).
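
A minimal sketch of CreateObject from a QTP script; the ProgID is the standard Excel one, and the workbook path is hypothetical:

' Create a COM handle to Excel and read a cell from a workbook
Set xlApp = CreateObject("Excel.Application")
Set wb = xlApp.Workbooks.Open("C:\TestData\accounts.xls")
cellValue = wb.Worksheets(1).Cells(1, 1).Value
wb.Close False
xlApp.Quit
Set xlApp = Nothing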

Explain in brief about the QTP Automation Object Model.

Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-to-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements, to design your program.
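
A short sketch of driving QuickTest through the automation object model from an external VBScript file; the test path is hypothetical:

' Launch QuickTest, open a test, run it, and report the result status
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
qtApp.Open "C:\Tests\LoginTest"
qtApp.Test.Run
MsgBox "Run status: " & qtApp.Test.LastRunResults.Status
qtApp.Test.Close
qtApp.Quit
Set qtApp = Nothing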

Why do we need Automation?

Automation is used:

To speed up the testing process

To reduce human errors

To maintain tests for reuse


What are the advantages of QTP when compared with other functional automation tools?

QTP is an advanced keyword-driven testing tool.

The Action Parameters allow you to generalize the testing actions for greater reusability

It supports ERP/CRM, .NET, Web Services and Multimedia

It contains an Active Screen which gives the snapshot of the application

It supports languages such as European languages, Japanese, Chinese, and Korean


What are the limitations of QTP?

QTP does not support Flash


Does QuickTest have any debugging capabilities?

Yes

What are the environments supported by QTP?


Windows Applications
  • MFC
  • Visual Basic
  • Java
  • ActiveX

Enterprise Applications
  • SAP
  • Oracle
  • PeopleSoft
  • Siebel

Web Technologies
  • HTML
  • DHTML
  • JavaScript

Browsers
  • IE
  • Netscape
  • AOL

Emerging Technologies
  • .NET WinForms, WebForms, Web services
  • J2EE Web services
  • XML, WSDL, UDDI

Terminal Emulators
  • 3270
  • 5250
  • VT100

Server Technologies
  • Oracle
  • Microsoft
  • IBM
  • BEA
  • ODBC
  • COM/COM+

Multimedia
  • RealAudio / RealVideo
  • Windows Media Player
  • Flash

Languages
  • European
  • Japanese
  • Chinese (traditional and simplified)
  • Korean









Explain Testing Life Cycle

Identify the objects and add them to the Object Repository

Identify the Reusable Actions

Identify the Functions

Author the Script for the testing

Enhance the Script based on the Requirements

Debug the Test

Run the Test

Analyze the Results

Report the Defects


What are the different types of recording?

There are three types of recording in Quick Test

Standard Recording

Analog Recording

Low-Level Recording


Differences between analog and low-level recordings?

Analog Recording

Records the Exact mouse movements and the keyboard operations

Records with respect to Screen and Window

Generates a single statement called a Track in the test script

We cannot edit the steps in the Analog recording

Low-Level Recording

Records all the run-time objects as WinObject

Records any object, whether or not QTP identifies it


How to record on non standard menus?

Record using Analog mode and Low-Level mode


How can I record on objects or environments not supported by QuickTest?

By creating Virtual Objects or performing analog recording on those objects


What is the Syntax for an Exist sync statement?

Object.Exist(Timeout)

Example:

If Browser ("Mercury Tours").Exist Then
MsgBox "The browser exists."
End
If




What is the difference between Wait() and Exist()?


You can enter Exist and/or Wait statements to instruct QuickTest to wait for a window to open or an object to appear. Exist statements return a Boolean value indicating whether or not an object currently exists. Wait statements instruct QuickTest to wait a specified amount of time before proceeding to the next step. You can combine these statements within a loop to instruct QuickTest to wait until the object exists before continuing with the test or component.
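
A small sketch combining the two; the window name and timings are illustrative:

' Poll for up to 30 seconds for the window to appear before continuing
counter = 0
Do While (Not Window("Flight Reservation").Exist(1)) And (counter < 30)
    Wait 1  ' pause one second between checks
    counter = counter + 1
Loop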


Why do you need synchronization and explain how global synchronization is implemented?

When you run a test or component, your application may not always respond with the same speed. For example, it might take a few seconds:

  • for a progress bar to reach 100%
  • for a status message to appear
  • for a button to become enabled
  • for a window or pop-up message to open
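
Global synchronization is governed by the object synchronization timeout in the test settings; beyond that, a step can also wait for a specific object state. A sketch, with illustrative object names and timeout:

' Wait up to 10 seconds (10000 ms) for the button to become enabled, then click it
If Browser("Mercury Tours").Page("Book a Flight").WebButton("Continue").WaitProperty("disabled", 0, 10000) Then
    Browser("Mercury Tours").Page("Book a Flight").WebButton("Continue").Click
End If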


What are the different types of views available in QTP?

Keyword View

Expert View

What are mandatory properties?

Mandatory properties provide unique identification of objects. The default properties are also called mandatory properties. For example, the default properties for the Web Image object are alt, html tag, and image type.


Why do we use 'SPY'?

The Object Spy is used to view the properties and methods of an object in the application


How to change the logical name of a Test Object?

The Logical name is the name of the object that QTP identified in the application.

Right click on the Object name and select "Rename" to rename the logical name of the object


In how many ways can you add objects to the repository?

Use the Add Objects option in the Object Repository dialog box. You can add any object as a single object or a parent object, along with all its children.

Choose the View/Add Object option from the Active Screen.

Insert a step in your test or component for a selected object from the Active Screen.


What are the uses of the Active Screen (at least three)?

Active screen is used to show the snapshot of the application

To insert the Check Points we can use the Active screen

To view the Source

To add the objects to the object repository


Can I store functions and subroutines in a function library?

Yes

How we make actions reusable?

After creating the test action, in the Keyword View right-click the respective action, select Action Properties, and check the "Reusable Action" checkbox


What are the different files available in QTP and their use?

Object Repository File – To store the Objects in the application

Recovery Scenario File – To store the Collection of the Recovery Scenarios

Library File – To store reusable functions and subroutines

VBS File – A VBScript file containing functions that can be used by tests

What is Smart identification?

Smart Identification is used when there is ambiguity between two objects.

When two objects in an application contain the same properties, identifying the objects is difficult; in such conditions QuickTest identifies the object by enabling Smart Identification.


What is correlation?

The process in which the output value of one variable is used as an input to another variable


What is recovery scenario? Do we use this in real time?

Unexpected events, errors, and application crashes during a run session can disrupt your run session and distort results. This is a problem particularly when running tests or components unattended—the test or component is suspended until you perform the operation needed to recover.



How to use Environment variables?
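
Environment variables hold values that are available to every action in a test during the run session and are read through the Environment object. A hedged sketch; "AppURL" is a user-defined example, while OS is one of QuickTest's built-in environment variables:

' Read a built-in environment variable
MsgBox "Test is running on: " & Environment("OS")

' Create a user-defined environment variable at run time and reuse it across actions
Environment.Value("AppURL") = "http://newtours.demoaut.com"
MsgBox "Application under test: " & Environment.Value("AppURL")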

What is the extension on recovery scenario file?

.qrs

Which method is used to call a function that is in a VBS file?

The ExecuteFile statement can be used to run a .vbs file so that its functions become available to the test; alternatively, the file can be associated with the test as a resource in the Test Settings.

How many ways can we provide input to an action?

We can provide input to an action in three ways:

Data Table

Environment variables

Action (input) parameters


How to execute a VB function?

Call FunctionName(arguments), or simply FunctionName arguments; if the function returns a value, use result = FunctionName(arguments).


How many types of Check points are there in QTP?

Standard Checkpoint

Text Checkpoint

Text Area Checkpoint

Bitmap Checkpoint

Database Checkpoint

Accessibility Checkpoint

XML Checkpoint (Web Frame)

XML Checkpoint (File)


What is descriptive programming?

You can instruct QuickTest to perform methods on objects without referring to the Object Repository and without referring to the object's logical name. To do this, you provide QuickTest with a list of properties and values that it can use to identify the object or objects on which you want to perform a method. This is called descriptive programming.
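
A minimal sketch of a descriptive-programming step; the property values are illustrative:

' Identify the edit box by its properties instead of an Object Repository entry
Browser("title:=Welcome.*").Page("title:=Welcome.*").WebEdit("name:=userName", "html tag:=INPUT").Set "mercury"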


How to send messages to the test results?

Reporter.ReportEvent is used to send messages to the test results
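
For example (the step names and details are arbitrary):

' Write custom pass/fail messages into the test results
Reporter.ReportEvent micPass, "Login step", "User logged in successfully"
Reporter.ReportEvent micFail, "Balance check", "Expected 100, found 90"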


How does QuickTest capture user processes in Web pages?

How can I record and run tests on objects that change dynamically from viewing to viewing?

Sometimes the content of objects in a Web page or application changes due to dynamic content. You can create dynamic descriptions of these objects so that QuickTest will recognize them when it runs the test


How can I check that a child window exists (or does not exist)?

Sometimes a link in one window creates another window. You can use the Exist method to check whether or not a window exists. For example:

Browser ("Window_logical_name").Exist

You can also use the ChildObjects method to retrieve all child objects (or the subset of child objects that match a certain description) on the Desktop or within any other parent object.
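
A short sketch of ChildObjects with a programmatic description; the object names are illustrative:

' Count all links on a page that match a description
Set oDesc = Description.Create()
oDesc("micclass").Value = "Link"
Set links = Browser("Mercury Tours").Page("Home").ChildObjects(oDesc)
MsgBox "Links found: " & links.Count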


How does QuickTest record on dynamically generated URLs and Web pages?

QuickTest actually clicks on links as they are displayed on the page. Therefore, QuickTest records how to find a particular object, such as a link on the page, rather than the object itself. For example, if the link to a dynamically generated URL is an image, then QuickTest records the "IMG" HTML tag, and the name of the image. This enables QuickTest to find this image in the future and click on it.


How does QuickTest handle cookies?

Server side connections, such as CGI scripts, can use cookies both to store and retrieve information on the client side of the connection. QuickTest stores cookies in the memory for each user, and the browser handles them as it normally would.


Where can I find a web page's cookie?


Is text area check point supported by web application?



What is the difference between image check point and Bitmap check point?

Image checkpoints enable you to check the properties of a Web image. In the Image Checkpoint Properties dialog box, you can specify which properties of the image to check and edit the values of those properties. This dialog box is similar to the standard Checkpoint Properties dialog box, except that it contains the Compare image content option. This option enables you to compare the expected image source file with the actual image source file.


Bitmap Checkpoint checks an area of your Web page or application as a bitmap. For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly.

How do I maintain my test when my application changes?

The way to maintain a test when your application changes depends on how much your application changes. This is one of the main reasons you should create a small group of tests rather than one large test for your entire application. When your application changes, you can re-record part of a test. If the change is not significant, you can manually edit the test to update it.

How do you handle new objects that appear in a new version of the same software?

By adding the new objects to the Object Repository

By changing the Index property in the repository


How do we add a step to the Action during test run?


How does QuickTest handle session IDs?

The server, not the browser, handles session IDs, usually by a cookie or by embedding the session ID in all links. This does not affect QuickTest.

How does QuickTest handle server redirections?

When the server redirects the client, the client generally does not notice the redirection, and misdirections generally do not occur. In most cases, the client is redirected to another script on the server. This additional script produces the HTML code for the subsequent page to be viewed. This has no effect on QuickTest or the browser.

How does QuickTest handle meta tags?

Meta tags do not affect how the page is displayed. Generally, they contain information only about who created the page, how often it is updated, what the page is about, and which keywords represent the page's content. Therefore, QuickTest has no problem handling meta tags.

Does QuickTest work with .asp?

Dynamically created Web pages utilizing Active Server Page technology have an .asp extension. This technology is completely server-side and has no bearing on QuickTest.

Does QuickTest work with COM?

QuickTest complies with the COM standard.

QuickTest supports COM objects embedded in Web pages (which are currently accessible only using Microsoft Internet Explorer) and you can drive COM objects in VBScript.

Does QuickTest work with XML?

XML is eXtensible Markup Language, a pared-down version of SGML for Web documents, that enables Web designers to create their own customized tags. QuickTest supports XML and recognizes XML tags as objects.

You can also create XML checkpoints to check the content of XML documents in Web pages, frames or files. QuickTest also supports XML output and schema validation.


Sunday, July 12, 2009

LoadRunner Good Material

 

Soak Tests (Also Known as Endurance Testing)

Soak testing is running a system at high levels of load for prolonged periods of time.  A soak test would normally execute several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Also, it is possible that a system may ‘stop’ working after a certain number of transactions have been processed due to memory leaks or other defects.  Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems due to their relatively short duration.

Diagram: Typical usage profile that needs to be considered before commencing soak testing

The above diagram shows activity for a certain type of site. Each login results in an average session of 12 minutes duration, with an average of eight business transactions per session.

A soak test would run for as long as possible, given the limitations of the testing situation.  For example, weekends are often an opportune time for a soak test.  Soak testing for this application would be at a level of 550 logins per hour, using typical activity for each login. 

The average number of logins in this example is 4,384 per day, but it would take only 8 hours at 550 per hour to run an entire day's activity through the system.

By starting a 60-hour soak test on Friday evening at 6 pm (to finish at 6 am Monday morning), 33,000 logins would be put through the system, representing 7½ days of activity. Only with such a test will it be possible to observe any degradation of performance under controlled conditions.

Some typical problems identified during soak tests are listed below:

 

  • Serious memory leaks that would eventually result in a memory crisis.
  • Failure to close connections between tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system.
  • Failure to close database cursors under some conditions, which would eventually result in the entire system stalling.
  • Gradual degradation of response time of some functions as internal data structures become less efficient during a long test.

Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of a soak test. It is also important to monitor the internal memory usage of facilities such as Java Virtual Machines, if applicable.

Long Session Soak Testing

When an application is used for long periods of time each day, the above approach should be modified, because the soak test driver is not logins and transactions per day, but the number of transactions processed per active user during each day.

This type of situation occurs in internal systems, such as ERP and CRM systems, where users login and stay logged in for many hours, executing a number of business transactions during that time.  A soak test for such a system should emulate multiple days of activity in a compacted time-frame rather than just pump multiple days worth of transactions through the system.

Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed.  VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time.

Test Duration

The duration of most soak tests is often determined by the available time in the test lab. There are many applications, however, that require extremely long soak tests. Any application that must run uninterrupted for extended periods of time may need a soak test to cover all of the activity for a period of time agreed to by the stakeholders, such as a month. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test.

A classic example of a system that requires extensive soak testing is an air traffic control system.  A soak test for such a system may have a multi-week or even multi-month duration.

 

Failover Tests

Failover tests verify redundancy mechanisms while the system is under load. This is in contrast to load tests, which are conducted under anticipated load with no component failure during the course of a test.

For example, in a web environment, failover testing determines what will happen if multiple web servers are being used under peak anticipated load, and one of them dies. 

Does the load balancer react quickly enough?

Can the other web servers handle the sudden dumping of extra load? 

Failover testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage.  It also provides a baseline of failover capability so that a 'sick' server can be shutdown with confidence, in the knowledge that the remaining infrastructure will cope with the surge of failover load.

Explanatory Diagrams:

The following is a configuration where failover testing would be required.

Diagram: Example failover configuration for a web system

This is just one of many failover configurations.  Some failover configurations can be quite complex, especially when there are redundant sites as well as redundant equipment and communications lines. 

In this type of configuration, when one of the application servers goes down, the two web servers that were configured to communicate with the failed application server cannot take load from the load balancer, and all of the load must be passed to the remaining two web servers. See the diagram below:

Diagram: web system after failover of application server

When such a failover event occurs, the web servers are under substantial stress, as they need to quickly accommodate the failed-over load, which will probably result in doubling the number of HTTP connections as well as application server connections in a very short amount of time. The remaining application server will also be subjected to a severe increase in load and the overheads associated with catering for the increased load.

It is crucial to the design of any meaningful failover testing that the failover design is understood, so that the implications of a failover event while under load can be scrutinized.

Fail-back Testing:

After verifying that a system can sustain a component outage, it is also important to verify that when the component is back up, it is available to take load again, and that it can sustain the influx of activity when it comes back online.

 

Stress Tests

Stress tests determine the load under which a system fails, and how it fails. This is in contrast to load testing, which attempts to simulate anticipated load. It is important to know in advance whether a 'stress' situation will result in a catastrophic system failure, or if everything just "goes really slow". There are various varieties of stress tests, including spike, stepped, and gradual ramp-up tests. Catastrophic failures require restarting various pieces of infrastructure and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses. If a major performance bottleneck is reached, then system performance will usually degrade to a point that is unsatisfactory, but performance should return to normal when the excessive load is removed.

Before conducting a Stress Test, it is usually advisable to conduct targeted infrastructure tests on each of the key components in the system.   A variation on targeted infrastructure tests would be to execute each one as a mini stress test.

The diagram below shows an unexpectedly high amount of demand on a typical web system.  Stress situations are not expected under normal circumstances.  

Diagram: Stress on system making it a candidate for Stress Testing

The following table lists possible situations for a variety of applications where stress situations may occur.

Type of Application – Circumstances that could give rise to stress levels of activity:

  • Online Banking – After an outage, when many clients have been waiting for access to the application to do their banking transactions.
  • Marketing / Sales Application – A very successful advertising campaign, or a substantial error in an advertising campaign that understates pricing details.
  • Various applications – Unexpected publicity, for example in a news article in a national online newspaper.

Focus of stress test.

In a stress event, it is most likely that many more connections will be requested per minute than under normal levels of expected peak activity.  In many stress situations, the actions of each connected user will not be typical of actions  observed under normal operating conditions.  This is partly due to the slow response and partly due to the root cause of the stress event. 

Let's take the example of a large holiday resort web site. Normal activity will be characterized by browsing, room searches, and bookings. If a national online news service posted a sensational article about the resort and included a URL in the article, then the site may be subjected to a huge number of hits, but most of the visits would probably be a quick browse. It is unlikely that many of the additional visitors would search for rooms, and it would be even less likely that they would make bookings. However, if instead of a news article a national newspaper advertisement erroneously understated the price of accommodation, then there may well be an influx of visitors who clamour to book a room, only to find that the price does not match their expectations.

In both of the above situations, the normal traffic would be increased with traffic of a different usage profile.  So a stress test design would incorporate a Load Test as well as additional virtual users running a special series of 'stress' navigations and transactions. 

For the sake of simplicity, one can just increase the number of users using the business processes and functions coded in the Load Test. However, one must then keep in mind that a system failure with that type of activity may be different to the type of failure that may occur if a special series of 'stress' navigations were utilized for stress testing.

Stress test execution.

Typically, a stress test starts with a load test, and then additional activity is gradually increased until something breaks. An alternative type of stress test is a load test with sudden bursts of additional activity. The sudden bursts of activity generate substantial activity as sessions and connections are established, whereas a gradual ramp-up in activity pushes various values past fixed system limitations.

Diagram: Two types of Stress Tests - gradual ramp-up and burst

Ideally, stress tests should incorporate two runs, one with burst type activity and the other with gradual ramp-up as per the diagram above, to ensure that the system under test will not fail catastrophically under excessive load.  System reliability under severe load should not be negotiable and stress testing will identify reliability issues that arise under severe levels of load.

An alternative, or supplemental stress test is commonly referred to as a spike test, where a single short burst of concurrent activity is applied to a system.  Such tests are typical of simulating extreme activity where a 'count-down' situation exists.  For example, a system that will not take orders for a new product until a particular date and time.  If demand is very strong, then many users will be poised to use the system the moment the count down ends, creating a spike of concurrent requests and load.

 

 

Load Tests

Load tests are end-to-end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time-critical transactions and business processes and ensure that they are within documented expectations (or Service Level Agreements - SLAs). Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates. An important variation of the load test is the Network Sensitivity Test, which incorporates WAN segments into a load test, as most applications are deployed beyond a single LAN.

Load Tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment.  If the project has a pilot in production then logs from the pilot can be used to generate ‘usage profiles’ that can be used as part of the testing process, and can even be used to ‘drive’ large portions of the Load Test.  

Load testing must be executed on “today’s” production size database, and optionally with a “projected” database.  If some database tables will be much larger in some months time, then Load testing should also be conducted against a projected database.  It is important that such tests are repeatable, and give the same results for identical runs.  They may need to be executed several times in the first year of wide scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed SLAs. 

 

What is the purpose of a Load Test?

The purpose of any load test should be clearly understood and documented.  A load test usually fits into one of the following categories:

  1. Quantification of risk.  - Determine, through formal testing, the likelihood that system performance will meet the formal stated performance expectations of stakeholders, such as response time requirements under given levels of load.  This is a traditional Quality Assurance (QA) type test.  Note that load testing does not mitigate risk directly, but through identification and quantification of risk, presents tuning opportunities and an impetus for remediation that will mitigate risk.
  2. Determination of minimum configuration.  - Determine, through formal testing, the minimum configuration that will allow the system to meet the formal stated performance expectations of stakeholders - so that extraneous hardware, software and the associated cost of ownership can be minimized.  This is a Business Technology Optimization (BTO) type test.

 

What functions or business processes should be tested?

The following criteria determine the business functions or processes to be included in a load test.

High frequency transactions: The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.

Mission critical transactions: The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact.

Read transactions: At least one read-only transaction should be included, so that performance of such transactions can be differentiated from other, more complex transactions.

Update transactions: At least one update transaction should be included, so that performance of such transactions can be differentiated from other transactions.

 

Example of Load Test Configuration for a web system

The following diagram shows how a thorough load test could be set up using LoadRunner. 

Comprehensive Load Testing Configuration

The important thing to understand in executing such a load test is that the load is generated at a protocol level by the load generators, which run scripts developed with the VUGen tool.  Transaction times derived from the VUGen scripts do not include processing time on the client PC, such as rendering (drawing parts of the screen) or execution of client-side scripts such as JavaScript.  The WinRunner PC(s) are utilized to measure end-user response times.  Most load tests would not employ a WinRunner PC to measure actual response times from the client perspective, but one is highly recommended where complex and variable processing is performed on the desktop after data has been delivered to the client.

The LoadRunner controller is capable of displaying real-time graphs of response times, as well as other measures such as CPU utilization on each of the components behind the firewall.  Internal measures from products such as Oracle and WebSphere are also available for monitoring during test execution.

After completion of a test, the Analysis engine can generate a number of graphs and correlations to help locate any performance bottlenecks. 

 

Simplified Load Test Configuration for a web system

Simplified Load Testing Configuration

In this simplified load test, the controller communicates directly with a load generator, which in turn communicates directly with the load balancer.  No WinRunner PC is utilized to measure actual user experience.  The collection of statistics from the various components is simplified, as there is no firewall between the controller and the web components being measured.

Reporting on Response Time at various levels of load.

Expected output from a load test often includes a series of response time measures at various levels of load, e.g. 500 users, 750 users and 1,000 users.  It is important, when determining the response time at any particular level of load, that the system has run in a stable manner for a significant amount of time before measurements are taken.

For example, a ramp-up to 500 users may take ten minutes, and another ten minutes may be required to let the system activity stabilize.  Taking measurements over the next ten minutes would then give a meaningful result.  The next measurement can be taken after ramping up to the next level, waiting a further ten minutes for stabilization and then measuring for ten minutes, and so on for each level of load requiring detailed response time measures.
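As a sketch of this measurement approach, the following Python fragment (illustrative only; the window boundaries and sample timings are hypothetical) discards samples taken during ramp-up and stabilization and reports statistics only for the stable measurement window.

    # Report response times only for the stable measurement window at a given
    # load level, ignoring ramp-up and stabilization periods.
    # The window boundaries and sample data below are hypothetical.

    def window_statistics(samples, window_start, window_end):
        """samples: list of (elapsed_seconds, response_time_seconds) tuples."""
        in_window = [rt for t, rt in samples if window_start <= t < window_end]
        if not in_window:
            return None
        return {
            "count": len(in_window),
            "average": sum(in_window) / len(in_window),
            "worst": max(in_window),
        }

    # 10 minute ramp-up, 10 minute stabilization, then a 10 minute measurement window.
    samples = [(1150, 1.8), (1300, 2.1), (1450, 1.9), (1700, 2.4)]
    print(window_statistics(samples, window_start=1200, window_end=1800))

The first sample falls inside the stabilization period and is excluded; only the samples taken between the 20 and 30 minute marks contribute to the reported statistics.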

 

 

 

 

Targeted Infrastructure Tests

Targeted Infrastructure Tests are isolated tests of each layer and/or component in an end to end application configuration.  They cover the communications infrastructure, load balancers, web servers, application servers, crypto cards, Citrix servers, database and so on, allowing identification of any performance issue that would fundamentally limit the overall ability of the system to deliver at a given performance level.

Each test can be quite simple.  For example, a test ensuring that 500 concurrent (idle) sessions can be maintained by the web servers and related equipment should be executed prior to a full 500-user end to end performance test, as a configuration file somewhere in the system may limit the number of users to fewer than 500.  It is much easier to identify such a configuration issue in a Targeted Infrastructure Test than in a full end to end test.

The following diagram shows a simple conceptual decomposition of load to four different components in a typical web system.

Targeted infrastructure testing separately generates load on each component, and measures the response of each component under load.  The following diagram shows four different tests that could be conducted to simulate the load represented in the above diagram.

Different infrastructure tests require different protocols.  For example, VUGen™ supports a number of database protocols, such as DB2 CLI, Informix, MS SQL Server, Oracle and Sybase. 

Performance Tests

Performance Tests are tests that determine the end to end timing (benchmarking) of various time-critical business processes and transactions while the system is under low load, but with a production-sized database.  This sets the 'best possible' performance expectation under a given configuration of infrastructure.  It also highlights, very early in the testing process, whether changes need to be made before load testing is undertaken.  For example, a customer search may take 15 seconds in a full-sized database if indexes have not been applied correctly, or if an SQL 'hint' has been incorporated into a statement that was optimized against a much smaller database.  Such performance testing would highlight the slow customer search transaction, which could be remediated prior to a full end to end load test.

It is 'best practice' to develop performance tests with an automated tool, such as WinRunner, so that response times from a user perspective can be measured in a repeatable manner with a high degree of precision.  The same test scripts can later be re-used in a load test and the results can be compared back to the original performance tests.

Repeatability

A key indicator of the quality of a performance test is repeatability.  Re-executing a performance test multiple times should give the same set of results each time.  If the results vary between identical runs, then differences observed after a change to the application, configuration or environment cannot reliably be attributed to that change.
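One simple, illustrative way to quantify repeatability is to compare the spread of results across identical runs.  The Python sketch below uses hypothetical run data and an arbitrary threshold; it is not part of any testing product.

    # Compare several runs of the same performance test and flag poor
    # repeatability.  The run data and threshold are hypothetical.

    def coefficient_of_variation(values):
        mean = sum(values) / len(values)
        variance = sum((v - mean) ** 2 for v in values) / len(values)
        return (variance ** 0.5) / mean

    # Average response time (seconds) for the same transaction across three runs.
    runs = [2.10, 2.15, 2.08]
    cv = coefficient_of_variation(runs)
    print("Repeatable" if cv < 0.05 else "Investigate run-to-run variation", round(cv, 3))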

Performance Tests Precede Load Tests

The best time to execute performance tests is at the earliest opportunity after the content of a detailed load test plan has been determined.  Developing performance test scripts at such an early stage provides an opportunity to identify and remediate serious performance problems, and to reset expectations, before load testing commences.

For example, management expectations of response time for a new web system that replaces a block-mode terminal application are often articulated as 'sub-second'.  However, a single screen in the web system may perform the business logic of several legacy transactions and may take two seconds.  Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order.  Performance tests provide a means for this education.

Another key benefit of performance testing early in the load testing process is the opportunity to fix serious performance problems before even commencing load testing. 

A common example is one or more missing indexes.  When performance testing of a "customer search" screen yields response times of more than ten seconds, there may well be a missing index, or poorly constructed SQL statement.  By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly.

Performance problems that relate to size of data transmissions also surface in performance tests when low bandwidth connections are used.  For example, some data, such as images and "terms and conditions" text are not optimized for transmission over slow links. 

Pre-requisites for Performance Testing

A performance test is not valid until the data in the system under test is realistic and the software and configuration are production-like.  The following lists the pre-requisites for valid performance testing, along with caveats on testing that can be conducted before each pre-requisite is satisfied:

Pre-requisite: Production Like Environment
Comment: Performance tests need to be executed on the same specification of equipment as production if the results are to have integrity.
Caveat: Lightweight transactions that do not require significant processing can be tested, but only substantial deviations from expected transaction response times should be reported.  Low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can also be conducted.

Pre-requisite: Production Like Configuration
Comment: The configuration of each component needs to be production like, for example the database configuration and the operating system configuration.
Caveat: While system configuration will have less impact on performance testing than on load testing, only substantial deviations from expected transaction response times should be reported.

Pre-requisite: Production Like Version
Comment: The version of software to be tested should closely resemble the version to be used in production.
Caveat: Only major performance problems, such as missing indexes and excessive communications, should be reported when testing a version substantially different from the proposed production version.

Pre-requisite: Production Like Access
Comment: If clients will access the system over a WAN, dial-up modems, DSL, ISDN, etc., then testing should be conducted using each communication access method.  See Network Sensitivity Tests for more information on testing WAN access.
Caveat: Only tests using production like access are valid.

Pre-requisite: Production Like Data
Comment: All relevant tables in the database need to be populated with a production like quantity and a realistic mix of data.  For example, having one million customers, 999,997 of which have the name "John Smith", would produce some very unrealistic responses to customer search transactions.
Caveat: Low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can still be conducted.

Documenting Response Time Expectations.

Rather than simply stating that all transactions must be 'sub-second', a more comprehensive specification for response time needs to be defined and agreed with the relevant stakeholders.

One suggestion is to state an average and a 90th percentile response time for each group of transactions that are time critical.  In a set of 100 values sorted from best to worst, the 90th percentile is simply the 90th value in the list.
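As an illustration of this suggestion, the short Python sketch below computes an average and a 90th percentile for a hypothetical set of response times, using the simple sorted-list definition of the percentile given above.

    # Response-time statistics as suggested above: an average and a 90th
    # percentile per transaction group.  The sample values are hypothetical.

    def percentile_90(response_times):
        """Sort from best to worst and take the value 90% of the way down the
        list, matching the simple definition used in the text."""
        ordered = sorted(response_times)
        index = max(0, int(len(ordered) * 0.9) - 1)
        return ordered[index]

    def average(response_times):
        return sum(response_times) / len(response_times)

    times = [1.2, 1.4, 1.1, 2.8, 1.3, 1.5, 1.2, 3.9, 1.6, 1.4]
    print("average =", round(average(times), 2), "s")
    print("90th percentile =", percentile_90(times), "s")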


Executing Performance Tests.

Performance testing involves executing the same test case multiple times, with data variations for each execution, and then collating the response times and computing response time statistics to compare against the formal expectations.  Performance is often different when the data used in the test case is different, as different numbers of rows are processed in the database, different processing and validation come into play, and so on.

By executing a test case many times with different data, a statistical measure of response time can be computed that can be directly compared against a formal stated expectation.
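The following Python sketch illustrates the idea; run_search() is a hypothetical placeholder for the transaction being measured, and the data variations are invented for the example.  In practice the timings would come from the test tool rather than from the script itself.

    # Execute the same test case once per data variation, collect the response
    # times, and hand them on for statistical analysis.
    import time

    def run_search(customer_name):
        """Hypothetical stand-in for the business transaction being timed."""
        time.sleep(0.01)  # placeholder for real application work

    def execute_with_variations(data_variations):
        results = []
        for value in data_variations:
            start = time.perf_counter()
            run_search(value)
            results.append(time.perf_counter() - start)
        return results

    timings = execute_with_variations(["Smith", "Nguyen", "Garcia", "O'Brien"])
    print("samples:", [round(t, 3) for t in timings])

The resulting list of samples can then be summarised with the average and 90th percentile measures described earlier.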

 

Network Sensitivity Tests

Network sensitivity tests are variations on load tests and performance tests that focus on Wide Area Network (WAN) limitations and network activity (e.g. traffic, latency, error rates).  Network sensitivity tests can be used to predict the impact of a given WAN segment or traffic profile on various applications that are bandwidth dependent.  Network issues often arise at low levels of concurrency over low bandwidth WAN segments.  Very 'chatty' applications can be more prone to response time degradation under certain conditions than other applications that actually use more bandwidth.  For example, some applications may degrade to unacceptable levels of response time when a certain pattern of network traffic uses 50% of available bandwidth, while other applications are virtually unchanged in response time even with 85% of available bandwidth consumed elsewhere.

This is a particularly important test for deployment of a time critical application over a WAN.

Also, some front end systems such as web servers, need to work much harder with 'dirty' communications compared with the clean communications encountered on a high speed LAN in an isolated load and performance testing environment.

Why execute Network Sensitivity Tests?

The three principal reasons for executing network sensitivity tests are as follows:

  * To determine the impact on response time of a WAN link (a variation of a performance test).

  * To determine the capacity of a system based on a given WAN link (a variation of a load test).

  * To determine the impact on the system under test of 'dirty' communications load (a variation of a load test).

Execution of performance and load tests for analysis of network sensitivity requires the test system configuration to emulate a WAN.  Once a WAN link has been configured, the performance and load tests conducted become Network Sensitivity Tests.

There are two ways of configuring such tests.

  1. Use a simulated WAN and inject appropriate background traffic.

This can be achieved by putting back to back routers between a load generator and the system under test.  The routers can be configured to allow the required level of bandwidth, and instead of connecting to a real WAN, they connect directly through to each other.

Diagram of simple back to back router setup to conduct bandwidth testing.

 

When back to back routers are configured as part of a test, they basically limit the bandwidth.  If the test is to be more realistic, then additional traffic needs to be applied to the routers.

 

This can be achieved by a web server at one end of the link serving pages and another load generator generating requests.  It is important that the mix of traffic is realistic.  For example, a few continuous file transfers may impact response time in a different way to a large number of small transmissions.

Diagram of more realistic back to back router setup to conduct bandwidth testing and network sensitivity testing.

 

By forcing extra traffic over the simulated WAN link, the latency will increase and some packet loss may even occur.  While this is much more realistic than testing over a high speed LAN, it does not take into account many features of a congested WAN, such as out-of-sequence packets.

 

  2. Use the WAN emulation facility within LoadRunner.

The WAN emulation facility within LoadRunner supports a variety of WAN scenarios.  Each load generator can be assigned a number of WAN emulation parameters, such as error rates and latency.  WAN parameters can be set individually, or WAN link types can be selected from a list of pre-set configurations.  For detailed information on WAN emulation within LoadRunner follow this link - mercuryinteractive.com/products/LoadRunner/wan_emulation.html.

 

It is important to ensure that measured response times incorporate the impact of WAN effects both for an individual session, as part of a performance test, and under load, as part of a load test, because a system under WAN-affected load may work much harder than a system doing the same actions over a clean communications link.

Where is the WAN?

Another key consideration in network sensitivity tests is the logical location of the WAN segment.  A WAN segment is often between a client application and its server.  Some application configurations may have a WAN segment to a remote service that is accessed by an application server.  To execute a load test that determines the impact of such a WAN segment, or the point at which the WAN link saturates and becomes a bottleneck, one must test with a real WAN link or a back to back router setup, as described above.  As the link becomes saturated, response times for transactions that utilize the WAN link will degrade.

Response Time Calculation Example.

A simplified formula for predicting response time is as follows:

Response Time = Transmission Time + Delays + Client Processing Time + Server Processing Time.

Where:

Transmission Time = Data to be transferred divided by Bandwidth.

Delays = Number of Turns multiplied by the 'Round Trip' response time.

Client Processing Time = Time taken by the user's software to fulfil the request.

Server Processing Time = Time taken on the server computer to fulfil the request.

 

Note that this is a simplified model intended to demonstrate the impact of the various parameters; other factors, such as error rates and lost packet rates, are not included.

Simple Response Time Calculator / Model

The inputs to the model are the data transfer for the transaction (KB), the number of turns (or resources on a web page, e.g. gifs), the effective bandwidth (Kbps), the round trip time (ms), the server processing time (ms) and the client processing time (ms); the output is the estimated response time in seconds.
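The simplified model can be expressed directly in a few lines of code.  The Python sketch below is illustrative only; the example input values are hypothetical, and error rates and packet loss are deliberately ignored, as noted above.

    # The simplified response time model described above.
    # Example input values are hypothetical.

    def estimated_response_time(data_kb, turns, bandwidth_kbps, round_trip_ms,
                                server_ms, client_ms):
        transmission = (data_kb * 8) / bandwidth_kbps       # seconds to push the data
        delays = turns * (round_trip_ms / 1000.0)           # seconds spent on round trips
        processing = (server_ms + client_ms) / 1000.0       # seconds of processing
        return transmission + delays + processing

    # e.g. a 60 KB page with 10 resources over a 256 Kbps link with 100 ms round trips
    print(round(estimated_response_time(60, 10, 256, 100, 500, 200), 2), "seconds")

With these example values the model predicts roughly 3.6 seconds, most of which is transmission time on the slow link.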

If you run ping from your command line, first with a small number of bytes and then with a moderate number of bytes, the results give the actual values of latency and effective bandwidth from your location to the site you pinged, which can then be used in the model above.

 

For example:

  * Average round trip time for a small number of bytes (e.g. 48): ping -l 48 www.merc-int.com.au

  * Average round trip time for a moderate number of bytes (e.g. 2048): ping -l 2048 www.merc-int.com.au
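The two ping measurements can be converted into rough estimates: the small-packet round trip approximates the latency, and the extra time taken by the larger packet (whose payload crosses the link twice) approximates the time needed to transmit the extra bytes.  The Python sketch below is illustrative only, and the timings are hypothetical.

    # Estimate latency and effective bandwidth from two ping measurements.
    # The byte counts and round trip times below are hypothetical.

    def estimate_link(small_bytes, small_ms, large_bytes, large_ms):
        latency_ms = small_ms  # a small packet's round trip is dominated by latency
        # The extra payload crosses the link twice (echo request and echo reply).
        extra_bits = (large_bytes - small_bytes) * 8 * 2
        extra_seconds = (large_ms - small_ms) / 1000.0
        bandwidth_kbps = (extra_bits / extra_seconds) / 1000.0
        return latency_ms, bandwidth_kbps

    # e.g. ping -l 48 averaged about 40 ms and ping -l 2048 averaged about 95 ms
    latency, bandwidth = estimate_link(48, 40, 2048, 95)
    print(round(latency), "ms latency, approximately", round(bandwidth), "Kbps")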

 

A final word on bandwidth congestion.

Care should be taken when considering the congestion of an existing network link and attempting to replicate that link in a test environment.  Take the example of a site that has four staff.  If one of those staff members spent all day downloading files from the web, using up all of the bandwidth, then analysis would show a link with high utilization.  If, however, three staff spent all day downloading files, the line utilization would be much the same, but the bandwidth available to the remaining staff member would be greatly reduced compared with the first scenario, in which only one person is downloading files.

Determining the effective available bandwidth takes this effect of excessive bandwidth demand into account, and the effective figure should be used in preference to the 'stated' bandwidth.

 

Volume Tests

Volume Tests are often most appropriate to Messaging, Batch and Conversion processing type situations.  In a Volume Test, there is often no such measure as Response time.  Instead, there is usually a concept of Throughput. 

A key to effective volume testing is the identification of the relevant capacity drivers.  A capacity driver is something that directly impacts on the total processing capacity.  For a messaging system, a capacity driver may well be the size of messages being processed. 

Volume Testing of Messaging Systems

Most messaging systems do not interrogate the body of the messages they are processing, so varying the content of the test messages may not impact the total message throughput capacity, but significantly changing the size of the messages may have a significant effect.  However, the message header may include indicators that have a very significant impact on processing efficiency.  For example, a flag saying that a message need not be delivered under certain circumstances is much easier to deal with than a flag saying that the message must be held for delivery for as long as necessary and must not be lost.  In the former case, the message may be held in memory; in the latter case, the message must be physically written to disk multiple times (a normal disk write plus a write to a journal mechanism of some sort, plus possible mirroring writes and remote failover system writes).

Before conducting a meaningful test on a messaging system, the following must be known:

  * The capacity drivers for the messages (as discussed above).

  * The peak rate of messages that need to be processed, grouped by capacity driver.

  * The duration of peak message activity that needs to be replicated.

  * The required message processing rates.

A test can then be designed to measure the throughput of a messaging system as well as the internal messaging system metrics while that throughput rate is being processed.  Such measures would typically include CPU utilization and disk activity.
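As a sketch of such a measurement, the Python fragment below counts messages processed per second over a fixed period, grouped by a capacity driver such as message size.  process_message() is a hypothetical stand-in for handing a message to the system under test, and the message mix and duration are invented for the example.

    # Measure message throughput over a fixed period, grouped by a capacity
    # driver (message size class).  process_message() is a hypothetical stub.
    import time
    from collections import defaultdict

    def process_message(message):
        """Placeholder for handing a message to the system under test."""
        time.sleep(0.001)

    def measure_throughput(messages, duration_seconds=5):
        counts = defaultdict(int)
        start = time.perf_counter()
        for size_class, payload in messages:
            if time.perf_counter() - start >= duration_seconds:
                break
            process_message(payload)
            counts[size_class] += 1
        elapsed = time.perf_counter() - start
        return {size: round(count / elapsed, 1) for size, count in counts.items()}

    messages = [("small", "x" * 100), ("large", "x" * 10000)] * 5000
    print("messages per second by size class:", measure_throughput(messages))

In a real volume test the duration would match or exceed the expected production peak period, as discussed below, and internal system metrics (CPU, disk) would be captured alongside the throughput figures.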

It is important that a test be run, at peak load, for a period of time equal to or greater than the expected production duration of peak load.  To run the test for less time would be like trying to test a freeway with peak-hour vehicular traffic but limiting the test to five minutes: the traffic would be absorbed into the system easily, and you would not be able to determine a realistic forecast of the freeway's peak-hour capacity.  You would intuitively know that a reasonable test of a freeway system must include the entire 'morning peak' and 'evening peak' traffic profiles, as the two peaks are very different (morning traffic generally converges on a city, whereas evening traffic disperses into the suburbs).

Volume Testing of Batch Processing Systems

Capacity drivers in batch processing systems are also critical as certain record types may require significant CPU processing, while other record types may invoke substantial database and disk activity.  Some batch processes also contain substantial aggregation processing, and the mix of transactions can significantly impact the processing requirements of the aggregation phase. 

In addition to the contents of any batch file, the total amount of processing effort may also depend on the size and makeup of the database that the batch process interacts with.  Also, some details in the database may be used to validate batch records, so the test database must 'match' test batch files.

Before conducting a meaningful test on a batch system, the following must be known:

  * The capacity drivers for the batch records (as discussed above).

  * The mix of batch records to be processed, grouped by capacity driver.

  * Peak expected batch sizes (check end of month, quarter and year batch sizes).

  * The similarity of the production database and the test database.

  * Performance requirements (e.g. records per second).

Batch runs can be analysed and the capacity drivers identified, so that large batches can be generated to validate processing within batch windows.  Volume tests are also executed to ensure that the anticipated numbers of transactions can be processed and that they satisfy the stated performance requirements.

Sociability (sensitivity) Tests

Sensitivity analysis testing can determine the impact of activity in one system on another, related system.  Such testing involves a mathematical approach to determine the impact that one system will have on the other.  For example, web-enabling a customer 'order status' facility may impact the performance of telemarketing screens that interrogate the same tables in the same database.  The issue with web enabling is that it can be more successful than anticipated and can result in many more enquiries than originally envisioned, which loads the IT systems with more work than had been planned.

Tuning Cycle Tests

A series of test cycles can be executed with a primary purpose of identifying tuning opportunities.  Tests can be refined and re-targeted 'on the fly' to allow technology support staff to make configuration changes so that the impact of those changes can be immediately measured.

Protocol Tests

Protocol tests involve the mechanisms used in an application, rather than the application itself.  For example, a protocol test of a web server may involve a number of HTTP interactions that would typically occur if a web browser were to interact with the web server, but the test would not be conducted using a web browser.  LoadRunner is usually used to drive load into a system with VUGen at a protocol level, so that a small number of computers (load generators) can be used to simulate many thousands of users.
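To make the distinction concrete, the following Python sketch drives a web server purely at the HTTP protocol level, with no browser involved.  It is illustrative only, is not a VUGen script, and the URL is a placeholder.

    # Drive a web server at the HTTP protocol level, without a browser.
    # The URL is a placeholder for the system under test.
    import time
    import urllib.request

    def http_transaction(url):
        """Issue one HTTP GET and return its response time in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        for _ in range(3):
            print(round(http_transaction("http://example.com/"), 3), "seconds")

Because no rendering or client-side script execution takes place, the measured time reflects only the protocol-level exchange, which is exactly the limitation of protocol-level load generation noted earlier for the comprehensive load test configuration.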

Thick Client Application Tests

A Thick Client (also referred to as a fat client) is a purpose-built piece of software that has been developed to work as a client with a server.  It often has substantial business logic embedded within it, beyond the simple validation that can be achieved through a web browser.  A thick client is often very efficient in the amount of data transferred between it and its server, but it is also often sensitive to poor communications links.  Testing tools such as WinRunner can be used to drive a Thick Client, so that response time can be measured under a variety of circumstances within a testing regime.

Developing a load test based on thick client activity usually requires significantly more effort for the coding stage of testing, as VUGen must be used to simulate the protocol between the client and the server.  That protocol may be database connection based, COM/DCOM based,  a proprietary communications protocol or even a combination of protocols.

 

Thin Client Application Tests

An internet browser that is used to run an application is said to be a thin client.  But even thin clients can consume substantial amounts of CPU time on the computer on which they run.  This is particularly the case with complex web pages that utilize many recently introduced features to liven up the page.  Rendering a page after hitting a SUBMIT button may take several seconds, even though the server may have responded to the request in less than one second.  Testing tools such as WinRunner can be used to drive a Thin Client, so that response time can be measured from a user's perspective, rather than at a protocol level.