Solutions to Chapter One Questions
1. What is software testing? Explain the purpose of testing.
Ans.
Software testing is the process of executing a program with the intent of finding errors. It is used to ensure the correctness of a software product. Software testing is also done to add value to the software, so that its quality and reliability are raised.
The main purpose of software testing is to demonstrate that the software works according to its specification and that the performance requirements are fulfilled. The test results indicate the reliability and quality of the software being tested.
2. Explain the origin of defect distribution in a typical software development life cycle.
Ans.
In a software development life cycle, defects occur in every phase. However, most defects arise from an improper understanding of the product requirements, and the requirements keep changing throughout the software development life cycle. Therefore, the maximum number of defects originates in the requirements phase. A defect introduced in the requirements phase needs to be detected during that phase itself; otherwise, it leads to further defects in the design and coding phases.
3. Explain the importance of testing. What happens if a software program is not tested before deployment?
Ans.
Testing is important because it performs the following tasks:
- Detects errors in a software product
- Verifies that a software product conforms to its requirements
- Establishes confidence that a program or system does what it is supposed to do
- Evaluates an attribute or capability of a software product and determines that it meets the required results
If a software program is deployed without testing, there might be some bugs in the program, which are left undetected. These bugs can result in non-conformance of the software to its requirements.
FAQ
4. "Testing can show the presence of defects in software but cannot guarantee their absence." How can you minimize the occurrence of undetected defects in your software?
Ans:
The occurrence of undetected defects can be minimized by starting testing early in the development life cycle and by designing test cases that adequately exercise each aspect of the program logic.
5. What activities are performed in the testing phase of SDLC?
Ans:
The activities performed during the testing phase of SDLC are risk analysis, test planning, test design, test execution, defect tracking, and reporting.
6. What could be the responsibilities of a software tester?
Ans:
A software tester is responsible for the following activities during the testing process:
- Developing test cases and procedures
- Creating test data
- Reviewing analysis and design artifacts
- Executing tests
- Using automated tools for regression testing
- Preparing test documentation
- Tracking defects
- Reporting test results
7. How can you test World Wide Web sites?
Ans:
World Wide Web sites are client/server applications, with the Web server acting as the server and the browser as the client. While testing World Wide Web sites, the main concern is the interaction between the HTML pages, the applications running on the server side, and the Internet connections. You can use HTML testing, browser testing, or server testing to test World Wide Web sites.
8. What is the role of documentation in Quality Assurance?
Ans:
Documentation plays a very critical role in Quality Assurance. Proper documentation of standards and procedures is essential to assure quality. This is because the SQA activities of process monitoring, product evaluation, and auditing rely on documented standards and procedures to measure project compliance. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, defect reports, and user manuals should be documented for clarity and common understanding among all involved parties.
9. Who participates in testing?
Ans:
The people who participate in testing include the following:
- Customer
- End user
- Software developer
- Software tester
- Senior management
- Auditor
---------------------------------------------------------------------------------------------------
FAQ
10. What is unit testing?
Ans:
Unit testing involves testing each individual unit of software to detect errors in its code.
11. What is integration testing?
Ans:
Integration testing involves testing two or more previously tested and accepted units to illustrate that they can work together when combined into a single entity.
12. What is system testing?
Ans:
System testing is the process of testing a completely integrated system to verify that it meets specified requirements.
13. What is acceptance testing?
Ans:
Acceptance testing is the process in which actual users test a completed information system to determine whether it satisfies its acceptance criteria.
14. What is a test plan?
Ans:
The test plan is a document that describes the complete testing activity. The creation of a test plan is essential for effective testing and requires about one-third of the total testing effort. The test-planning phase also involves planning for cross-platform testing. Cross-platform testing involves testing the developed software on different platforms to verify that it works as expected on all the desired platforms. During the test-planning phase, you must decide the platforms on which the software is to be tested.
15. When should you start testing?
Ans:
Testing should be started as early as possible in the software development life cycle. The traditional "big bang" approach to testing, where all the testing effort is concentrated in the testing phase, when the development is complete, is more expensive than a continuous approach to testing. The cost of correcting a software defect increases in the later stages of the software development.
16. When should you stop testing?
Ans:
Testing is potentially endless. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. We cannot test until all the defects are detected and removed. At some point, we have to stop testing and ship the software.
Testing is a trade-off between budget, time, and quality. The following are a few common factors in deciding when to stop testing:
- The time allocated for testing is exhausted.
- Test cases are completed with a certain percentage passed.
- The test budget is depleted.
- The coverage of code, functions, and requirements reaches a specified point.
- The defect rate falls below a certain level.
- The beta or alpha testing period ends.
-----------------------------------------------------------------------------------------------------------
Solutions to Chapter Three Questions
17. Quality and reliability are related concepts, but are fundamentally different in a number of ways. Discuss them.
Ans.
Reliability is a quality measure that cannot be measured with absolute certainty, but requires statistical and probabilistic methods for its measurement. It is concerned with measuring the probability of occurrence of failures. Reliability is measured by evaluating the frequency of failures as well as their severity.
Software quality is conformance to requirements, which are either explicit, such as usability, or implicit, such as interoperability. Quality involves a number of factors, such as correctness, efficiency, portability, interoperability, and maintainability, apart from reliability.
18. Can a program be correct and still not be reliable? Explain.
Ans.
It is possible for a program to be correct but not reliable. A program is correct if it behaves according to its stated functional specification; however, correctness does not guarantee the non-occurrence of failures, because the specification itself may not capture everything the users actually require. Conversely, if the consequences of a software error are not serious, even incorrect software may be considered reliable.
19. Can a program be correct and still not exhibit good quality? Explain.
Ans.
A correct program does not necessarily exhibit good quality. For example, a program may be correct, but its logic may not utilize the system resources efficiently. As a result, the quality of the product may not be good. A good-quality program should utilize the system resources efficiently and produce its output in the minimum possible time. Good-quality software must also be delivered on time and within budget, and should be maintainable.
20. Explain in more detail, the review technique adopted in quality assurance.
Ans.
Various review techniques are used in quality assurance to review software, its documentation, and the processes used to produce it. The review process helps to check whether the project standards are followed and whether the software conforms to these standards. The conclusions at the end of the review process are recorded and passed to the document author for corrections. Apart from specifications, designs, and test plans, configuration management procedures and process standards can also be reviewed. Software quality assurance involves three types of reviews:
- Code inspections
- Walkthroughs
- Round robin reviews
Code inspections are formal reviews that are conducted explicitly to find errors in a program. This process helps to check the software deliverables for technical accuracy and consistency. It also verifies whether or not the software product conforms to applicable standards. The inspection involves a group of peers who first inspect the product individually and then get together in a formal meeting to discuss the defects found by each member and to identify more defects. The inspection team includes the author, a moderator, and reviewers. The moderator is responsible for planning and successfully executing the inspection process. The following activities need to be performed before an organization decides to introduce inspections into its software process:
- Prepare a checklist of likely errors.
- Define a policy stating that the inspection process is a part of verification and not personal appraisal.
- Invest in the training of inspection team leaders.
A walkthrough is an informal technique for analyzing the code. Its main purpose is to train the walkthrough attendees about the work product. A code walkthrough is conducted after the coding of the module is complete. The members of the development team select some test cases and simulate the execution of the code. In this process, the author describes the work product to all the members and gets feedback from them. The members also discuss the solutions to various problems detected.
FAQ
21. What is software quality assurance?
Ans:
Software quality assurance is defined as a planned and systematic approach to the evaluation of the quality of a software product. It evaluates the adherence to software product standards, processes, and procedures for software development.
22. What are the principal activities in software quality management? Explain.
Ans:
Software quality management involves three activities:
- Quality assurance: Establishes a structure of organizational standards and procedures that contribute to a high-quality product.
- Quality planning: Selects standards and procedures from the established structure and applies them to a particular software product.
- Quality control: Ensures that there is no deviation from the established quality plan.
23. Explain what do you mean by correctness of a product.
Ans:
A software product is said to be correct if all the requirements specified in the Software Requirement Specifications (SRS) document are implemented correctly.
24. What is ISO 9000 standard?
Ans:
ISO 9000 standard specifies the guidelines for maintaining a quality system. In an organization, the quality system applies to all the activities that relate to its products and services. This standard addresses aspects, such as responsibilities, procedures, and resources for implementing quality management.
25. Explain the need for a SQA plan.
Ans:
The quality assurance activities performed by the software engineering team and the SQA group are governed by the SQA plan. The SQA plan helps to identify the following issues:
- The evaluations that need to be performed.
- The audits and reviews to be conducted.
- The standards that are applicable to the product.
- The procedures to be used for error detection and correction.
- The documents to be produced by the SQA group.
---------------------------------------------------------------------------------------------------------
Solutions to Chapter Four Questions
26. Is code review relevant to software testing? Explain the process involved in a typical code review.
Ans.
Code reviews are relevant to software testing as they detect the defects in the code. Code reviews are performed after the defects found during code reading and static analysis are corrected. Code reviews enhance the reliability of the software and reduce the effort during testing.
Before the code review starts, the design documents and the code to be reviewed are distributed to the review team members. The review team includes the programmer, the designer, and the tester. The errors uncovered during the process are recorded and the process ends with an action plan. The programmer of the software product is responsible for correcting the discovered errors.
27. Explain the need for inspection and list the different types of code reviews.
Ans.
The inspection process is needed for various purposes, as described below:
- It helps to identify errors at an early stage.
- It helps in identifying the most error-prone sections of the program at an early stage of the software development cycle.
- Programmers receive feedback concerning their programming style and choice of algorithms and programming techniques.
- Other participants gain programming knowledge by being exposed to another programmer's errors and programming style.
The different types of code reviews are:
- Code inspections
- Walkthroughs
- Round robin reviews
28. Consider a program and perform a detailed review and list the review findings in detail.
Ans.
We will review a program that reads student data, such as the student name and roll number, stores the data in a file, and then reads the data back from the file.
struct student
{
    char name[20];
    int roll;
};

void main()
{
    struct student stud[10];
    fstream inputfile;
    char filename[10];
    int i, n;
    cout << "Enter file name";
    cin >> fname;
    cout << "\nNumber of student records to store:";
    cin >> n;
    cout << "Enter student details";
    for(int x = 0; x < n; x++)
    {
        cout << "Student name:";        cin >> stud[x].name;
        cout << "Student roll number:"; cin >> stud[x].roll;
    }
    inputfile.open(fname, ios::out);
    for(i = 0; i < n; i++)
        inputfile << stud[i].name << stud[i].roll << endl;
    infile.close();
    inputfile.open(fname, ios::out);    // Read student records from the file
    i = 0;
    while(!infile.eof())
    {
        infile >> stud[i].name >> stud[i].roll;
        ++i;
    }
    for(int j = 0; j < n; j++)
        cout << stud[j].name << stud[j].roll << endl;
    inputfile.close();
}
We formed an inspection team of four people to conduct a detailed review of the above program. The inspection team used a common checklist of errors for a detailed review of the program. The team searched for different types of errors in the program. The findings of the detailed review using various checks are listed as follows:
Check for Data-Declaration Errors
- Some of the variables used in the program have not been explicitly declared, and some variables that have been declared have not been used.
- The program declares a character array named filename to store the file name. But a different variable, fname, is used to input the file name from the user. The variable fname has not been declared in the program.
- The program has correctly declared an integer variable to input the number of student records to be stored in a file.
- The program declares a file type variable to perform file input-output operations.
Check for Data-Reference Errors
- The file close() function references a file variable, which has not been set in the program.
- The program declares an array to store ten student records, but no checks exist on the number of student records a user can input.
Check for Comparison Errors
This check detects whether there is any comparison between variables having inconsistent data types.
- The program correctly compares the expression in the while loop.
- The program correctly performs the Boolean check operation to detect the end of the file.
Check for Control-Flow Errors
- The for loops used to input student data, store the data in a file, and display the data stored in the file will terminate properly, provided a user inputs at most ten student records.
Check for File Input/Output Errors
- The program stores student records in a file and later reads the stored data. To write the student records, the file is correctly opened with the correct file-opening mode. However, after all the data is written, the file that was opened for writing is not closed (the close() call references a different, undeclared file variable). Also, the file name parameter used to open the file for the write operation is not declared in the program.
- To read the file data, the file has been opened with the wrong file-opening mode. Again, the file name parameter used to open the file for the read operation is not declared.
In this way, the inspection team performed a detailed inspection of the program.
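For reference, a possible corrected version of the program is sketched below. It is only one way of addressing the findings above, and it assumes standard C++ headers (iostream, fstream, iomanip) and an illustrative upper limit of ten records; names such as iofile and MAX_RECORDS are hypothetical.

#include <iostream>
#include <fstream>
#include <iomanip>
using namespace std;

struct student
{
    char name[20];
    int roll;
};

int main()
{
    const int MAX_RECORDS = 10;          // upper bound matching the array size
    student stud[MAX_RECORDS];
    fstream iofile;                      // single, declared file variable used throughout
    char fname[20];
    int n;

    cout << "Enter file name: ";
    cin >> setw(20) >> fname;
    cout << "Number of student records to store (1-" << MAX_RECORDS << "): ";
    cin >> n;
    if (n < 1 || n > MAX_RECORDS)        // bounds check that was missing in the reviewed program
    {
        cout << "Invalid record count" << endl;
        return 1;
    }
    cout << "Enter student details" << endl;
    for (int x = 0; x < n; x++)
    {
        cout << "Student name: ";        cin >> setw(20) >> stud[x].name;
        cout << "Student roll number: "; cin >> stud[x].roll;
    }

    iofile.open(fname, ios::out);        // open once for writing
    for (int i = 0; i < n; i++)
        iofile << stud[i].name << " " << stud[i].roll << endl;
    iofile.close();                      // close the file before reopening it

    iofile.open(fname, ios::in);         // reopen in read mode (the review found ios::out here)
    int count = 0;
    while (count < MAX_RECORDS && (iofile >> setw(20) >> stud[count].name >> stud[count].roll))
        ++count;
    iofile.close();

    for (int j = 0; j < count; j++)
        cout << stud[j].name << " " << stud[j].roll << endl;
    return 0;
}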
29. Explain the difference between a code walkthrough and an inspection.
Ans.
An inspection process is a formal meeting done by a group of peers. The group first inspects the product privately and then gets together in a meeting to discuss the problems detected by each member. Its primary purpose is to detect the errors in various stages of SDLC. The inspection process includes the author, reviewers, and a moderator. The moderator is responsible for planning and successfully executing the inspection process. Reviewers are not directly responsible for the development of the software, but are concerned with the product. Reviewers may include designers and testers. During the meeting, the moderator explains the objective of the review, defines the roles of different people, gives clarifications about the process, and distributes the inspection package to the reviewers. The inspection package consists of the documents to be inspected, additional documents that help in better understanding of the product, and checklists to be used.
A walkthrough is an informal technique for analyzing the code. Its main purpose is to train the walkthrough attendees about the work product. A code walkthrough is conducted after the coding of the module is complete. The members of the development team select some test cases and simulate the execution of the code. In this process, the author describes the work product to all the members and gets feedback from them. The members also discuss the solutions to various problems detected.
FAQ
30. What is the difference between a walkthrough process and a code inspection process?
Ans:
Code inspection is a formal method of inspecting the code by a group of people with the purpose of identifying the defects in it. Walkthroughs are informal meetings conducted by the author of the program with the intent of educating the participants about the work product. They also involve discussions for identifying alternative solutions to the problems uncovered.
31. How can you determine the quality of a software before executing it?
Ans:
Software quality can be determined by using static techniques, such as code reviews and walkthroughs before executing the code.
32. What are the various techniques used for developing test cases?
Ans:
Two approaches are used for developing test cases: black box testing and white box testing. Black box testing can use the equivalence partitioning method, boundary value analysis, or cause-effect graph analysis for designing test cases. White box testing involves control flow methods and data flow methods.
33. What are the various elements that are included in the inspection package for code review?
Ans:
The inspection package for code review includes:
- Program source code
- Significant portions of design or specification document
- Checklists to be used for review
- System constraints
34. What is the moderator's responsibility after the inspection meeting completes?
Ans:
After the inspection meeting completes, the moderator prepares a summary report of the meeting. The report lists the various errors uncovered during the meeting. The moderator also ensures that all the issues in the report are addressed. The moderator also decides whether or not to perform a re-review of the product.
------------------------------------------------------------------------------------------------------------
Solutions to Chapter Five Questions
35. What is black box testing? Explain.
Ans.
Black box testing focuses on the functional requirements of the software. In this testing method, the structure of the program is not considered. Test cases are designed only on the basis of the requirements or specifications of the program, and the internals of the modules are not considered for the selection of test cases. Black box testing is applied at later stages in the testing process. It is used to find incorrect or missing functions, interface errors, errors in data structures, performance errors, and initialization and termination errors. The techniques used for black box testing are:
- Equivalence partitioning
- Boundary value analysis
- Cause effect graphing techniques
36. What are the different techniques that are available to conduct black box testing?
Ans.
The different techniques available to conduct black box testing are:
- Equivalence partitioning
- Boundary value analysis
- Cause effect graphing
Equivalence class partitioning is a way of selecting test cases for black box testing. In this technique, the domain of all the inputs is divided into a set of equivalence classes. If any test in an equivalence class succeeds, then every test in that class will succeed. This means that you need to identify classes of test cases such that the success of one test case in a class implies the success of other test cases. The following points should be remembered while designing equivalence classes:
- If a range of values defines the input data values to a system, then one valid and two invalid equivalence classes are always defined.
- If the input data assumes values from a set of discrete members of some domain, then one equivalence class for valid input values and another equivalence class for invalid input values are always defined.
Boundary value analysis (BVA) leads to the selection of test cases at the boundaries of the different equivalence classes. It is often observed that the boundary points of the inputs are not tested properly, which leads to many errors. Guidelines for BVA are:
- If the input range is specified to be between a and b, test cases should be designed with the values a and b and with values just above and just below a and b.
- If the input consists of a number of values, test cases that use the minimum and maximum values should be designed. The values just above and just below the minimum and maximum values should also be tested.
The above two guidelines are also applied to output conditions.
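As a small, hypothetical illustration of both techniques, suppose a module accepts a marks value that must lie between 0 and 100; the routine and values below are illustrative only and not taken from the text.

#include <iostream>
using namespace std;

// Hypothetical validation routine: accepts marks only in the range [0, 100].
bool isValidMarks(int marks)
{
    return marks >= 0 && marks <= 100;
}

int main()
{
    // Equivalence partitioning: one representative from the valid class (50)
    // and one from each invalid class (-5 and 150).
    // Boundary value analysis: values at and just around the boundaries 0 and 100.
    int tests[] = { 50, -5, 150, -1, 0, 1, 99, 100, 101 };
    for (int t : tests)
        cout << t << " -> " << (isValidMarks(t) ? "valid" : "invalid") << endl;
    return 0;
}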
Cause effect graphing is a technique that helps to select combinations of input conditions in a systematic manner, so that the number of test cases does not become too large. The technique starts with the identification of the causes and effects of the system. A cause specifies an input condition and an effect is a distinct output condition. In cause effect graphing, you create a graph of important program objects, such as modules or collections of programming language statements, and describe the relationships between them. A series of tests is then conducted on each object of the graph so that each object and the relationships between objects are verified and errors are uncovered. In the graph, the nodes represent objects and the links represent the relationships between objects.
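As a minimal, hypothetical sketch of how causes combine into effects, suppose cause C1 is "the command code is valid" and cause C2 is "the user has update permission", with effect E1 "the record is updated" and effect E2 "an error message is issued"; test cases are then chosen so that every cause/effect combination in the graph is exercised.

#include <iostream>
using namespace std;

int main()
{
    // Each combination of the two causes forms one test case.
    for (int c1 = 0; c1 <= 1; ++c1)
        for (int c2 = 0; c2 <= 1; ++c2)
        {
            bool e1 = c1 && c2;        // effect E1: record is updated
            bool e2 = !(c1 && c2);     // effect E2: error message is issued
            cout << "C1=" << c1 << " C2=" << c2
                 << " -> E1=" << e1 << " E2=" << e2 << endl;
        }
    return 0;
}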
37. Explain different methods available in white box testing.
Ans.
The different methods available in white box testing are:
- Basis path testing: Enables the test case designer to derive a logical complexity measure of a procedural design. This measure is used to define a basis set of execution paths. The test cases that exercise the basis set execute every statement in the program at least once (a small worked example follows this list). To derive the basis set, you need to take the following steps:
· Draw a flow graph corresponding to the design or the code.
· Calculate the cyclomatic complexity of the resultant flow graph.
· Determine a basis set of linearly independent paths.
· Prepare test cases that force execution of each path in the basis set.
- Condition testing: Verifies the logical conditions contained in a program module. The possible elements in a condition include a Boolean operator, a Boolean variable, a relational operator, and an arithmetic expression. This method not only detects errors in the conditions of a program, but also detects other types of errors. The condition testing strategies include branch testing and domain testing. In branch testing, each decision in the program needs to be evaluated to both true and false at least once during testing. Domain testing requires three or four tests to be derived for a relational expression of the form:
E1 <relational operator> E2
You require three test cases for this expression: one with E1 greater than E2, one with E1 equal to E2, and one with E1 less than E2.
- Data flow testing: Enables the selection of test paths of a program according to the locations and uses of variables in the program.
- Loop testing: Focuses on the validity of loop constructs. The different types of loops include simple loops, concatenated loops, nested loops, and unstructured loops. Different sets of tests are applied to each type of loop.
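As the small worked example mentioned under basis path testing (a hypothetical routine, not from the text), consider deriving a basis set for a function with two decisions:

// Hypothetical routine used only to illustrate basis path testing.
int classify(int marks)
{
    if (marks >= 60)             // decision 1
        return 1;                // first division
    else if (marks >= 45)        // decision 2
        return 2;                // second division
    else
        return 3;                // below second division
}
// Cyclomatic complexity V(G) = number of decisions + 1 = 2 + 1 = 3, so the
// basis set contains three linearly independent paths. Test cases that force
// each path: marks = 75 (decision 1 true), marks = 50 (decision 1 false,
// decision 2 true), and marks = 30 (both decisions false).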
FAQ
38. Can a person other than the software engineer design black box tests?
Ans:
Black box tests are architecture independent and require no knowledge of the underlying system. It is not concerned with how the output is produced, but it checks only whether the actual output matches the expected output or not. As a result, one need not be a software engineer to design black box tests.
39. What is a test case? How is it useful?
Ans:
A test case is a document that describes an input, action, and the expected output to determine if the program module is working correctly or not. Test case design helps to discover problems in the requirements or design of an application, as it requires a complete understanding of the operations performed by the application.
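For example, a hypothetical test case for a login screen could be recorded as follows:
Test case ID: TC_LOGIN_01
Input: a registered user name and the correct password
Action: click the Login button
Expected output: the user's home page is displayed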
40. What is test data?
Ans:
The following points provide a generic overview of test data:
- The data developed in support of a specific test case is test data.
- Test data can either be manually generated or extracted from an existing resource, such as production data.
- Test data that is the result of a test execution can serve as input to a subsequent test.
- Another source of test data is recording user input using a capture or playback tool.
41. What is the difference between usability testing, recovery testing, and compatibility testing?
Ans:
Usability testing involves testing the ease with which a user learns about a product and uses it. Recovery testing involves verifying the system's ability to recover from varying degrees of failure. Compatibility testing involves testing whether the system is compatible with the other systems with which it has to communicate during the execution of a process.
-----------------------------------------------------------------------------------------------------------
Solutions to Chapter Six Questions
42. Explain the need for GUI testing and its complexity.
Ans.
GUIs are increasingly used in applications because they enable users to access the different services provided by the applications easily. GUIs need to be properly tested to verify that they fulfill their objectives.
There are no standard specifications based on which GUIs are designed. GUI designs are primarily guided by the user psychology that varies from application to application. It becomes difficult to understand the user psychology and prepare appropriate test methods to verify the correctness and appropriateness of GUIs. Therefore, GUI testing is a complex process.
43. List the guidelines required for a typical tester during GUI testing.
Ans.
The guidelines for GUI testing can be grouped according to the type of operation being tested:
- For windows:
· Check if the windows open properly based on the menu-based commands.
· Check if the window can be resized, moved, scrolled.
· Check if the window properly regenerates when it is overwritten and then recalled.
· Check if all the functions that relate to the window are available, when needed.
· Check if all the functions relating to the window are operational.
· Check if all relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons, and other controls are available and properly represented.
· Check if the active window is properly highlighted.
· Check if multiple or incorrect mouse picks within the window cause unexpected side effects.
· Check if audio and/or color prompts within the window appear according to the specification.
· Check if the window is properly closed.
- For pull-down menus and mouse operations:
· Check if the menu bar displayed is in the appropriate context.
· Check if the application menu bar displays system related features.
· Check if the pull-down operations work properly.
· Check if the breakaway menus, palettes, and tool bars work properly.
· Check if all menu functions and pull-down sub functions are properly listed.
· Check if all menu functions are properly addressable by the mouse.
· Check if the text typeface, size, and format are correct.
· Check if it is possible to invoke each menu function using its alternative text-based command.
· Check if the menu functions are highlighted (or grayed-out) based on the context of the current operations within a window.
· Check if each menu function performs as advertised.
· Check if the names of menu functions are self-explanatory.
· Check if the help available for each menu item is context sensitive.
· Check if the mouse operations are properly recognized throughout the interactive context.
· Check if the multiple clicks required are properly recognized in context.
· Check if a mouse with multiple buttons is properly recognized in context.
· Check if the cursor, processing indicator (e.g. an hour glass or clock), and pointer properly change as different operations are invoked.
- For data entry:
· Check if the alphanumeric data entry is provided properly.
· Check if the graphical modes of data entry (e.g., a slide bar) work properly.
· Check if the invalid data is properly recognized.
· Check if data input messages are intelligible.
· Check if basic standard validations on each data item are applied during data entry itself.
· Check whether, once all the data has been entered, correcting a specific data item requires the entire data to be entered again.
· Check if the mouse clicks are properly used.
· Check if the help buttons are available during data entry.
44. Select your own GUI based software system and test the GUI related functions by using the listed guidelines in this Chapter.
Ans.
To test the GUI related functions of a GUI based software system, we have selected a text editor application. The text editor application is a GUI based application that performs text-editing operations, such as file, edit, and search operations. The file operations include creating a new file, opening existing files, saving the files and exiting the application. The edit operations include undo, cut, copy, paste, delete, select all, time/date, word wrap and set font operations. The search operations include find and find next operations.
Listed below are the results of testing the text editor application for GUI related functions.
- Test for windows
1. A text editor screen is displayed on opening the text editor application. The menu bar contains the File, Edit, Search, and Help menus for the different types of text-editing operations.
2. The text editor screen can be resized using the maximize and minimize buttons.
3. Each menu in the menu bar lists appropriate sub menus.
4. The text editor screen is highlighted when selected for text editing operations.
5. The close button on the text editor screen closes the text editor application.
- Test for pull-down menus
1. The menu bar in the text editor system displays menus to perform text-editing operations, such as file, edit, and search operations.
2. Each menu in the menu bar has a pull-down menu to display the sub menus listed in that particular menu.
3. The sub menus are properly listed for each menu. For example, the File menu contains New, Open, Save, Save As and Exit sub menus.
4. All the menus and sub menus are addressable with a mouse. For example, the open dialog box appears on clicking the Open sub menu of the File menu.
5. Each menu is highlighted on being pointed by a mouse.
6. The names of the menus and sub menus are self-explanatory. For example, a user can easily make out that the New sub menu on the File menu opens a new file.
7. Every menu and sub menu in the text editor screen has a hot key to perform the related function without clicking a mouse.
- Test for data entry operations
1. The text editor screen facilitates entry of alphanumeric data.
2. Graphical data cannot be input in the text editor screen, and an appropriate error message is displayed specifying invalid data entry.
3. The text editor screen facilitates the removal of unwanted data entered by selecting the text to be deleted and clicking the Delete sub menu on the Edit menu. Similarly, repetitive text can be copied and pasted in multiple locations on the text screen using the Paste sub menu in the Edit menu.
FAQ
45. What are interim test reports?
Ans:
Interim test reports describe the status of testing. They are designed so that the test team can track progress against the test plan. Interim test reports are also important for the development team, as the test reports will identify defects that should be corrected.
46. What are the long-term benefits of a test report?
Ans:
The main long-term benefits of developing a test report are as follows:
- Problems in a software application can be traced if it functions improperly at the time of production. The relevant stakeholders of a software application can scan through a test report to trace the functions that have been correctly tested and those that may still contain defects. This can assist them in making appropriate decisions.
- Data in the test report can be used to analyze the rework process for making changes to prevent defects from occurring in the future. This can be done by accumulating the results of many test reports to identify the components of the rework process that are defect-prone.
47. Name some client/server testing tools.
Ans:
Some client/server testing tools are:
n Mercury Interactive
n Performance Awareness
n AutoTester Inc
48. What is one important feature, which the testing tools should possess?
Ans:
The testing tool you select must function on all of the operating environments that run in your organization. For example, AutoTester works on Windows 3.11, Windows 95, Windows NT, Unix, and OS/2. Segue Software's QA Partner runs on Windows 95, Windows NT, Windows 3.1, OS/2, Macintosh System 7.x, and more than a dozen flavors of Unix. Great Circle from Geodesic Systems works with Windows 3.11, Windows NT, and Windows 95, and will soon be released for OS/2 Warp 3.0 and Sun Solaris x86.
49. What are the various formats used for documenting tests?
Ans:
The various formats used for documenting tests are:
- Test Plan
- Test Cases
- Test Procedures or Test Scripts
- Test Results Documentation
- Test Summary Log
- Test Fault or Incident Log
- Test Summary or Exit Report
- Minutes from Test Exit Meeting
- Test Data
- Test Schedule
------------------------------------------------------------------------------------------------------------
Solutions to Chapter Seven Questions
50. What is the difference between verification and validation? Explain in your own words.
Ans.
In verification and validation, the main concern is the correctness of the product. Verification is the process that is used to determine whether or not the products of a given phase of software development fulfill the specifications established during the previous phase. Validation is the process that evaluates the software at the end of software development to ensure compliance with the software requirements. For high reliability of the software, you need to perform both verification and validation.
51. Explain unit test method with the help of your own example.
Ans.
In unit testing, different modules are tested against the specifications produced during design for the modules. The purpose of unit testing is to test the internal logic of the modules. The programmer of the module performs unit testing. The testing method focuses on testing the code. As a result, structural testing is used at this level. Consider the example of a system that displays the division of a student. The system is developed in many modules. Before you integrate these modules, it is necessary to perform tests on each module independently. Such tests are referred to as unit tests as each module or unit is being tested in the process.
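A minimal sketch of such a unit test, assuming a hypothetical getDivision() function in the module that maps a percentage to a division, might look like this:

#include <cassert>
#include <iostream>
using namespace std;

// Hypothetical unit under test: maps an overall percentage to a division.
int getDivision(int percentage)
{
    if (percentage >= 60) return 1;   // first division
    if (percentage >= 45) return 2;   // second division
    if (percentage >= 33) return 3;   // third division
    return 0;                         // fail
}

// Unit test driver: exercises the module's internal logic in isolation.
int main()
{
    assert(getDivision(75) == 1);
    assert(getDivision(50) == 2);
    assert(getDivision(40) == 3);
    assert(getDivision(20) == 0);
    assert(getDivision(60) == 1);     // boundary value
    cout << "All unit tests passed" << endl;
    return 0;
}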
52. Develop an integration testing strategy for any the system that you have implemented already. List the problems encountered during such process.
Ans.
Suppose you have developed an application for the payroll system of a company. The application is developed as many modules. You first develop the individual modules that calculate the employee's basic salary, DA, and number of leaves. After developing each module, you perform unit testing and then integrate the modules to obtain the module that calculates an employee's salary. Using bottom-up integration testing, the lower-level modules that compute the basic salary, DA, and number of leaves are tested first. Then, these tested modules are combined with the higher-level module that calculates an employee's salary. At any stage of testing, the lower-level modules already exist and have already been tested. Therefore, by using the bottom-up strategy, parts of the application are tested and errors are detected while development proceeds. It is advantageous to use this technique if major errors occur in the lower-level modules. The testing becomes complex when the system is made up of a large number of subsystems.
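A simplified sketch of such a bottom-up test driver is shown below; the module functions and figures are hypothetical and only illustrate how the already-tested lower-level modules are combined with the higher-level salary module.

#include <cassert>
#include <iostream>
using namespace std;

// Hypothetical lower-level modules, each already unit tested.
double basicSalary(int grade)          { return grade * 10000.0; }
double dearnessAllowance(double basic) { return 0.5 * basic; }
double leaveDeduction(int leaves)      { return leaves * 500.0; }

// Higher-level module that integrates the lower-level ones.
double netSalary(int grade, int leaves)
{
    double basic = basicSalary(grade);
    return basic + dearnessAllowance(basic) - leaveDeduction(leaves);
}

// Bottom-up integration test driver: the tested lower-level modules are
// combined with the higher-level module and the combination is exercised.
int main()
{
    assert(basicSalary(2) == 20000.0);                 // verified earlier in unit testing
    assert(netSalary(2, 2) == 20000.0 + 10000.0 - 1000.0);
    cout << "Integration tests passed" << endl;
    return 0;
}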
53. What is validation test? Explain.
Ans.
Validation is the process of examining the software and providing evidence of the achievement of the software requirements and specifications. The validation process provides documented evidence that the software meets its specifications and requirements consistently. To validate software, documented evidence is presented by examining all the phases of the software life cycle. Thus, the validation method is based on the software life cycle model and consists of the following phases:
- Requirements analysis
- Design and implementation
- Inspection and testing
- Installation and system acceptance test
- Maintenance
Examining, testing, and documenting all the above phases complete the validation process.
FAQ
54. What are the preparatory measures for creating a test strategy?
Ans:
A test strategy describes the method you can use to test a software product. A test strategy needs to be developed for all levels of testing. The testing team analyzes the requirements, identifies the test strategy, and performs a review of the plan. For creating a testing strategy, the information required includes:
- Test tools required for testing
- Description of the roles of various resources to be used for testing
- Testing methodology based on the known standards
- Functional requirements of the product
- System limitations
55. Which factors decide that the testing process should be concluded?
Ans:
The factors that decide the conclusion of the testing process are:
- Product release deadlines and testing deadlines are met
- Error rate reduces below a certain level
- Coverage of code and functionality reaches a specified point
- Test budget has been depleted
56. List the advantages and disadvantages of bottom up integration testing.
Ans:
Bottom up integration testing helps to test disjointed subsystems simultaneously. In this type of testing, stubs are not required, only test drivers are needed. A disadvantage of the bottom up strategy is that the testing becomes complex, when the system is made up of a large number of subsystems.
57. What are the various issues covered during the documentation of a testing process?
Ans:
The documentation generated at the end of the testing process is called test summary report. It includes a summary of the tests that are applied for various subsystems. It also specifies how many tests are applied to each subsystem, the number of tests that were successful, the number of tests that were unsuccessful, and the degree of deviation in unsuccessful tests.
58. What is the main drawback of integration testing and how can this be remedied?
Ans:
The problem in integration testing is that it is difficult to locate the errors discovered during the process. This is because there is a complex interaction between the system components, so when an inconsistent output is obtained, it is difficult to find the source of the error. To locate errors more easily, an incremental approach should be used for system integration and testing. You start with a minimal set of subsystems and test this system. Then, you add components to this minimal system and test the system after each increment is added.
59. Which of the two methods help to detect errors at an early stage in the development process, top down approach or bottom up approach?
Ans:
The top-down testing approach can detect errors at an early stage in the development process.
60. Why is it difficult to implement top down testing method?
Ans:
It is difficult to implement the top-down testing method because you need to produce stubs that simulate the lower levels of the system. These stubs may be simplified versions of the required components, or they may request the software tester to input a value or simulate the action of a component.
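A minimal sketch of such a stub is shown below; the function names and the canned value are hypothetical.

#include <iostream>
using namespace std;

// Stub that simulates a lower-level component which is not yet implemented.
// It simply returns a fixed, simplified result instead of the real calculation.
double getInterestRate(int accountType)
{
    cout << "[stub] getInterestRate called for account type " << accountType << endl;
    return 4.5;                       // canned value standing in for the real logic
}

// Higher-level module under test in the top-down strategy.
double yearlyInterest(double balance, int accountType)
{
    return balance * getInterestRate(accountType) / 100.0;
}

int main()
{
    cout << yearlyInterest(1000.0, 1) << endl;   // exercises the upper level through the stub
    return 0;
}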
61. Which testing should be performed when you make changes to an existing system?
Ans:
Regression testing is performed to ensure that software is working properly after making changes to an existing system.