    Software Testing Questions for Intermediate Level

    1> What is the difference between manual testing and automation testing?

    Manual Testing
    In manual testing, the accuracy and reliability of test cases are lower, as manual tests are more prone to human error.
    The time required for manual testing is high, as human resources perform all the tasks.
    In manual testing, the investment cost is low, but the Return on Investment (ROI) is low as well.
    Manual testing is preferred when the test cases are run only once or twice. It is also suitable for Exploratory, Usability, and Ad-hoc Testing.
    It allows for human observation to catch glitches, so manual testing helps in improving the customer experience.

    Automation Testing
    Automated testing, on the other hand, is more reliable, as tools and scripts are used to perform the tests (a minimal sketch follows this list).
    The time required is comparatively low, as software tools execute the tests.
    In automation testing, both the investment cost and the Return on Investment (ROI) are high.
    You can use test automation for Regression Testing, Performance Testing, Load Testing, or highly repeatable functional test cases.
    As there is no human observation involved, there is no guarantee of a positive customer experience.
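
    As a minimal sketch of what “tools and scripts perform the tests” means in practice (assuming Python with pytest; add() is a hypothetical stand-in for real application code):

    def add(a, b):
        # Hypothetical unit under test (stands in for real application code).
        return a + b

    def test_add_returns_sum():
        # pytest executes this check identically on every run, removing the
        # human error that manual execution is prone to.
        assert add(2, 3) == 5

    Running pytest on this file executes the check without human intervention, which is why automated suites scale well to regression and load scenarios.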

    2> When should you opt for manual testing over automation testing?

    There are a lot of cases when manual testing is best suited over automation testing, like:

    Short-term projects: Automated tests are aimed at saving time and resources, yet it takes time and resources to design and maintain them. For example, if you are building a small promotional website, it can be much more efficient to rely on manual testing.
    Ad-hoc Testing: In ad-hoc testing, there is no specific approach. Ad-hoc testing is a totally unplanned method of testing where the understanding and insight of the tester is the only important factor. This can be achieved using manual testing.
    Exploratory Testing: This type of testing requires the tester’s knowledge, experience, analytical and logical skills, creativity, and intuition, so human involvement is important in exploratory testing.
    Usability Testing: When performing usability testing, the tester needs to measure how user-friendly, efficient, or convenient the software or product is for the end-users. Human observation is the most important factor, so manual testing seems more appropriate.

    3> What are the phases involved in the Software Testing Life Cycle (STLC)?

    Requirement Analysis - The QA team studies the requirements to understand what is to be tested and figures out the testable requirements.
    Test Planning - In this phase, the test strategy is defined, and the objective and scope of the project are determined.
    Test Case Development - Here, detailed test cases are defined and developed. The testing team also prepares the test data for testing.
    Test Environment Setup - It is a setup of software and hardware for the testing teams to execute test cases.
    Test Execution - It is the process of executing the code and comparing the expected and actual results.
    Test Cycle Closure - It involves calling a testing team meeting and evaluating cycle completion criteria based on test coverage, quality, cost, time, and critical business objectives.

    4> What is the difference between a bug, a defect and an error?

    Bug – A bug is a fault in the software that is detected during testing. Bugs occur because of coding errors and lead the program to malfunction. They may also lead to a functional issue in the product. Fatal bugs can block functionality, result in a crash, or cause performance bottlenecks.

    Defect – A defect is a variance between expected and actual results, detected by the developer after the product goes live. In other words, a defect is an error found AFTER the application goes into production. In simple terms, it refers to trouble with the software product, its external behavior, or its internal features.

    Error – An error is a mistake, misunderstanding, or misconception on the part of a software developer. The category of developers includes software engineers, programmers, analysts, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly – either leads to an error. An error normally arises in the software and leads to a change in the program’s functionality.

    5> What makes a good test engineer?

    A software test engineer is a professional who determines how to create a process that would best test a particular product in the software industry.

    A good test engineer should have a ‘test to break’ attitude, an ability to take the point of view of the customer
    Strong desire for quality and attention to minute details
    Tact and diplomacy to maintain a cooperative relationship with developers
    Ability to communicate with both technical (developers) and non-technical (customers, management) people
    Prior experience in the software development industry is always a plus
    Ability to judge the situations and make important decisions to test high-risk areas of an application when time is limited

    6> What is regression testing? When to apply it?

    Regression testing is the testing of a previously tested program, after changes have been made, to ensure that defects have not been introduced or uncovered in the unchanged areas of the software.

    A regression test is a system-wide test whose main purpose is to ensure that a small change in one part of the system does not break existing functionality elsewhere in the system. It is recommended to perform regression testing on the occurrence of the following events (a short sketch follows this list):

    When new functionalities are added
    In case of requirement changes
    When there is a defect fix
    When there are performance issues
    In case of environment changes
    When there is a patch fix
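
    As a hedged sketch (assuming pytest; discount() and the marker name are hypothetical), teams often tag such re-runnable checks so the whole set can be executed after any of the events above:

    import pytest

    def discount(price, percent):
        # Hypothetical, previously tested function.
        return price * (1 - percent / 100)

    @pytest.mark.regression
    def test_existing_discount_still_correct():
        # Re-running this unchanged test after a defect fix, new feature,
        # or patch confirms the old behavior still holds.
        assert discount(100, 10) == 90

    With the marker registered in pytest.ini, running pytest -m regression re-runs only these guard tests.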

    7> What is the difference between system testing and integration testing?

    System Testing

    System Testing tests the software application as a whole to check whether the system is compliant with the user requirements.
    It involves both functional and non-functional testing types, such as sanity, usability, performance, stress, and load testing.
    It is high-level testing performed after integration testing.

    Integration Testing

    Integration testing tests the interfaces between modules of the software application.
    Only functional testing is performed, to check whether the modules, when combined, give the right outcome (see the sketch after this list).
    It is low-level testing performed after unit testing.
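
    As a hedged illustration of the difference (assuming Python with pytest; both modules are hypothetical), an integration test targets the interface between two modules rather than the whole system:

    def format_name(first, last):
        # Module A: a hypothetical formatting module.
        return f"{last}, {first}"

    def greet(first, last):
        # Module B: depends on module A through its interface.
        return f"Hello, {format_name(first, last)}!"

    def test_greet_integrates_with_format_name():
        # Integration test: verifies the two modules combined give the
        # right outcome; a system test would exercise the whole product.
        assert greet("Ada", "Lovelace") == "Hello, Lovelace, Ada!"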

    8> What is the test harness?

    A test harness is a collection of software and test data arranged to test a program unit by running it under varying conditions – such as stress, load, or data-driven scenarios – while monitoring its behavior and outputs. A test harness contains two main parts (a minimal sketch follows):

    – A test execution engine
    – A test script repository
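
    A minimal sketch of those two parts (assuming Python; all names here are hypothetical): the script repository is just a collection of test callables, and the execution engine runs each one while recording the outcome:

    def script_login_ok():
        # Hypothetical test scripts make up the script repository.
        assert 1 + 1 == 2

    def script_logout_ok():
        assert "user" in "user session"

    SCRIPT_REPOSITORY = [script_login_ok, script_logout_ok]

    def execution_engine(scripts):
        # Run every script, monitor outcomes, and summarize behavior.
        results = {}
        for script in scripts:
            try:
                script()
                results[script.__name__] = "PASS"
            except AssertionError:
                results[script.__name__] = "FAIL"
        return results

    print(execution_engine(SCRIPT_REPOSITORY))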

    9> What is test closure?

    Test Closure is a document that gives a summary of all the tests conducted during the software development life cycle, along with a detailed analysis of the bugs removed and errors found. This report contains the total number of test cases, the total number of test cases executed, the total number of defects found, the total number of defects fixed, the total number of bugs not fixed, the total number of bugs rejected, and so forth.

    10> What is the difference between Positive and Negative Testing?

    Positive testing

    Positive testing determines that your application works as expected. If an error is encountered during positive testing, the test fails.
    In this testing, the tester always checks only a valid set of values.

    Negative testing

    Negative testing ensures that your application can gracefully handle invalid input or unexpected user behavior.
    Testers apply as much creativity as possible while validating the application against invalid data (see the sketch after this list).
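
    As a sketch (assuming pytest and a hypothetical age-validation function), the same function gets both kinds of tests:

    import pytest

    def set_age(age):
        # Hypothetical function under test: accepts ages 0-120 only.
        if not 0 <= age <= 120:
            raise ValueError("age out of range")
        return age

    def test_valid_age_accepted():
        # Positive test: a valid value works as expected.
        assert set_age(30) == 30

    def test_invalid_age_rejected():
        # Negative test: invalid input is handled gracefully, not a crash.
        with pytest.raises(ValueError):
            set_age(-5)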

    11> Define what is a critical bug.

    A critical bug is a bug that has the potential to affect the majority of the functionality of the given application. It means a large piece of functionality or a major system component is completely broken and there is no workaround to move further. The application cannot be shipped to the end client until the critical bug is fixed.

    12> What is the pesticide paradox? How to overcome it?

    According to the pesticide paradox, if the same tests are repeated over and over again, eventually the same test cases will no longer find new bugs. Developers will also be extra careful in those places where testers found more defects and might not look into other areas. Methods to prevent the pesticide paradox:

    To write a whole new set of test cases to exercise different parts of the software.
    To prepare new test cases and add them to the existing test cases.
    Using these methods, it’s possible to find more defects in the area where defect numbers dropped.

    13> What is Defect Cascading in Software Testing?

    Defect Cascading is the process of one defect triggering other defects in the application. When a defect goes unnoticed during testing, it invokes other defects, and as a result multiple defects crop up in the later stages of development. If defect cascading continues to affect other features in the application, identifying the affected feature becomes challenging. You can create different test cases to address this issue, but even then it is difficult and time-consuming.

    14> What does the term ‘quality’ mean when testing?

    In general, quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. But again ‘quality’ is a subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme of things. For example, each type of ‘customer’ will have their own slant on ‘quality’ – the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

    15> What is black box testing, and what are the various techniques?

    Black-Box Testing, also known as specification-based testing, analyzes the functionality of a software/application without knowing much about the internal structure/design of the item. The purpose of this testing is to check the functionality of the system as a whole to make sure that it works correctly and meets user demands. Various black-box testing techniques are listed below; an equivalence-partitioning sketch follows the list:

    Equivalence Partitioning
    Boundary Value Analysis
    Decision Table Based Technique
    Cause-effect Graphing
    Use Case Testing
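
    For instance, a hedged equivalence-partitioning sketch (assuming pytest; the grading function and its partitions are hypothetical): the input domain is split into partitions, and one representative value is tested per partition:

    import pytest

    def grade(score):
        # Hypothetical function: partitions are 0-39 (fail),
        # 40-69 (pass), 70-100 (distinction); anything else is invalid.
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        if score < 40:
            return "fail"
        return "pass" if score < 70 else "distinction"

    # One representative value stands in for its whole partition.
    @pytest.mark.parametrize("score,expected",
                             [(20, "fail"), (55, "pass"), (85, "distinction")])
    def test_one_value_per_partition(score, expected):
        assert grade(score) == expected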

    16> What is white box testing, and what are the various techniques?

    White-Box Testing, also known as structure-based testing, requires profound knowledge of the code, as it includes testing of the structural parts of the application. The purpose of this testing is to enhance security, check the flow of inputs/outputs through the application, and improve design and usability. Various white-box testing techniques are listed below; a decision-coverage sketch follows the list:

    Statement Coverage
    Decision Coverage
    Condition Coverage
    Multiple Condition Coverage
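
    A hedged sketch of decision coverage (assuming Python with pytest; the function is hypothetical): every decision in the code must evaluate to both true and false across the test set:

    def can_withdraw(balance, amount):
        # Hypothetical code under test with one decision (two outcomes).
        if amount <= balance:
            return True
        return False

    def test_decision_true_branch():
        assert can_withdraw(100, 50) is True    # decision evaluates True

    def test_decision_false_branch():
        assert can_withdraw(100, 500) is False  # decision evaluates False

    Together these two tests exercise both outcomes of the decision, achieving decision coverage (and, here, full statement coverage as well).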

    17> What are the Experience-based testing techniques?

    Experience-based testing is all about discovery, investigation, and learning. The tester constantly studies and analyzes the product and accordingly applies his or her skills, traits, and experience to develop test strategies and test cases to perform the necessary testing. Various experience-based testing techniques are:

    Exploratory Testing
    Error Guessing

    18> What is a top-down and bottom-up approach in testing?

    Top-Down – Testing happens from top to bottom: high-level modules are tested first, and low-level modules after that, with stubs standing in for modules that are not yet integrated. Lastly, the low-level modules are integrated with the high-level ones to ensure the system works as expected (a stub-based sketch follows).

    Bottom-Up – Testing happens from the bottom levels upward: the lowest-level modules are tested first, and the high-level modules afterward, with drivers used to call the modules under test. Lastly, the high-level modules are integrated with the lower levels to ensure the system works as intended.
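
    A hedged sketch of the top-down case (assuming Python with pytest; all names are hypothetical): the high-level module is tested first, with a stub standing in for the not-yet-integrated low-level module. Bottom-up testing would instead use a driver to call a finished low-level module:

    def order_total(items, price_lookup):
        # High-level module under test: depends on a low-level price lookup.
        return sum(price_lookup(item) for item in items)

    def stub_price_lookup(item):
        # Stub: replaces the real low-level module, which is tested later.
        return {"apple": 2, "bread": 3}.get(item, 0)

    def test_order_total_with_stub():
        # Top-down: the high-level logic is verified before the real
        # low-level lookup exists, using the stub's canned answers.
        assert order_total(["apple", "bread"], stub_price_lookup) == 5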

    19> What is the difference between smoke testing and sanity testing?

    Smoke Testing

    System Builds - Tests are executed on the initial builds of the software product
    Motive of Testing - To measure the stability of the newly created build so it can face more rigorous testing
    Subset of? - Is a subset of acceptance testing
    Documentation - Involves documentation and scripting work
    Test Coverage - Shallow & wide approach to include all the major functionalities without going too deep
    Performed By? - Executed by developers or testers

    Sanity Testing

    System Builds - Tests are done on builds that have passed smoke tests & rounds of regression tests
    Motive of Testing - To evaluate the rationality and correctness of the functionalities of the software build
    Subset of? - Is a subset of regression testing
    Documentation - Doesn’t emphasize any sort of documentation
    Test Coverage - Narrow & deep approach involving detailed testing of functionalities and features
    Performed By? - Executed by testers

    20> What is the difference between static testing and dynamic testing?

    Static Testing

    Static Testing is a white-box testing technique; it includes the process of reviewing documents and code to identify defects in the very early stages of the SDLC.
    Static Testing is implemented at the verification stage.
    Static testing is performed before the code deployment.
    In this type of testing, errors are detected without executing the program.

    Dynamic Testing

    Dynamic testing includes the process of executing the code and is done at the later stages of the software development life cycle. It validates the output against the expected results.
    Dynamic testing starts during the validation stage.
    Dynamic testing is performed after the code deployment.
    Execution of the code is necessary for dynamic testing.

    21> How will you determine when to stop testing?

    Deciding when to stop testing can be quite difficult. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Some common factors in deciding when to stop testing are:

    Deadlines (release deadlines, testing deadlines, etc.)
    Test cases completed with a certain percentage passed
    When the test budget is depleted
    Coverage of code or functionality or requirements reaches a specified point
    Bug rate falls below a certain level
    When the beta or alpha testing period ends

    22> What if the software is so buggy it can’t really be tested at all?

    Often testers encounter a bug that can’t be resolved at all. In such situations, the best bet is for testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can stem from severe causes such as insufficient unit or integration testing, poor design, or improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

    23> How do you test a product if the requirements are yet to be frozen?

    It’s possible that a requirements specification is not available for a piece of the product. It might take serious effort to determine if an application has significant unexpected functionality, and that would indicate deeper problems in the software development process. If the functionality isn’t necessary to the purpose of the application, it should be removed. Otherwise, create a test plan based on the assumptions made about the product, and make sure you get all the assumptions well documented in the test plan.

    24> What if an organization is growing so fast that fixed testing processes are impossible? What to do in such situations?

    This is a very common problem in the software industry, especially considering the new technologies that are being incorporated when developing the product. There is no easy solution in this situation; you could:
    • Hire good and skilled people
    • Management should ‘ruthlessly prioritize’ quality issues and maintain focus on the customer
    • Everyone in the organization should be clear on what ‘quality’ means to the end-user

    25> How do you know the code has met specifications?

    ‘Good code’ is code that works, is bug-free, and is readable and maintainable. Most organizations have coding ‘standards’ that all developers are supposed to adhere to, but everyone has different ideas about what’s best and what constitutes too many or too few rules. There are tools like the traceability matrix, which ensures that the requirements are mapped to test cases. When the execution of all test cases finishes successfully, it indicates that the code has met the requirements.
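
    A hedged sketch of such a traceability matrix (assuming Python; the requirement IDs and test names are hypothetical): each requirement maps to the test cases that verify it, so unverified requirements are easy to spot:

    # Hypothetical traceability matrix: requirement -> covering test cases.
    traceability = {
        "REQ-001 login":    ["test_login_ok", "test_login_bad_password"],
        "REQ-002 checkout": ["test_checkout_total"],
        "REQ-003 reports":  [],   # no tests yet -> requirement not verified
    }

    uncovered = [req for req, tests in traceability.items() if not tests]
    print("Requirements without test coverage:", uncovered)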

    26> In which cases would you choose automated testing over manual testing?

    Automated testing can be considered over manual testing during the following situations:

    When tests require periodic execution
    Tests include repetitive steps
    Tests need to be executed in a standard runtime environment
    When you have less time to complete the testing phase
    When there is a lot of code that needs to be repeatedly tested
    Reports are required for every execution

    27> What is ‘configuration management’?

    Every high-functioning organization has a “master plan” that details how they are supposed to operate and accomplish tasks. Software development and testing are no different. Software configuration management (SCM) is a set of processes, policies, and tools that organize, control, coordinate, and track:

    code
    documentation
    problems
    change requests
    designs and tools
    compilers and libraries

    28> Is it true that we can do system testing at any stage?

    In system testing, all the components of the software are tested as a whole in order to ensure that the overall product meets the requirements specified. So, no. The system testing must start only if all units are in place and are working properly. System testing usually happens before the UAT (User Acceptance Testing).

    29> What are some best practices that you should follow when writing test cases?

    A few guidelines that you need to follow while writing test cases:

    Prioritize which test cases to write based on the project timelines and the risk factors of your application.
    Remember the 80/20 rule. To achieve the best coverage, 20% of your tests should cover 80% of your application.
    Don’t try to write all test cases in one attempt; instead, improve them as you progress.
    List down your test cases and classify them based on business scenarios and functionality.
    Make sure test cases are modular and test case steps are as granular as possible.
    Write test cases in such a way that others can understand them easily & modify if required.
    Always keep end-users’ requirements in the back of your mind, because ultimately the software is designed for the customer.
    Actively use a test management tool to manage a stable release cycle.
    Monitor your test cases regularly. Write unique test cases and remove irrelevant & duplicate test cases.

    30> Why is it that the boundary value analysis provides good test cases?

    The reason why boundary value analysis provides good test cases is that usually, a greater number of errors occur at the boundaries rather than in the center of the input domain for a test.

    In the boundary value analysis technique, test cases are designed to include values at the boundaries. If the input is within the boundary values, it is considered ‘positive testing’; if the input is outside the boundary values, it is considered ‘negative testing.’ It includes the maximum, minimum, inside and outside edges, and typical or error values.

    Let’s suppose you are testing an input box that accepts numbers from 1 to 10.

    Using the boundary value analysis we can define three classes of test cases:

    Test cases with test data exactly at the input boundaries: 1 and 10 (in this case)
    Test data with values just below the extreme edges of the input domain: 0 and 9
    Test data with values just above the extreme edges of the input domain: 2 and 11
    So the boundary values would be 0, 1, 2 and 9, 10, 11.
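
    Those six values translate directly into a hedged test sketch (assuming pytest and a hypothetical validator for the 1-10 input box):

    import pytest

    def accepts(n):
        # Hypothetical validator for an input box taking numbers 1-10.
        return 1 <= n <= 10

    # Boundary values 0, 1, 2 and 9, 10, 11 from the analysis above:
    @pytest.mark.parametrize("value,expected", [
        (0, False),   # just below the lower boundary (negative test)
        (1, True),    # exactly on the lower boundary  (positive test)
        (2, True),    # just above the lower boundary  (positive test)
        (9, True),    # just below the upper boundary  (positive test)
        (10, True),   # exactly on the upper boundary  (positive test)
        (11, False),  # just above the upper boundary  (negative test)
    ])
    def test_boundaries(value, expected):
        assert accepts(value) == expected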

    31> Why is it impossible to test a program thoroughly or in other terms 100% bug-free?

    It is impossible to build a software product that is 100% bug-free. You can only minimize the errors, flaws, failures, or faults in a computer program or system that cause it to produce an incorrect or unexpected result.

    Here are the two principal reasons that make it impossible to test a program entirely.

    Software specifications can be subjective and can lead to different interpretations.
    A software program might require too many inputs, too many outputs, and too many path combinations to test.

    32> Can automation testing replace manual testing?

    Automation testing isn’t a replacement for manual testing. No matter how good automated tests are, you cannot automate everything. Manual tests play an important role in software development and come in handy whenever you have a case where you cannot use automation. Automated and manual testing each have their own strengths and weaknesses. Manual testing helps us to understand the entire problem and explore other angles of tests with more flexibility. On the other hand, automated testing helps save time in the long run by accomplishing a large number of surface-level tests in a short time.

    33> What is Defect Life Cycle?

    The Defect Life Cycle, or Bug Life Cycle, in software testing is the specific set of states that a defect or bug goes through during its entire life. The purpose of the defect life cycle is to easily coordinate and communicate the current status of a defect as it changes hands among various assignees, and to make the defect-fixing process systematic and efficient. (A state-machine sketch follows the status list below.)

    New: When a defect is logged and posted for the first time, it is assigned the status NEW.
    Assigned: Once the bug is posted by the tester, the test lead approves the bug and assigns it to the developer team.
    Open: The developer starts analyzing and works on the defect fix.
    Fixed: When a developer makes the necessary code change and verifies the change, he or she can mark the bug status as "Fixed."
    Pending retest: Once the defect is fixed, the developer hands the code over to the tester for retesting. Since the retesting remains pending on the tester's end, the status assigned is "Pending retest."
    Retest: The tester retests the code at this stage to check whether the developer has fixed the defect, and changes the status to "Retest."
    Verified: The tester re-tests the bug after it has been fixed by the developer. If no bug is detected in the software, the bug is considered fixed and the status "Verified" is assigned.
    Reopen: If the bug persists even after the developer has fixed it, the tester changes the status to "Reopened," and the bug goes through the life cycle once again.
    Closed: If the bug no longer exists, the tester assigns the status "Closed."
    Duplicate: If the defect is reported twice or corresponds to the same concept as another bug, the status is changed to "Duplicate."
    Rejected: If the developer feels the defect is not a genuine defect, the status is changed to "Rejected."
    Deferred: If the present bug is not of prime priority and is expected to be fixed in the next release, the status "Deferred" is assigned to such bugs.
    Not a bug: If the issue does not affect the functionality of the application, the status assigned is "Not a bug."
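
    Since the life cycle is essentially a state machine, here is a hedged sketch (assuming Python; the transition table is simplified to a few of the states above):

    # Simplified sketch of the defect life cycle as a state machine.
    ALLOWED = {
        "New":            {"Assigned", "Rejected", "Duplicate", "Deferred"},
        "Assigned":       {"Open"},
        "Open":           {"Fixed"},
        "Fixed":          {"Pending retest"},
        "Pending retest": {"Retest"},
        "Retest":         {"Verified", "Reopen"},
        "Verified":       {"Closed"},
        "Reopen":         {"Assigned"},
    }

    def transition(current, new):
        # Reject status changes the workflow above does not allow.
        if new not in ALLOWED.get(current, set()):
            raise ValueError(f"illegal transition {current} -> {new}")
        return new

    status = transition("New", "Assigned")   # OK
    status = transition(status, "Open")      # OK
    # transition("New", "Closed") would raise ValueError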

    Defect Life Cycle Explanation

    Tester finds the defect
    a> Status assigned to defect- New
    b> The defect is forwarded to the Project Manager for analysis
    c> The Project Manager decides whether the defect is valid
    d> If the defect is not valid, it is given the status "Rejected."
    e> If the defect is not rejected, the next step is to check whether it is in scope. Suppose there is another function – say, email functionality – in the same application, and you find a problem with it. But it is not part of the current release; such defects are assigned a postponed or deferred status.
    f> Next, the manager verifies whether a similar defect was raised earlier. If yes, the defect is assigned the status "Duplicate."
    g> If not, the defect is assigned to the developer, who starts fixing the code. During this stage, the defect is assigned the status "In Progress."
    h> Once the code is fixed, the defect is assigned the status "Fixed."
    i> Next, the tester will re-test the code. If the test case passes, the defect is closed. If the test case fails again, the defect is re-opened and assigned to the developer.
    j> Consider a situation where, during the first release of Flight Reservation, a defect was found in Fax Order that was fixed and assigned the status "Closed." During the second upgrade release, the same defect re-surfaced. In such cases, a closed defect will be re-opened.