A Beginner's Guide to Exploratory Testing Without Automation
Exploratory testing is different from traditional manual testing. In manual testing, you follow a specific set of steps from user stories or requirements documents and check whether the results match what's expected, which leaves little room for analytical thinking. Exploratory testing, on the other hand, gives testers the freedom to test beyond the written documents and what is already known about the application, encouraging them to use their analytical skills to discover new things.
Exploratory testing frameworks are designed to help us create mental models that we can easily apply to different parts of the application. They focus our testing efforts by making a specific function clearer and more structured. Let's look at eight of these frameworks.
Equivalence class partitioning
The equivalence class partitioning framework groups inputs with the same outcome or similar processing into categories.
This approach allows for testing one example from each category instead of every single input.
Example: Using a tax calculator, income amounts can be divided into tax brackets like [0 – 5000], [5001 – 15000], and [>15000]. These are our equivalence classes.
Each income amount in a group follows the same tax rules, so testing with just three amounts, one from each bracket (e.g., 2,000, 10,000, and 20,000), effectively covers the positive test cases.
This method also helps identify inputs that should cause errors, such as [negative numbers], [letters], and [symbols].
Testing one example from each of these error-prone groups covers the negative test cases.
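To make this concrete, here is a minimal sketch in Python of how the partitions translate into test picks. The calculate_tax function is a hypothetical stand-in (the 5%, 10%, and 30% rates are borrowed from the boundary value example below), so treat it as an illustration rather than a real implementation.

```python
# A minimal sketch of equivalence class partitioning for the tax calculator example.
# calculate_tax is a hypothetical stand-in; the bracket rates are assumed.

def calculate_tax(income):
    if not isinstance(income, (int, float)) or income < 0:
        raise ValueError("income must be a non-negative number")
    if income <= 5000:
        return income * 0.05
    if income <= 15000:
        return income * 0.10
    return income * 0.30

# One representative value per valid equivalence class.
valid_representatives = {
    "[0 - 5000]": 2000,
    "[5001 - 15000]": 10000,
    "[>15000]": 20000,
}

# One representative value per invalid (error) class.
invalid_representatives = {
    "negative numbers": -100,
    "letters": "abc",
    "symbols": "$%&",
}

for name, income in valid_representatives.items():
    print(f"{name}: tax on {income} = {calculate_tax(income)}")

for name, income in invalid_representatives.items():
    try:
        calculate_tax(income)
        print(f"{name}: accepted without error -- potential defect")
    except ValueError:
        print(f"{name}: rejected as expected")
```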
Boundary value analysis
Boundary value analysis builds on the concept of equivalence class partitioning by focusing on the boundary conditions of each class.
This method is particularly useful for finding errors since boundary conditions can be unclear and might not always be correctly defined.
For instance, consider a simple tax calculator with rules stating “5% tax for income below $5,000, 10% for income between $5,000 and $15,000, and 30% tax for income over $15,000.” The ambiguity arises in determining whether $5,000 and $15,000 fall into the lower or higher tax brackets.
Boundary value analysis helps clarify these ambiguous areas by testing the values at the limits of each class and just beyond them, in addition to a value that falls squarely within each class range.
Let's dive into the tax calculator example and have a closer look at the boundary values for each of the equivalence classes we identified earlier. The first class, [0 – 5000], has 0 and 5,000 as its boundary values. Now, if we think about it, when the income is 0, there really shouldn't be any taxes, right? So, this gives us new equivalence classes to consider: [0] and [1 – 5000]. To make sure we cover all our bases for the positive test cases, the boundary values we need to check out are [0, 1, 5000, 5001, 15000, 15001].
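As a quick sketch of how those boundary values could be exercised, the snippet below runs them through the same hypothetical calculator, assuming the interpretation that $5,000 and $15,000 belong to the lower bracket; both the function and that interpretation are assumptions made for illustration.

```python
# Boundary value analysis sketch for the tax calculator example.
# Assumes $5,000 and $15,000 fall into the lower bracket; the real rule may differ,
# which is exactly the kind of ambiguity these tests are meant to expose.

def calculate_tax(income):
    if income < 0:
        raise ValueError("income must be non-negative")
    if income <= 5000:
        return income * 0.05
    if income <= 15000:
        return income * 0.10
    return income * 0.30

# Boundary values from the refined classes [0], [1 - 5000], [5001 - 15000], [>15000].
expected_rates = {
    0: 0.05,      # open question: should an income of 0 be taxed at all?
    1: 0.05,
    5000: 0.05,
    5001: 0.10,
    15000: 0.10,
    15001: 0.30,
}

for income, rate in expected_rates.items():
    actual = calculate_tax(income)
    status = "OK" if actual == income * rate else "MISMATCH"
    print(f"income={income}: expected rate {rate:.0%}, got tax {actual} -> {status}")
```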
State transition
State transition testing is a method used in software testing to check if the system works properly as it changes from one state to another. It's especially helpful for testing systems whose behavior changes based on specific states or conditions.
Here's how state transition testing works:
Identify States: First, figure out all the different states the system can be in while it's running. These states might be "logged in," "logged out," "idle," "processing," and so on. Knowing all the possible states is important for testing well.
Identify Transitions: Next, identify the events or actions that cause the system to transition from one state to another. These could be user actions, system events, or external inputs. For example, logging in could transition the system from "logged out" to "logged in" state.
Create Transition Diagram: After figuring out the states and how to move from one to another, draw a state transition diagram. This diagram shows the system's states and how they change. It makes it easier to see how the system works and to plan tests.
Design Test Cases: Use the transition diagram to create test cases that cover every possible state change. Each test case should show a series of actions or events that make the system move from one state to another. Remember to include both correct and incorrect transitions in your test cases.
Execute Test Cases: Run the test cases you've created to check that the system moves between states as it should, observing how the system behaves and comparing it with what you expect.
Analyze Results: Look at the test results to find any differences between what was expected and what actually happened when the system changed states. This means spotting any problems or defects that need fixing.
By following these steps, state transition testing helps ensure that the system behaves correctly as it moves through different states, improving the overall reliability and quality of the software.
In the following diagram:
"Logged out" represents the initial state when the user is not authenticated.
"Logged in" represents the state when the user is authenticated and logged in.
The transition labeled "Login" represents the action of logging in, transitioning the system from the "Logged out" state to the "Logged in" state.
The transition labeled "Logout" represents the action of logging out, transitioning the system from the "Logged in" state back to the "Logged out" state.
This diagram visually depicts the possible states of the system and the transitions between them, serving as a basis for designing test cases for state transition testing.
+------------+      Login      +-----------+
|            | --------------> |           |
| Logged out |                 | Logged in |
|            | <-------------- |           |
+------------+      Logout     +-----------+
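To show how the diagram can drive tests, here is a minimal sketch of the login/logout states as a tiny state machine, with one valid path and one invalid transition; the class and the transition table are illustrative rather than taken from a real system.

```python
# A minimal state machine sketch for the login/logout example above.
# States, events, and transitions are illustrative.

VALID_TRANSITIONS = {
    ("Logged out", "Login"): "Logged in",
    ("Logged in", "Logout"): "Logged out",
}

class AuthSession:
    def __init__(self):
        self.state = "Logged out"  # initial state

    def apply(self, event):
        key = (self.state, event)
        if key not in VALID_TRANSITIONS:
            raise ValueError(f"invalid transition: {event} from {self.state}")
        self.state = VALID_TRANSITIONS[key]

# Test case: a valid path through the diagram.
session = AuthSession()
session.apply("Login")
assert session.state == "Logged in"
session.apply("Logout")
assert session.state == "Logged out"

# Test case: an invalid transition (logging out while already logged out).
try:
    session.apply("Logout")
    print("Defect: invalid transition was accepted")
except ValueError as exc:
    print(f"Rejected as expected: {exc}")
```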
Decision table
Decision Table testing is a technique used to test systems that have a set of conditions that result in different actions or outcomes. It's particularly useful when there are multiple combinations of inputs that lead to different results, making it difficult to manage all possible scenarios individually. Decision tables help in organizing and managing such complex logic.
Here's how Decision Table testing works:
Identify Conditions: Begin by identifying all the conditions or inputs that affect the behavior of the system. These conditions can be requirements, constraints, or factors that influence the outcome. For example, in a banking system, conditions could include the type of account, the amount of money deposited, the customer's credit score, etc.
Identify Actions: Next, identify all possible actions or outcomes that can occur based on the combinations of conditions. These actions could be system responses, calculations, or decisions made by the system. Continuing with the banking example, actions could include approving a loan, denying a transaction, increasing credit limit, etc.
Create the Decision Table: Once you have identified the conditions and actions, create a decision table that represents all possible combinations of inputs and their corresponding outcomes. The decision table typically has one column per condition and one column per action, with each row representing a different combination of condition values.
Fill in the Table: Populate the decision table with the possible combinations of conditions and their corresponding actions. Each row then represents a specific scenario: one combination of inputs together with the outputs expected for it.
Analyze and Generate Test Cases: Analyze the decision table to ensure that all possible combinations of conditions are covered. Generate test cases based on the combinations identified in the decision table. Each test case should cover a unique combination of conditions and actions.
Execute Test Cases: Execute the generated test cases to verify that the system behaves correctly for each combination of inputs. Observe the system's responses and compare them with the expected outcomes defined in the decision table.
Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual behavior of the system. Identify defects or issues that need to be addressed and verify that the system behaves as expected for all combinations of inputs.
Decision table testing helps in managing complex logic by organizing various combinations of conditions and actions into a structured format, making it easier to design comprehensive test cases and ensure thorough testing coverage.
| Condition 1 | Condition 2 | Condition 3 | ... | Action 1 | Action 2 | Action 3 |
|-------------|-------------|-------------|-----|----------|----------|----------|
| True | True | False | ... | Perform | Perform | |
| False | True | True | ... | | Perform | Perform |
| True | False | True | ... | Perform | | Perform |
| False | False | False | ... | | | |
The Condition columns represent the inputs (Condition 1, Condition 2, etc.), the Action columns represent the outcomes, and each row represents a different combination of conditions.
The action cells in a row indicate what should happen for that particular combination of conditions.
The ellipsis (...) represents additional columns and conditions that may exist in the actual decision table.
"True" and "False" represent the presence or absence of a condition for a particular combination.
"Perform" indicates that a specific action should be taken for the corresponding combination of conditions.
This decision table diagram visually represents the relationships between conditions and actions, helping in understanding the logic of the system and generating test cases based on different combinations of inputs.
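As a sketch of how a decision table can drive tests, the snippet below encodes a small rule set loosely based on the banking example above (credit standing, deposit size, existing customer) as rows of conditions and expected actions; the rules and the decide() function are invented purely for illustration.

```python
# A sketch of decision-table-driven testing, loosely based on the banking example.
# Both the rules and the decide() function are hypothetical.

def decide(good_credit, large_deposit, existing_customer):
    """Stand-in system under test: returns the set of actions it takes."""
    actions = set()
    if good_credit and existing_customer:
        actions.add("approve_loan")
    if good_credit and large_deposit:
        actions.add("increase_credit_limit")
    if not good_credit and not existing_customer:
        actions.add("deny_transaction")
    return actions

# Decision table: each row is one combination of conditions plus the expected actions.
decision_table = [
    # (good_credit, large_deposit, existing_customer), expected actions
    ((True,  True,  True),  {"approve_loan", "increase_credit_limit"}),
    ((True,  False, True),  {"approve_loan"}),
    ((True,  True,  False), {"increase_credit_limit"}),
    ((False, True,  True),  set()),
    ((False, False, False), {"deny_transaction"}),
]

for conditions, expected in decision_table:
    actual = decide(*conditions)
    status = "OK" if actual == expected else f"MISMATCH (got {actual})"
    print(f"conditions={conditions}: expected {expected} -> {status}")
```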
Cause-effect graphing
The Cause-Effect Graph technique, also known as Cause-Effect Analysis, is a black-box testing technique used to generate test cases based on the relationships between inputs (causes) and outputs (effects) of a system. It helps in identifying the most effective test cases by focusing on the logical relationships between input conditions and the corresponding actions or outcomes of the system.
Here's how the Cause-Effect Graph technique works:
Identify Inputs and Outputs: Begin by identifying all the inputs (causes) and outputs (effects) of the system. Inputs are the conditions, variables, or events that influence the behavior of the system, while outputs are the responses, outcomes, or actions produced by the system.
Create a Cause-Effect Graph: Construct a cause-effect graph that visually represents the relationships between inputs and outputs. This graph typically consists of nodes representing inputs and outputs, and edges representing the cause-effect relationships between them. The nodes are connected by arrows indicating the flow of influence from causes to effects.
Identify Valid Combinations: Analyze the cause-effect graph to identify valid combinations of inputs that lead to specific outputs or effects. Each valid combination represents a unique test case that exercises a particular aspect of the system's behavior.
Generate Test Cases: Based on the identified valid combinations, generate test cases that cover all critical paths and decision points in the cause-effect graph. Each test case should aim to trigger specific scenarios or conditions and observe the corresponding outputs or effects produced by the system.
Execute Test Cases: Execute the generated test cases to verify the behavior of the system under different input conditions. Observe the system's responses and compare them with the expected outputs defined in the cause-effect graph.
Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual behavior of the system. Identify defects or issues that need to be addressed and verify that the system behaves as expected for all valid combinations of inputs.
The Cause-Effect Graph technique helps in systematically identifying test scenarios based on the logical relationships between inputs and outputs, enabling efficient and effective test case generation. It ensures that the most critical aspects of the system's behavior are thoroughly tested while minimizing redundancy and unnecessary test cases.
Let's illustrate the Cause-Effect Graph technique with a simple example.
Suppose we have a system that determines whether a student is eligible for graduation based on two conditions: the number of credits earned and the GPA (Grade Point Average). The system's output is whether the student is eligible for graduation or not.
Here's how we can represent this using a cause-effect graph:
+--------------+                +--------------+
|  Number of   |                |     GPA      |
|   Credits    |                |              |
+------+-------+                +-------+------+
       |                                |
       | Yes                            | Yes
       |                                |
       +----------------+---------------+
                        |
                       AND
                        |
                        v
              +------------------+
              |   Eligible for   |
              |    Graduation    |
              +------------------+
In this cause-effect graph:
The nodes represent inputs (Number of Credits and GPA) and outputs (Eligible for Graduation).
The arrows represent the cause-effect relationships between inputs and outputs: the Number of Credits and the GPA are both causes, and they are joined by an AND because both conditions must be satisfied for the student to be eligible.
The "Yes" labels on the arrows indicate that the corresponding condition is met.
Based on this cause-effect graph, we can generate test cases to cover different scenarios:
Test Case 1: Number of Credits = 120, GPA = 3.5
- Expected Result: Eligible for Graduation (Yes)
Test Case 2: Number of Credits = 100, GPA = 3.8
- Expected Result: Not Eligible for Graduation (No)
Test Case 3: Number of Credits = 130, GPA = 2.9
- Expected Result: Not Eligible for Graduation (No)
These test cases cover different combinations of inputs and verify whether the system produces the correct output based on the defined cause-effect relationships.
This example demonstrates how the Cause-Effect Graph technique helps in visually representing the relationships between inputs and outputs, facilitating test case generation and ensuring thorough testing coverage.
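The eligibility rule behind those test cases can also be sketched in code. The thresholds below (at least 120 credits and a GPA of at least 3.0) are assumptions inferred from the expected results above, not stated requirements.

```python
# Sketch of the graduation-eligibility effect driven by two causes.
# Thresholds are assumptions inferred from the test cases above.
MIN_CREDITS = 120
MIN_GPA = 3.0

def eligible_for_graduation(credits, gpa):
    # Both causes must hold (the AND node in the cause-effect graph).
    return credits >= MIN_CREDITS and gpa >= MIN_GPA

test_cases = [
    (120, 3.5, True),   # Test Case 1
    (100, 3.8, False),  # Test Case 2
    (130, 2.9, False),  # Test Case 3
]

for credits, gpa, expected in test_cases:
    actual = eligible_for_graduation(credits, gpa)
    print(f"credits={credits}, gpa={gpa}: expected {expected}, got {actual}")
```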
Pairwise testing
Pairwise testing, also known as all-pairs testing or pairwise combination testing, is a combinatorial testing technique used to generate test cases for systems with multiple parameters or inputs. Rather than exercising every possible combination of parameters, it ensures that every pair of parameter values appears together in at least one test case. This greatly reduces the number of test cases required while still covering the two-way interactions between parameters.
Let's illustrate pairwise testing with a simple example involving three parameters: A, B, and C.
Consider the following parameters with their respective values:
Parameter A: {a1, a2}
Parameter B: {b1, b2, b3}
Parameter C: {c1, c2}
A pairwise combination of these parameters means that every possible pair of values for the parameters will be tested at least once.
Here's how pairwise testing can be represented diagrammatically:
Test Case | Parameter A | Parameter B | Parameter C
-----------------------------------------------------
1 | a1 | b1 | c1
2 | a1 | b2 | c2
3 | a1 | b3 | c1
4 | a2 | b1 | c2
5 | a2 | b2 | c1
6 | a2 | b3 | c2
In this table:
Each row represents a test case.
The values of Parameter A, Parameter B, and Parameter C are filled in according to pairwise combinations.
Each possible pair of values for Parameter A, B, and C appears at least once in the generated test cases.
This covers every two-way interaction between the parameters with only 6 test cases, compared to the 12 (2 × 3 × 2) combinations that exhaustive testing would require.
Pairwise testing helps in reducing the number of test cases required compared to exhaustive testing, especially for systems with a large number of parameters. It balances efficiency with thoroughness, making it a widely used technique in software testing.
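If you want to convince yourself that the six rows above really do cover every pair of parameter values, a short script can check it; the test cases are copied from the table, and the pair-coverage check itself is generic.

```python
from itertools import combinations, product

# The six pairwise test cases from the table above.
test_cases = [
    ("a1", "b1", "c1"),
    ("a1", "b2", "c2"),
    ("a1", "b3", "c1"),
    ("a2", "b1", "c2"),
    ("a2", "b2", "c1"),
    ("a2", "b3", "c2"),
]

# The parameter value sets, in the same order as the tuple positions above.
parameters = [
    {"a1", "a2"},        # Parameter A
    {"b1", "b2", "b3"},  # Parameter B
    {"c1", "c2"},        # Parameter C
]

# Every pair of values from any two parameters must appear in at least one test case.
missing = []
for i, j in combinations(range(len(parameters)), 2):
    for pair in product(parameters[i], parameters[j]):
        if not any((case[i], case[j]) == pair for case in test_cases):
            missing.append(pair)

print("All pairs covered" if not missing else f"Missing pairs: {missing}")
print(f"{len(test_cases)} pairwise test cases vs {2 * 3 * 2} exhaustive combinations")
```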
Sampling
Sampling techniques in software testing involve selecting a subset of test cases from a larger pool to represent the entire population of possible test cases. These techniques are particularly useful when it's impractical to test every possible combination or scenario due to time or resource constraints. Let's illustrate one sampling technique with a simple example involving the same three parameters: A, B, and C.
Consider the following parameters with their respective values:
Parameter A: {a1, a2}
Parameter B: {b1, b2, b3}
Parameter C: {c1, c2}
Now, we'll demonstrate a sampling technique called Random Sampling, where test cases are selected randomly from the entire pool of possible combinations.
Here's an example of randomly selected test cases:
Test Case | Parameter A | Parameter B | Parameter C
-----------------------------------------------------
1 | a1 | b2 | c1
2 | a2 | b3 | c2
3 | a1 | b1 | c2
In this table:
Each row represents a randomly selected test case.
The values of Parameter A, Parameter B, and Parameter C are chosen randomly from their respective sets of values.
The selection process is random, but the selected test cases aim to provide reasonable coverage of the possible combinations.
Random sampling is just one of many sampling techniques. Other techniques include Stratified Sampling, where the population is divided into strata and samples are taken from each stratum, and Systematic Sampling, where samples are selected at regular intervals from an ordered list of the population.
Sampling techniques provide a pragmatic approach to test case selection, allowing testers to achieve a balance between thoroughness and resource efficiency. They are particularly valuable when exhaustive testing is not feasible due to constraints such as time, budget, or complexity.
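Here is a minimal sketch of random sampling over the same parameter space, assuming we can only afford three of the twelve possible combinations; the seed and sample size are purely illustrative.

```python
import random
from itertools import product

# Full population: every combination of parameter values (2 x 3 x 2 = 12).
population = list(product(["a1", "a2"], ["b1", "b2", "b3"], ["c1", "c2"]))

# Random sampling: pick a fixed-size subset without replacement.
random.seed(42)  # illustrative seed so the run is reproducible
sample = random.sample(population, k=3)

for i, (a, b, c) in enumerate(sample, start=1):
    print(f"Test Case {i}: Parameter A={a}, Parameter B={b}, Parameter C={c}")
```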
Error guessing method
The Error Guessing technique is an informal and heuristic testing method used by experienced testers to uncover defects in software based on intuition, experience, and knowledge of common errors and pitfalls. Unlike formal testing techniques that rely on predefined test cases or systematic approaches, Error Guessing relies on the tester's intuition and ability to anticipate potential issues based on past experiences and domain knowledge.
Here's how the Error Guessing method works:
Identify Potential Error Sources: Testers use their knowledge and experience to identify potential areas in the software where errors might occur. This could include areas of complex logic, boundary conditions, input validation, error handling mechanisms, or any other part of the system that may be prone to defects.
Formulate Test Scenarios: Once potential error sources are identified, testers create test scenarios or cases based on their intuition about where defects are likely to occur. These test scenarios are often not formally documented but are instead based on the tester's understanding of the system and its behavior.
Execute Test Scenarios: Testers execute the identified test scenarios to uncover defects or errors in the software. During execution, testers actively look for unexpected behavior, anomalies, or deviations from expected results that could indicate the presence of defects.
Iterative Process: Error Guessing is an iterative process where testers continuously refine their test scenarios based on the defects found during testing. As new defects are discovered, testers update their understanding of potential error sources and adjust their testing approach accordingly.
Key aspects of the Error Guessing method include:
Experience and Expertise: Error Guessing relies heavily on the experience, intuition, and domain knowledge of the testers. Experienced testers are often better equipped to anticipate potential issues and identify areas of the software that are more likely to contain defects.
Informality: Error Guessing is an informal and ad-hoc testing technique. Test scenarios are often not documented in detail, and testing is driven by the tester's judgment and intuition rather than following a predefined process.
Supplementary Technique: While Error Guessing can be a valuable testing method, it is typically used in conjunction with other formal testing techniques such as boundary testing, equivalence partitioning, or exploratory testing to provide comprehensive test coverage.
Overall, Error Guessing is a valuable technique for uncovering defects in software, particularly when used by experienced testers who have a deep understanding of the system and its potential weaknesses. However, it should not be relied upon as the sole testing method, and it is important to supplement it with other formal testing techniques for thorough test coverage.
Here's a simplified diagram to illustrate the concept:
+----------------------------+
|                            |
|       Error Guessing       |
|                            |
+-------------+--------------+
              |
              |  Experience, intuition,
              |  and domain knowledge
              |
+-------------v--------------+
|                            |
|     Identify Potential     |
|       Error Sources        |
|                            |
+-------------+--------------+
              |
              |  Identified areas
              |  prone to defects
              |
+-------------v--------------+
|                            |
|  Formulate Test Scenarios  |
|                            |
+-------------+--------------+
              |
              |  Test scenarios
              |  based on intuition
              |
+-------------v--------------+
|                            |
|   Execute Test Scenarios   |
|                            |
+-------------+--------------+
              |
              |  Look for anomalies,
              |  unexpected behavior
              |
+-------------v--------------+
|                            |
|      Uncover Defects       |
|                            |
+----------------------------+
In this simplified diagram:
Error Guessing is the central process represented.
Experience, intuition, and domain knowledge drive the identification of potential error sources and the formulation of test scenarios.
Testers identify potential error sources based on their experience and understanding of the system.
Test scenarios are formulated based on intuition about where defects are likely to occur.
Test scenarios are then executed, and testers actively look for anomalies and unexpected behavior.
Defects are uncovered during the execution of test scenarios.
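Although error guessing is driven by intuition rather than a formal procedure, the probes it produces can still be captured as small ad-hoc checks. The sketch below throws a few classically error-prone inputs at a hypothetical income-field validator; both the validator and the list of guesses are invented for illustration.

```python
# A sketch of error-guessing-style probes against a hypothetical input validator.
# The validator and the guessed inputs are illustrative only.

def validate_income_field(raw):
    """Stand-in validator: accepts values that parse to a non-negative number."""
    value = float(raw)          # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("income cannot be negative")
    return value

# Inputs an experienced tester might "guess" as likely to break the system.
guesses = ["", "   ", "-1", "abc", "1e309", "999999999999999999999", "12,000", None]

for raw in guesses:
    try:
        result = validate_income_field(raw)
        print(f"{raw!r}: accepted as {result} -- worth a closer look")
    except (TypeError, ValueError) as exc:
        print(f"{raw!r}: rejected ({type(exc).__name__})")
```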