Today, let's start with Regression Testing. Can anyone tell me what regression testing is?
Is it when we check to make sure new code hasn't broken the old code?
Exactly! Regression Testing verifies that new changes haven't disrupted existing functionality. It's crucial to perform this after any code change. What are some tools we might use for regression testing?
Selenium is a good one, right?
Correct! Selenium, TestNG, and QTP are great tools for this. Remember, a simple way to remember its purpose is to think of it as a 'safety net' after updates. Now, when should we perform regression testing?
After every code change or bug fix?
Exactly right! Let's summarize: Regression Testing ensures new changes don't break existing features and is performed after every code alteration.
Next up is Smoke Testing. What would you define smoke testing as?
Isn't it just a quick test to see if the app works at all?
Yes! It's a quick check to ensure that the core functionalities of the application are stable enough for further testing. The name comes from hardware testing: if a device 'smokes' when powered on, there's a problem. What might be an example of smoke testing in action?
Like checking if we can log into the application?
Perfect example! Logging in successfully is a core functionality check. And remember, smoke testing is typically done early after deployment. Let's finish with a recap of its purpose.
Moving on to User Acceptance Testing. Can someone explain what it involves?
It's when actual users validate if the system meets their needs, right?
Exactly! UAT demonstrates that the system meets business requirements. Who usually performs this testing?
Business users and stakeholders?
That's correct! UAT is performed by business users, ensuring real-world workflows are validated. It's sometimes even done by Business Analysts. Let's summarize: UAT checks system readiness for production based on user needs.
The section elaborates on common testing types, including Regression, Smoke, Sanity, UAT, and Performance Testing. Each type is framed within its context, explaining when it is used, its purpose, and some tools that can be utilized for effective implementation.
This section details various common types of testing used in Quality Assurance (QA) processes. Each testing type serves a specific purpose and is applied at certain stages of the software development lifecycle. Key types discussed include Regression, Smoke, Sanity, User Acceptance (UAT), and Performance Testing.
In summary, choosing the right testing type is critical for aligning testing with the context and goals of the project, ensuring both functionality and performance are validated effectively.
Regression Testing
Purpose: Verify that new changes haven't broken existing functionality.
When: After every code change, bug fix, or release
Tools: Selenium, TestNG, QTP
Regression Testing is a type of testing conducted to ensure that any new changes made to the software don't negatively impact the existing features. This is crucial because updates or bug fixes can unintentionally introduce new issues. Regression tests are performed after every code change, bug fix, or release to ensure that everything is still working as expected. Tools like Selenium, TestNG, and QTP are commonly used to automate these tests, making it easier and quicker to run them repeatedly.
Imagine you are a chef who just updated your famous pasta recipe by adding a new seasoning. After making that change, you would want to taste the entire dish again to ensure the new flavor doesn't ruin the balance of ingredients you previously developed. That's similar to regression testing.
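The idea above can be sketched as a tiny automated regression suite. Everything here is illustrative: `apply_discount` is a hypothetical function under test, and the checks are written as plain pytest-style functions rather than with Selenium or TestNG, which drive browsers and Java projects respectively.

```python
# A minimal regression-suite sketch (illustrative only).
# `apply_discount` is a hypothetical stand-in for code that was just changed.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Existing behaviour that must keep working after any change.
def test_existing_flat_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_existing_no_discount():
    assert apply_discount(59.99, 0) == 59.99

# A test added alongside a new change or bug fix; the two tests above
# are re-run together with it, which is the essence of regression testing.
def test_new_rounding_behaviour():
    assert apply_discount(19.99, 15) == 16.99
```

Running the whole file with pytest after every change re-executes the old cases together with the new one, acting as the 'safety net' described above.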
Smoke Testing
Purpose: A quick set of tests to ensure the core system is stable enough for further testing.
Example: Can users log in? Can the homepage load?
Analogy: "Does the build smoke when turned on?"
Smoke Testing is performed to quickly assess whether the main functionalities of the application are working correctly before more detailed testing begins. This type of testing acts like a preliminary check to confirm that the software build is stable enough for further testing phases. It often includes basic tests such as checking if users can log in or if the homepage loads properly. The term 'smoke testing' comes from hardware tests where you would check if the device shows any signs of failure, like smoke, when you turn it on.
Think of smoke testing like the pre-flight checks before a plane takes off. The pilots and crew run simple checks to confirm the engines start and the navigation system works before departure. They want to make sure everything essential is functional before moving on to more complex checks.
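A smoke run can be sketched as a short, fail-fast checklist of core functions. The `DemoApp` class and its `login`/`load_homepage` methods are hypothetical stand-ins; in a real project each check would be a Selenium step or an HTTP health probe against the deployed build.

```python
# A fail-fast smoke-test sketch (illustrative only).

class DemoApp:
    """Toy stand-in for a freshly deployed build."""
    def login(self, user, password):
        return user == "demo" and password == "demo123"

    def load_homepage(self):
        return "<html>home</html>"

def run_smoke_tests(app):
    """Return True only if every core check passes; stop at the first failure."""
    checks = [
        ("user can log in", lambda: app.login("demo", "demo123")),
        ("homepage loads", lambda: bool(app.load_homepage())),
    ]
    for name, check in checks:
        if not check():
            print(f"SMOKE FAIL: {name} - build rejected")
            return False
        print(f"smoke ok: {name}")
    return True
```

If this gate fails, the build is rejected and no deeper testing is attempted on it.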
Sanity Testing
Purpose: A focused test to check if a specific function or bug fix works.
Example: A defect in the cart page is fixed → test the cart flow only
Note: Sanity = "Are we sane to proceed with deeper testing?"
Sanity Testing is a type of testing conducted to verify that a specific function or bug fix works as intended. Unlike broader tests, sanity tests focus on one particular area of the application after changes have been made. For example, if a defect was reported and fixed on the shopping cart page, the testing team would perform sanity tests to verify that the cart now functions correctly before moving on to further testing. It acts as a checkpoint to ensure that it is appropriate to proceed with deeper testing.
Imagine you just fixed a leaky faucet in your kitchen. Before you start using the sink for washing dishes or preparing food, you check to see if the leak is indeed resolved. If it is, then you can confidently proceed with using your kitchen normally. That's the essence of sanity testing.
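A sanity check after a fix might look like the sketch below. The `Cart` class and its defect (a removed item still counting toward the total) are invented for illustration; note that only the cart flow is exercised, nothing else.

```python
# A focused sanity check after a hypothetical cart bug fix (illustrative only).

class Cart:
    """Toy cart; the fixed defect was that removing an item left its price in the total."""
    def __init__(self):
        self.items = {}

    def add(self, name, price):
        self.items[name] = price

    def remove(self, name):
        self.items.pop(name, None)  # the fix: actually drop the item

    def total(self):
        return sum(self.items.values())

def sanity_check_cart():
    """Exercise only the repaired cart flow before resuming deeper testing."""
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 1.25)
    cart.remove("pen")
    # After the fix, the removed item must no longer count toward the total.
    assert cart.total() == 12.50
    return True
```

If this narrow check passes, the team is "sane to proceed" with broader regression runs.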
UAT (User Acceptance Testing)
Purpose: Ensure the system meets business needs and is ready for production.
Performed By: Business users, stakeholders, sometimes Business Analysts
Focus Areas:
- Real-world workflows
- Usability and behavior
- Business rule validation
User Acceptance Testing, or UAT, is the final phase of the testing process before an application goes live. Its main purpose is to ensure that the system meets the business requirements and that it is ready for actual use. This phase is performed by end users, stakeholders, or business analysts who will validate real-world workflows, usability, and adherence to business rules. This testing helps confirm that the application behaves as expected in practical scenarios.
Think of UAT like a dress rehearsal before a big play. The actors run through the entire performance to ensure everything runs smoothly and adheres to the script. If the audience (in this case, the stakeholders) finds any issues, they can address them before the opening night (the launch).
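UAT itself is carried out manually by business users, but its acceptance criteria are often recorded as Given/When/Then scenarios. The sketch below encodes one hypothetical business rule ("orders above $500 need manager approval") as an executable check; the rule, the threshold, and the function names are assumptions for illustration, not from the source.

```python
# A Given/When/Then acceptance-scenario sketch (hypothetical business rule).

def requires_approval(order_total, threshold=500.0):
    """Assumed business rule: orders above the threshold need manager approval."""
    return order_total > threshold

def acceptance_scenario():
    # Given an order of $650
    order_total = 650.0
    # When the order is submitted
    needs_approval = requires_approval(order_total)
    # Then it must be routed for manager approval
    assert needs_approval is True
    return "scenario passed"
```

In practice a business user walks through the same Given/When/Then steps by hand in the UAT environment and signs off when each scenario behaves as expected.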
Performance Testing
Purpose: Measure how the system behaves under load or stress.
Subtypes:
- Load Testing: Normal expected user load
- Stress Testing: Beyond normal limits
- Spike Testing: Sudden increase in load
- Soak Testing: Sustained usage over time
Tools: Apache JMeter, LoadRunner, Gatling
Performance Testing is designed to evaluate how a system performs under various conditions, especially when subjected to varying levels of load or stress. The goal is to identify potential bottlenecks in the system and ensure that it can handle expected and unexpected user traffic. Performance testing breaks down into subtypes: Load Testing checks how the system handles a normal load; Stress Testing assesses how it performs under extreme conditions; Spike Testing analyzes how it reacts to sudden increases in load; and Soak Testing examines how it performs over a prolonged period under a standard load. Tools such as Apache JMeter and LoadRunner are frequently used in these tests.
Consider performance testing like assessing the endurance of an athlete. Coaches measure how well a runner performs under different conditionsβnormal races, extreme weather, sudden bursts of speed, and long-distance endurance. Just as athletes need to train for various scenarios, software should be tested to ensure it functions optimally in all expected and unexpected conditions.
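The subtypes above differ mainly in how much load is applied and for how long. As a rough illustration, the sketch below uses only the Python standard library to fire concurrent requests at a stand-in function and measure throughput; real performance tests would use JMeter, LoadRunner, or Gatling, and `handle_request` is a hypothetical workload.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A crude load-test harness sketch (illustrative only).

def handle_request(n):
    """Simulated unit of work standing in for a real request handler."""
    return sum(i * i for i in range(n))

def load_test(users, requests_per_user, work=10_000):
    """Fire users * requests_per_user calls concurrently and time them."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(handle_request,
                                [work] * (users * requests_per_user)))
    elapsed = time.perf_counter() - start
    throughput = len(results) / elapsed  # requests per second
    return len(results), elapsed, throughput

# Example: a run at a "normal" expected level (load testing).
completed, elapsed, rps = load_test(users=5, requests_per_user=4)
print(f"{completed} requests in {elapsed:.3f}s ({rps:.0f} req/s)")
```

Raising `users` well beyond the expected level turns the same harness into a crude stress or spike test, and running it in a long loop approximates a soak test.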
Key Concepts
Regression Testing: Verifies that new changes haven't broken existing functions.
Smoke Testing: A preliminary check for stability of core functionalities.
UAT: Ensures the system meets business needs through user testing.
Performance Testing: Measures system performance under various loads.
Real-World Examples
Regression Testing is done after a new feature is added to ensure older features still function as intended.
Smoke Testing includes checking if the application launches and fundamental functionality such as login works.
Memory Aids
When the code's on a new stage, we test through a page; Regression checks, it's all in the specs, to ensure no old errors engage.
Imagine a baker who adds a new flavor to a cake. Before serving it to customers, he always takes a small bite to ensure the old cake flavors haven't gone bad; that's like Regression Testing!
For the types of testing: R for Regression, S for Smoke, S for Sanity, U for User acceptance, and P for Performance. Remember: 'RSS UP!'
Key Terms
Regression Testing: A type of testing to confirm that changes in the code do not adversely affect existing functionalities.
Smoke Testing: A preliminary test to check the core functionalities of an application before proceeding with further testing.
Sanity Testing: Focused testing to verify specific functionalities after bug fixes.
User Acceptance Testing (UAT): Testing conducted by end users to validate that the system meets business requirements.
Performance Testing: Testing to evaluate how the system performs under various loads and conditions.