The definitions of Smoke testing, Sanity testing and Regression testing – where do they fit in the applied testing process?
Smoke testing is a type of software testing performed after a software build to ensure that the most important functions work. The result of this testing is used to decide whether a build is stable enough to proceed with further testing.
Sanity testing is a kind of software testing performed after receiving a software build with minor changes in code or functionality, to make sure that the changes or added functions work as expected. If the sanity test fails, the build is rejected to save the time and cost of more rigorous testing.
When the software is changed or updated, we need to perform regression testing (running all test cases) to make sure that the fixes or enhancements have not affected the other parts of the application.
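The three scopes can be sketched in plain Python. This is a minimal illustration with hypothetical feature names (`login`, `search`, `apply_coupon`) invented for the example, not part of any real application:

```python
# Three test scopes over a toy e-commerce module (hypothetical features).

def login(user, password):
    """Critical feature: only the demo account may log in."""
    return user == "demo" and password == "secret"

def search(keyword, catalog):
    """Critical feature: case-insensitive substring search."""
    return [item for item in catalog if keyword.lower() in item.lower()]

def apply_coupon(total, code):
    """Minor new feature: 10% off with code 'SAVE10'."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

def smoke_suite():
    # Smoke testing: only the most important functions, on every new build.
    assert login("demo", "secret")
    assert search("book", ["Notebook", "Pen"]) == ["Notebook"]

def sanity_suite():
    # Sanity testing: only the changed/added function (the coupon here).
    assert apply_coupon(100.0, "SAVE10") == 90.0
    assert apply_coupon(100.0, "BADCODE") == 100.0

def regression_suite():
    # Regression testing: every test case, old and new.
    smoke_suite()
    sanity_suite()

regression_suite()
print("all suites passed")
```

The point of the sketch is only the difference in coverage: smoke touches the key features, sanity touches the changed one, regression runs everything.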
4. Smoke testing vs Sanity testing vs Regression testing – the similarities:
- To eliminate as many bugs as possible in the STLC (Software Testing Life Cycle).
- All of them can be performed manually or with automation.
5. Smoke testing vs Sanity testing vs Regression testing – the differences:
Smoke testing:
- Checks the functionality of the key/important functions.
- Its purpose is to validate that the product is stable enough to transfer to the next testing phase.
- In the STLC, smoke testing is always required.
Sanity testing:
- Checks the functionality of the newly added function.
- Its purpose is to validate that the new function works as the customer expects.
- If the software is already stable (smoke testing has passed), sanity testing can be performed alone. If not, after a new function is added the tester needs to perform smoke testing first, then sanity testing.
Regression testing:
- Checks the functionality of all the functions.
- Its purpose is to verify that all functions still work when the software has been changed or updated.
- Depending on the timeline and resources, regression testing can be carried out or skipped.
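In practice, teams often keep one test code base and tag each case, so that a suite is just a filter over the tags (pytest's markers work the same way). A minimal sketch with invented test names:

```python
# Hypothetical test registry: each test carries tags; a run filters on a tag.
# Regression = everything; smoke/sanity = subsets.

def test_login_works():      return True   # key feature
def test_search_works():     return True   # key feature
def test_coupon_discount():  return True   # newly added minor feature

REGISTRY = [
    (test_login_works,     {"smoke", "regression"}),
    (test_search_works,    {"smoke", "regression"}),
    (test_coupon_discount, {"sanity", "regression"}),
]

def run_suite(tag):
    """Run every registered test carrying the given tag; return names run."""
    ran = []
    for fn, tags in REGISTRY:
        if tag in tags:
            assert fn(), f"{fn.__name__} failed"
            ran.append(fn.__name__)
    return ran

print(len(run_suite("smoke")))       # 2 key-feature checks
print(len(run_suite("sanity")))      # 1 changed-feature check
print(len(run_suite("regression")))  # 3 = all test cases
```

This also shows why regression is the largest run: its tag is on every case, which is why it is the one that gets skipped when time is short.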
Picture 1: The importance of performing Smoke testing, Sanity testing and Regression testing
Picture 2: A comparison of test case quantity between Sanity testing, Smoke testing and Regression testing
II. Where are they in the applied testing process?
1. Software under development:
Consider the example below:
A customer asked company A to build an e-commerce website with a full set of features for online business.
Company A follows the Agile method, so they take the steps below:
- Define the backlog: the customer needs an e-commerce website for online business.
- Define user stories:
a. A user can create an account and log in again when needed.
b. The website should have a logical/friendly item list for finding and buying products.
c. A customer can choose an item and purchase it easily with many payment methods.
d. The website should have a [help] feature to assist customers when they need it.
(There will be many more user stories in a real project!)
With these user stories, we create 2 sprints: sprint 1 for stories a and b, sprint 2 for stories c and d.
Each sprint has 5 steps: requirement analysis (analysing stories a and b), design, development, testing and deployment. In Agile, the tester joins every step; however, I will concentrate on the testing step, which needs the tester the most and gives a clear view of where smoke testing, sanity testing and regression testing are applied.
We begin with functional testing at the system testing level. After each function/feature serving user stories a and b passes (or has been fixed by development), the software moves on to non-functional testing at the system testing level. Before non-functional testing is performed, we have to run smoke testing over all the important functions/features to make sure they still work well.
When the smoke test passes, there are two situations:
Situation 1: The customer wants to add a minor, non-critical feature that needs only a little extra code. The developer delivers a new build with that minor feature to the tester. The tester only needs to test that feature to make sure it works as the customer expects; that kind of test is sanity testing. When time allows, the tester then performs regression testing (usually automated, to save time), running all the test cases (including the sanity test cases) to make sure the added feature has not broken other features. If time does not allow, regression testing is skipped and the tester passes the software on to non-functional testing.
Situation 2: The customer does not need to add anything; the tester passes the software straight to non-functional testing.
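The two situations reduce to a small decision: sanity when a minor feature was added, regression on top of that only when time permits. A sketch of that logic, with the suite callables as hypothetical placeholders:

```python
# Decision logic for the two situations above (suite callables are stand-ins;
# a real suite would raise or fail the run on a broken build).

def handle_new_build(minor_feature_added, time_allowed,
                     sanity_suite, regression_suite):
    """Return the list of test phases run before non-functional testing."""
    phases = []
    if minor_feature_added:                  # Situation 1
        sanity_suite()
        phases.append("sanity")
        if time_allowed:                     # regression only when time permits
            regression_suite()
            phases.append("regression")
    # Situation 2 (nothing added) falls straight through
    phases.append("non-functional")
    return phases

noop = lambda: None
print(handle_new_build(True, True, noop, noop))
print(handle_new_build(False, True, noop, noop))
```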
- Finishing sprint 1 and finishing sprint 2:
After testing and finishing sprint 2, the tester needs to run regression testing over sprint 1 to make sure the functions/features from sprint 1 still work well now that the functions/features of sprint 2 have been created.
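The cross-sprint point above can be stated as a scope rule: the regression run after sprint 2 replays sprint 1's cases as well as sprint 2's own. A tiny sketch with invented test case names:

```python
# Regression scope grows with each finished sprint (hypothetical case names).

SPRINT_1_CASES = ["test_create_account", "test_login", "test_item_list"]
SPRINT_2_CASES = ["test_purchase", "test_help_feature"]

def regression_scope(finished_sprint):
    """Return every test case the regression run must cover so far."""
    scope = list(SPRINT_1_CASES)          # sprint 1 is always replayed
    if finished_sprint >= 2:
        scope += SPRINT_2_CASES           # sprint 2 adds its own cases
    return scope

print(regression_scope(2))
```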
2. Maintenance software/maintenance product:
After the system/software has been deployed for a period of time, the customer will always have feedback: bug fixes, new functions/features, and so on. The company has to deliver a new build to address these requests.
When a new build is released with a new function/feature, the tester needs to perform sanity testing on that function/feature to make sure it works as expected. After that, the tester performs smoke testing to validate the main functions/features now that the new one has been added. Finally, the tester runs regression testing (usually automated) over all the test cases (including the sanity test cases) to make sure every function/feature works as expected.
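The maintenance order described above, sanity first, then smoke, then a full regression run, can be sketched as a fixed pipeline (the suite callables are hypothetical placeholders):

```python
# Fixed suite order for a maintenance build (placeholder suites; a real
# suite would raise or report failures instead of doing nothing).

def maintenance_test_order(sanity_suite, smoke_suite, regression_suite):
    """Run the three suites in maintenance order; return the order used."""
    order = []
    for name, suite in (("sanity", sanity_suite),
                        ("smoke", smoke_suite),
                        ("regression", regression_suite)):
        suite()
        order.append(name)
    return order

noop = lambda: None
print(maintenance_test_order(noop, noop, noop))
```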
In the STLC test execution phase, whether the software is under development or in maintenance, testing moves from phase to phase (GUI testing – usability testing – functional testing – non-functional testing…), and we need to perform smoke testing before transferring the software from one phase to the next. When the software has only a minor change – a little extra code for a minor, non-critical feature – the tester just needs to perform sanity testing. If time allows, whether the build has a minor or a major change, we need to perform regression testing to make sure the other functions/features have no defects.