|April 16, 2018|
In QA testing, Negative Tests are used to make sure the application behaves correctly when actions are performed outside of normal expectations.
An example would be entering Unicode characters in a phone field. You may already test for alphanumeric values - but what happens if someone accidentally types in a Unicode value?
Negative Tests are a good way to see how solidly the code is written. Did the developer think of all the edge cases? For example, did the developer account for not allowing a single space in a required field? (Users do this so they don't have to put in information - such as phone numbers.)
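To make the two examples above concrete, here is a minimal sketch of what negative tests for a phone field might look like. The validator function and its rules are hypothetical - the post doesn't reference any specific code - but the rejected inputs are exactly the cases described: a single space in a required field and a stray Unicode character.

```python
import re

# Hypothetical phone-field validator, used only to illustrate negative tests.
# Assumed rule: 7-15 digits, optional leading "+", common separators allowed.
def is_valid_phone(value: str) -> bool:
    if not value or not value.strip():
        return False  # reject empty or whitespace-only input (the "single space" trick)
    digits = re.sub(r"[\s().-]", "", value)  # strip spaces, parens, dots, dashes
    if digits.startswith("+"):
        digits = digits[1:]
    return digits.isdigit() and 7 <= len(digits) <= 15

# Negative tests: each of these inputs should be rejected.
assert not is_valid_phone(" ")              # single space in a required field
assert not is_valid_phone("\u260e555-0100") # stray Unicode character
assert not is_valid_phone("abc-def-ghij")   # alphabetic input

# One positive test for contrast: a well-formed number should pass.
assert is_valid_phone("+1 (555) 010-0100")
```

In a real suite these would live in a test framework such as pytest rather than bare asserts, but the idea is the same: deliberately feed the field input it was never meant to receive.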
Over time, your Test Case Repository should contain many more Positive Test Cases than Negative Test Cases - most of your testing verifies that the expected outcomes actually happen.
When testing a new product, however, you should have more Negative Test Cases than Positive ones. QA should be making sure that there are no bad surprises when the product is launched.
You won't account for every scenario, but you can learn a lot from past projects and from the way certain developers write their code.
Most companies have done a very poor job of identifying and documenting negative test scenarios.
A good source of Negative testing data is the Bug Magnet browser extension.
Feel free to leave a comment about this post.