Earliest: November 26, 2017 | Latest: December 2, 2019 | Total: 89
Test Plan Guidelines
A test plan provides a detailed outline of testing for a particular project in a particular release.
Having a carefully written test plan can help ensure a feature gets well tested in every release cycle.
There are several key parts of the Test Plan:
This is where the QA Lead identifies the various conditions that make it possible to test the functionality.
This is where you list the conditions that can't be known for sure ahead of time and about which you have to make assumptions. These need to be stated so that QA can evaluate whether they are reasonable.
- Database contains the correct data.
- SQL Integration Service validates the data being inserted into the database.
- Users can only set up one category for the Employee Update page.
Dependencies that this test plan has on external conditions.
- Nightly backup of external HTML XML Feed data
- Intranet User Authentication
Risks & Issues
Conditions that add risk to the quality of the functionality or that may impact the accuracy of the test plan.
Example Risk Points
- SQL Server Integration Service is unable to read the XML Feed
- Poorly written specs
- The business owner is not clearly defined for this project.
- The current database schema is not available.
- QA isn't aware of the technology being used to display the data on the page.
In several paragraphs, explain in detail the feature being tested. Links to external spec documents and technical documents should be referenced here.
A list of project developers should be listed here. These are the people who should get tickets for any bugs found when testing this product.
Testing Outline and Summaries
This section should list each test case's title and summary. It helps to sort the test cases by some sort of topic.
Example Test Case Summaries
- Validate that the XML passes the XML validator test
- Validate that the intranet page loads after the nightly XML update
- Validate that the intranet page loads if there are no new hires
- Validate that the intranet page loads if there are no upcoming anniversary dates
- Validate that if a user hasn't set a configuration, the page still loads
According to the National Institutes of Health:
A false negative is a test result that indicates a person does not have a disease or condition when the person actually does have it.
In Quality Assurance automation testing, the National Institutes of Health example could be rewritten as:
A false negative happens when a test is run and returns a success when it actually should have failed.
Why Good Tests Produce False Negatives
This can happen because of various reasons:
Hard Coded Information
Sometimes an automation test case may be written to use specific datasets and/or URLs. This type of test case doesn't take into account the actual paths that users may take. Taking shortcuts may seem like a good idea, but it's risky because the shortcut skips the true paths users follow.
Real Life Example
If someone changes a button's link URL and automation just goes to the URL without clicking the button, it will miss that change. The test will pass, but it should have failed.
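To make this concrete, here's a minimal sketch in plain JavaScript. All names and URLs are hypothetical, and `navigate` is just a stand-in for real browser navigation:

```javascript
// Simulated server: only /reports actually exists.
function navigate(url) {
  return url === "/reports" ? "ok" : "404";
}

// Hypothetical page model: a typo shipped in the button's href.
const page = { buttons: { viewReports: { href: "/reprots" } } };

// Brittle test: goes straight to a hard-coded URL, so it never sees the broken button.
const brittle = navigate("/reports"); // passes - a false negative

// Robust test: follows the user's path by reading the button's actual href.
const robust = navigate(page.buttons.viewReports.href); // fails - catches the bug
```

The brittle test reports success even though a user clicking the button would hit a 404.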
Not Checking for Errors
In some instances, an error might be shown to the user - or logged to the browser console - during an automation run. Since it doesn't directly impact the flow of the test case, the run is considered a "pass."
Real Life Example
Not a Challenging Test Case
I have seen some test cases take the soft route of testing. The test case doesn't check for data validation. Some test cases assume too much and don't intentionally trigger an error - such as by entering too much data in a text field.
Real Life Example
Ways to Avoid False Negatives
Four ideas to help reduce the chances of test cases returning False Negatives:
- As a clean up step in automation, check the console logs for errors.
- If using Ghost Inspector, watch the video in some of the critical path testing to see if there's anything out of the ordinary.
- Code review automation test cases.
- At one of my previous jobs, if an automation test passed for three consecutive releases, it became a candidate for an audit. The audit should be done by someone who didn't write the initial case and should consider ways the test could be more productive.
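The first idea - checking console logs as a cleanup step - can be sketched like this. The entry shape mirrors what Selenium-style browser logs return, but the field names here are assumptions:

```javascript
// Return any console entries that indicate an error
// (Selenium-style browser logs report these with level "SEVERE").
function findConsoleErrors(entries) {
  return entries.filter((entry) => entry.level === "SEVERE");
}

// Hypothetical log entries captured at the end of an automation run.
const entries = [
  { level: "INFO", message: "page loaded" },
  { level: "SEVERE", message: "Uncaught TypeError: render is not a function" },
];

const errors = findConsoleErrors(entries);
// Fail the run if anything was found, even though the UI flow itself "passed".
if (errors.length > 0) {
  console.log(`Console errors found: ${errors.length}`);
}
```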
Automation Encountered a New Feature
Ideally, the automation team should get a "heads up" on a new feature being shipped in the current release and its areas of impact. Every once in a while, though, the automation team will be left out of the code review or show-me meetings.
Poorly Written Test
The automation steps didn't take into account other situations that may happen when the tests are run.
Something happened that caused the server to behave strangely - such as a slow network or a full disk. These types of issues are unavoidable.
Best Cheap Beer Social Event
All work and no play makes the QA testing job boring. One of the popular social events at one of the companies I worked at was something called the "Best Cheap Beer Social Event."
The goal of this popular event is to do a taste test to find which of the cheapest beers tastes the best. Participants buy their own beer, and someone pours the beers into plastic cups so that people can do a blind taste test.
This is a fun social event to get people to think about what makes beer taste good - and to try to figure out what beer they are tasting.
Here's what to do:
- Choose your favorite selection from [Beer Advocate's Bottom 100 Beers](https://www.beeradvocate.com/lists/bottom/) (or something else - it doesn't really matter, it just has to be cheap)
- Let the organizer know what you plan to bring so there's no overlap.
- Bring in a 6-pack of your beer on taste day (if it only comes in a 4-pack, then get two)
- We'll have a blind taste test and give each beer a 1-10 rating in the following categories:
Then we'll get someone in product to make some charts and graphs with the results.
Sample Beers Tried in the Past
Here's a list of some of the beers people have brought in the past. I won't give away which beer was the winner, but I will say that Keystone Light rated pretty well in the blind taste test.
- Miller 64
- High Life
- Miller Lite
- Miller Fortune
- The Beast
- Steel Reserve
- Natty Light
- Keystone Light
Did you know that you can create dynamic bookmarks using Bookmarklets?
Some "Real World" Examples:
Atlassian Jira - Show me all the new issues created in the past 7 days. (This could easily be set up as a Dashboard widget - but that's a post for another day.)
Atlassian Jira - Show me all issues that have this week's tag, where the tag format might look like: deployment-2019-01-07
Google Photos - Show me all photos taken on this day of the year, or you can get fancy and say "Show me all photos taken 90 days ago."
Change the server in the URL - you can check what day it is and change the URL to point to production on release day.
Here's some sample code:
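A minimal sketch of the Jira "created in the past 7 days" bookmark, assuming Jira's standard JQL relative-date syntax; "jira.example.com" is a placeholder host:

```javascript
// Build a Jira search URL for issues created in the past 7 days.
// "jira.example.com" is a placeholder - substitute your own Jira host.
function recentIssuesUrl() {
  const jql = encodeURIComponent("created >= -7d ORDER BY created DESC");
  return "https://jira.example.com/issues/?jql=" + jql;
}

// As a bookmarklet, the whole thing collapses into a single "javascript:" URL:
// javascript:location.href='https://jira.example.com/issues/?jql='+encodeURIComponent('created >= -7d ORDER BY created DESC');
```

Because the date range is relative (`-7d`), the same bookmark stays current every time you click it.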
Engineering Liability of Code
Some QA engineers think that once a product hits production, they no longer have any ownership. They would be wrong.
As for engineers, they own the change until the product actually ships. So during test periods, QA would assign any issues related to the change to that particular developer. The developer can certainly defer the bug to someone else, but in most cases they should be the single point of contact for the change.
In my experience, I have found that developers will celebrate when code gets merged into the release branch. However, it isn't over yet. There could still be bugs/issues related to the change not discovered by initial testing.
Found this graphic on the WallpaperFool.com website. Imagine how much better code would be if all developers followed this?
Always be Updating the Test Case Repository
QA has ownership long after the product hits production. After a product/feature ships, it's the QA engineer's job to update the regression tests around the change.
If the change was small, such as a cosmetic image change, then there's no reason to go all out on automation and manual regression.
However, if the change is big, then it's the QA engineer's responsibility to make sure that automated regression test cases and manual test cases are updated. If they aren't, other QA engineers who run the tests may report the change as a bug.
When to Update Automation?
I believe that Automation should be updated once the feature is merged into the branch that's being shipped. (In some companies this would be the 'Master' branch.)
This way the automation test cases can still be run and may discover other issues that may be missed by manual QA.
Note: This topic came about when a developer once told me that they weren't responsible for a bug because the change was merged into the release branch with no issues. Wrong! You own it because you made the change. This doesn't mean that you have to fix it, but you should figure out how/why the bug was discovered.
A question that I often ponder: Is there a way to force Bookmarks to open in a secure window or as a different profile?
This is because there are some situations where I don't want to store cookie data. I want to access the site as if I were a new customer and see how the website functions.
Unfortunately, due to security rules, this is not possible. There is, however, a workaround.
Incognito Filter to The Rescue
Incognito Filter is a Chrome extension that forces a pre-defined set of URLs to open in Incognito mode instead of in a new tab or regular browser window.
You can define a URL simply by clicking on the extension icon and then clicking "Add Website."
You can get fancy by triggering the Incognito Filter functionality with a regular expression, such as qarocks$. To add one, simply go into Incognito Filter, click the "Show Options" button, type in qarocks$, and then click the "Add RegEx" button.
What this does is that any URL ending with qarocks will open in an incognito window.
Now with that set, simply edit any bookmark that you have and add qarocks at the end of the URL.
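The `$` in `qarocks$` is a regular-expression anchor meaning "end of string," which is why placement at the end of the bookmark URL matters:

```javascript
// "$" anchors the match to the end of the string, so only URLs
// that *end* with "qarocks" trigger the filter.
const pattern = /qarocks$/;

const atEnd = pattern.test("https://example.com/page#qarocks");    // URL ends with "qarocks"
const inMiddle = pattern.test("https://example.com/qarocks/page"); // "qarocks" is mid-URL
```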
Anyone that works in QA should be aware of the Broken Windows Theory.
The broken windows theory is a criminological theory that states that visible signs of crime, anti-social behavior, and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes.
While not directly related to software quality testing, it's important to understand how "a broken window" can make people feel that the area is neglected and, as a result, that its quality has gone downhill.
I would suggest rewording the above, for QA purposes to:
The broken windows theory is a quality assurance theory that states that visible signs of bugs, anti-quality behavior, and poor usability create an environment that encourages further bugs and complexity, including blocking issues.
Read the article in The Atlantic's "100 Years of Atlantic Stories" series: "Broken Windows: The Police and Neighborhood Safety."
Three things QA should take away from the Broken Windows story:
- Fixing the small bugs will give the appearance that the product is more stable - even though bigger bugs still exist.
- "If you take care of the little things, then you can prevent a lot of the big things"
- When testing functionality, everything matters. It's QA's responsibility to document and report why issues should be fixed.
What do you think?
Have you read the "Broken Window" story? Do you think it has some message to QA engineering?
Browser Cookie Size
Did you know that there is a limit on the length of a website cookie?
You might see the following when frequently visiting websites:
This happens because the total size of all the cookies being sent is greater than 4,096 bytes (4K). The exact limit varies by browser and version. You can find the technical limit for each browser in the Lifewire article "Learn the Maximum Size That a Web Cookie Can Be."
I have found that this happens on certain websites that have a lot of third-party connections.
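As a rough sketch, you can estimate the size of the `Cookie` request header by joining the name=value pairs the way the header does (this counts characters, which matches bytes for ASCII cookie values):

```javascript
// Estimate the byte size of the Cookie request header for a set of cookies.
// Cookies are sent as "name=value" pairs joined by "; ".
function cookieHeaderBytes(cookies) {
  const header = Object.entries(cookies)
    .map(([name, value]) => `${name}=${value}`)
    .join("; ");
  return header.length;
}

// Hypothetical cookies: "session=abc123; theme=dark" is 26 bytes -
// far under the ~4,096-byte limit, but dozens of third-party cookies add up fast.
const size = cookieHeaderBytes({ session: "abc123", theme: "dark" });
```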
Today's a short post because it's Memorial Day and time is better spent enjoying the day.
Massachusetts QA Fails
Finding QA "misses" that have happened in Massachusetts has been tricky. Every company has its issues, and not all of them make the headlines.
I did manage to find these two stories where QA testing would have helped avoid public embarrassment. I'll continue to look, and when I find others, I'll link them in this post:
Dorchester Historical Society Holiday Card - 2018
The Dorchester Historical Society sent out postcards in mid-November letting people know of an upcoming open house. The problem was the wording of the card. The card read: "We're dreaming of a white Dorchester."
Once people got the postcard, they started complaining on social media about the double meaning of the message. The Dorchester Historical Society realized the mistake and responded on social media that they were removing the graphic.
We are very truly sorry about our graphic used for this event. This was an unfortunate oversight on our part and the event photograph has been removed from our social media. We were simply changing the words to the classic Christmas carol and did not think it through properly. https://t.co/2Anki4JF6N - Dorchester Historical Society (@DotHistorical) November 26, 2018
Lesson We All Can Learn From This
It doesn't hurt to test the holiday card design in front of customers or employees. Find out what they think of it. It's not only a chance to see if the intended message gets across, but also to catch anything that needs to be corrected.
Massachusetts RMV Tells Thousands Their Licenses Are Being Suspended - 2018
In April 2018, the Massachusetts RMV sent notices to more than 9,000 drivers that their licenses would be suspended because of outstanding fines.
The notices were wrong: the recently installed software had a technical glitch. The RMV identified the problem immediately and sent out additional letters to drivers letting them know of the mistake. In addition, they put notices on their website and in the wait queue for callers to the RMV.
Lesson We All Can Learn From This
Always test, test, test new software implementations. It cost the RMV a lot of money to fix the issue - mailing additional letters and updating various customer points of contact.
You don't need to test that every single address is a qualified match - but a good sample would have indicated a problem with the DB query.
QA Testing (International)
Software Testing is very important to every organization. You don't want customers to have a bad experience and you don't want your organization to look bad.
Here are a couple of examples of where a bug slipped QA and made it into production:
Australia misspells "responsibility" on 46M new $50 bank notes.
Turns out that the Reserve Bank of Australia (RBA) spelled "responsibility" as "responsibilty" on millions of the new yellow notes.
The RBA has confirmed that there was at least one typo on the note and that it will be fixed in the next printing. For now, 46 million banknotes will have the small misprint.
FYI: The A$50 is one of the most popular notes that is circulated in Australia.
For more details
Airline Prices $16,000 Luxury Flight at $675 After "Ticketing Error" on New Year's Day
For a brief few hours on New Year's Day, Cathay Pacific Airways sold thousands of first- and business-class seats at huge discounts.
Seats that normally would go for $16,000 were selling for $675.
Cathay has publicly announced that it would honor all such seat prices.
Happy 2019 all, and to those who bought our good - VERY good surprise "special" on New Year's Day, yes - we made a mistake but we look forward to welcoming you on board with your ticket issued. Hope this will make your 2019 "special" too! - Cathay Pacific (@cathaypacific) January 2, 2019
Next week I'll highlight some costly QA mistakes in Massachusetts.