Quality Assurance Image Library

This is my carefully curated collection of Slack images, designed to perfectly capture those unique QA moments. Whether it's celebrating a successful test run, expressing the frustration of debugging, or simply adding humor to your team's chat, these images are here to help you communicate with personality and style.

August 26, 2025

Snoop on the Competition: Learning from Other People's Mistakes

One day, three fishermen sat by the river.

The first cast his net and said, "I only watch my own line. I don't care what anyone else catches. If I focus hard enough, I'll surely bring in the biggest fish."

The second laughed. He looked over at the fisherman downstream, saw him struggling to pull in his line, and said, "That rock always breaks the nets. He's about to lose his catch. I'll cast my net upstream instead."

The third fisherman quietly walked along the shore, studying where the reeds broke, where the currents swirled, and where fish had slipped from others' nets. Then he cast exactly once - and came back with a basket full of fish.


The Point of the Story

In software, too many teams behave like the first fisherman: focused only on their nets, their modules, their scope of control. They're diligent but blind to the lessons unfolding downstream.

But the cheapest problems to fix are the ones someone else already discovered.

Your competitors have already stumbled on rocks in the river - bugs, failures, outages. If you're paying attention, you don't need to stumble on the same ones.


How to "Snoop" the Right Way

  • Regulatory Reports: In regulated industries, detailed problem reports are public archives. They're goldmines of "what can go wrong."
  • Social Media & News: Customers are quick to share their frustrations online. Even a Twitter rant can point you to a hidden testing scenario.
  • Support Forums: Many companies run customer Q&A boards and even semi-public bug trackers. Every unanswered complaint is a test idea waiting for you.
  • Fake Post-Mortems: Take a competitor's public failure, pretend it happened to you, and run a retrospective. What would have let that slip through?

Closing Thought

You don't always need to cast your nets in deeper waters. Sometimes, you just need to watch where others' nets have torn. By snooping smartly, you protect your product from repeating history - and your users from catching the same old rocks.

Because in testing as in fishing: the wisest catch isn't the one you fight hardest for. It's the one you didn't lose in the first place.

August 19, 2025

Decouple Coverage from Purpose

A story for QA:

A farmer once owned two fields. In one, he planted wheat; in the other, corn.

When the wheat began to sprout, the farmer wanted to know if the soil was good. Instead of walking into the wheat field and testing the soil there, he brought in oxen, plows, and workers. He had them plow both fields, water them, and wait weeks to see how both wheat and corn responded.

The farmer eventually learned that yes, the wheat field soil was fine—but he wasted time, water, and effort, and the corn crop was trampled in the process.

His neighbor, seeing this, simply scooped a handful of soil from his wheat field, tested it directly, and moved on. The neighbor learned what he needed without disturbing the other crops.

The farmer shook his head. “I wanted to know only about the wheat—but I tested the whole farm.”

Lesson for QA

Many QA teams fall into the same trap. They confuse coverage (what part of the system you’re exercising) with purpose (what you’re trying to learn).

  • To check if one service talks to the database, they create sprawling end-to-end tests that drag in half the system.
  • To verify a business rule, they go through the UI when the real logic lives in a single function.
  • To prove acceptance criteria, they think they must run API tests—even if unit-level tests would suffice.

Like the farmer plowing the entire farm to check one field, the team wastes time, energy, and resources. The result? Slower tests, higher maintenance, and broader failures than necessary.

The lesson: decouple coverage from purpose.

  • A business rule can be tested at the unit level.
  • A technical check can be run end-to-end.
  • The goal defines the test’s purpose; the system level defines the coverage.

When you separate these two, you’ll find your testing becomes faster, clearer, and far more effective.
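To make the first of those points concrete, here's a minimal sketch in JavaScript. The discount rule and its numbers are hypothetical, but the shape is what matters: the business rule is exercised directly, with no database, no UI, and no farm-wide plowing.

// A business rule checked at the unit level - no UI, no database, no network.
// `applyDiscount` is a hypothetical pure function that holds the rule.
const assert = require('node:assert');

function applyDiscount(total, isLoyalCustomer) {
  // Hypothetical rule: loyal customers get 10% off orders over 100.
  return isLoyalCustomer && total > 100 ? total * 0.9 : total;
}

assert.strictEqual(applyDiscount(200, true), 180);  // rule applies
assert.strictEqual(applyDiscount(200, false), 200); // not a loyal customer
assert.strictEqual(applyDiscount(50, true), 50);    // under the threshold
console.log('discount rule holds');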

August 12, 2025

The Parable of the Bridge Builders

Why should developers be responsible for test automation?

Let me share a short story to explain.

Once, in a bustling valley, a team of builders was tasked with constructing a great bridge to connect two towns. The builders worked swiftly, laying stones and beams, focused only on raising the structure. "Our job is to build," they said. "Let others ensure it stands."

A separate group, called the Checkers, was hired to test the bridge’s strength after it was built. They came later, prodding and measuring, but their tools were unfamiliar to the builders, and their tests took time—too much time. By the time the Checkers found weak beams, the builders had moved on to new projects. Fixing the flaws meant unraveling work already done, coordinating with new builders, and delaying the bridge’s opening. The townspeople grew frustrated, waiting for a bridge that was either late or unsafe.

Sometimes, the Checkers, overwhelmed by the builders’ speed, couldn’t test everything. The bridge opened anyway, but creaks and cracks soon appeared. "Why bother testing?" some builders grumbled. "It slows us down." Others built bridges harder to test, with tangled beams and hidden joints, making the Checkers’ job even tougher. The bridge grew slower to build, costlier to maintain, and less reliable.

One day, a wise builder proposed a change. "Let us, the builders, test as we go," she said. "We know the beams, the stones, the design. We can check each piece as we place it." The team agreed. They learned to test their work, building with care and checking strength at every step. The bridge rose faster, sturdier, and ready for use. The townspeople crossed it with confidence, and the builders took pride in their craft.

What’s the lesson here?

When developers own test automation, they’re like builders testing their own work. They know the system best—its internals, its quirks. By writing and running tests as they code, they catch issues early, before others touch the work. This avoids duplicated effort, delays, and brittle end-to-end tests that slow delivery. Developers design testable systems, use familiar tools, and keep quality high. No bottlenecks, no blame—just a strong bridge, delivered fast.

Why not keep separate test automation specialists?

Specialists can’t keep up with a flood of code from many developers. They lack deep system insight, so their tests are slow and fragile. Their tools, often unfamiliar to developers, create silos and bottlenecks. The delivery pipeline either slows to a crawl or ships untested code, eroding trust in testing. It’s a lose-lose cycle that only breaks when developers take responsibility for testing.

How do we make this work?

Empower developers with the skills and tools to automate tests. Make testing part of their workflow, not an afterthought. Ensure they have the knowledge to write effective tests—unit, integration, whatever fits. When developers own the quality of their code, the whole pipeline flows smoother, faster, and stronger.

Like the builders who tested as they built, developers who own testing create systems that stand tall and serve well. It’s not just faster—it’s the only sustainable way to deliver quality at scale.

August 5, 2025

Explore Capabilities, Not Features: A Parable for QA Testers

Imagine a builder, hammer in hand, proudly showing off the house he had just completed. "Look at this kitchen," he said, "with its state-of-the-art appliances. And the bedrooms are spacious, just as requested. The bathrooms are luxurious, with all the modern fixtures."

The quality inspector nodded, checking off each item on their list. "Yes, everything matches the specifications perfectly."

But when the homeowners arrived, their excitement turned to disappointment. "Why is the kitchen so far from the dining area?" one asked. "It's going to be a hassle carrying hot dishes back and forth."

"And these bedrooms," another added, "they feel stuffy. Is there no way to get some fresh air in here?"

The inspector, puzzled, looked back at their checklist. Everything was correct, yet something was missing.

Then, the inspector had an idea. Instead of just checking features, they decided to experience the house as a homeowner would. They imagined waking up, getting dressed, making breakfast, and relaxing in the evening. As they walked through these scenarios, they noticed the inconveniences and discomforts.

"Ah," the inspector thought, "I see now. It's not just about the features; it's about how they enable people to live comfortably."

From then on, the inspector explored the capabilities of the house, ensuring it truly met the needs of those who would call it home.

The Lesson for QA

In software QA, features are like the rooms in that house - specific functionalities we're asked to test. But capabilities? They're about what users can do with the software, how it fits into their lives, and how it solves their problems.

When we base exploratory testing on new features or user stories, it's easy to fall into tunnel vision. We end up checking if the feature works as expected, rarely straying off the path. But exploratory testing shines when it tackles the unexpected - those hidden risks that don't show up on a checklist.

Focusing on capabilities gives us a broader yet focused lens. It's not just "Does this button work?" but "How does this help users achieve their goals?" By exploring what the software enables - or prevents - testers can uncover issues that feature-focused testing might miss.

Takeaway

So the next time you're exploratory testing, don't just tick off features. Step into the user's shoes and explore the capabilities those features provide. Balance the scope - neither too narrow nor too wide - and you'll deliver software that's not just functional, but truly valuable.

July 29, 2025

Describe What, Not How: The Story of the Two Chefs

Let me tell you a story.

There were once two chefs tasked with preparing a meal for a grand feast. The first chef, eager to impress, handed his apprentice a step-by-step scroll: “Chop the onions this thin, heat the pan to just this temperature, add everything in this order…” He explained every dash and sprinkle. His apprentice, overwhelmed, spent more time deciphering instructions than cooking. The meal, though technically precise, lacked flavor and joy.

The second chef took a different approach. She told her apprentice, “Tonight, we must serve a soup that warms the soul on a cold night, that smells like coming home.” She described the experience she wanted the diners to have, the outcome—not the process.

The apprentice chose his own path, using what he knew and could discover, tasting as he went. He focused on the warmth, the comfort, the delight. His soup was a triumph. The guests were moved. No one asked how it was made.

The Lesson

When we write acceptance criteria - or any instructions - there's a temptation to dictate every step. But in doing so, we risk stifling creativity, losing sight of our purpose, and tripping over complexity. Focus on what you want to achieve - the outcome, the experience, the why. Trust the team to discover the how. Describe what soup to serve, not how to stir the pot.

That’s the recipe for long-term QA success.

July 22, 2025

Connect Four Game for QA

As a QA Engineer, your day revolves around precision, problem-solving, and spotting patterns before they become issues. It's mentally taxing work that demands sharp focus and creative thinking. While coffee breaks and team huddles are great, there's an unexpected hero for keeping your mind in top shape: a Connect Four game in the break room. Here's why this classic game is a must-have for QA teams.

Hones Strategic Thinking

Connect Four is deceptively simple - drop discs, get four in a row. But winning requires anticipating your opponent's moves, planning several steps ahead, and adapting on the fly. These are the same skills QA engineers use when designing test cases or debugging complex systems. A quick game sharpens your ability to think strategically under pressure, a critical skill when hunting for elusive bugs or optimizing test coverage.

Boosts Pattern Recognition

QA engineers excel at spotting anomalies and patterns in software behavior. Connect Four trains your brain to recognize alignments - horizontal, vertical, or diagonal - while predicting how the board evolves. This mirrors the process of identifying edge cases or potential failure points in code. A few rounds during a break can fine-tune your knack for seeing the bigger picture, helping you catch defects others might miss.

Encourages Quick Decision-Making

In Connect Four, every move counts, and hesitation can cost you the game. This fast-paced decision-making translates directly to QA work, where you often need to prioritize test scenarios or make judgment calls on defect severity. Playing Connect Four builds confidence in making swift, calculated choices, which is invaluable when deadlines loom or production issues arise.

Relieves Stress While Keeping You Sharp

QA work can be intense, with tight deadlines and high stakes. A quick Connect Four match offers a mental reset without numbing your brain like scrolling social media might. It's engaging enough to distract from work stress but light enough to leave you refreshed. Plus, the friendly competition fosters camaraderie among team members, boosting morale and collaboration.

Fosters Team Bonding and Communication

QA teams thrive on clear communication and collaboration, whether it's discussing test plans or triaging bugs. Connect Four brings colleagues together in a low-pressure setting, encouraging banter and teamwork. These interactions build trust and improve how the team collaborates on complex projects, all while having fun.

A Low-Cost, High-Impact Investment

Unlike fancy break room perks like ping pong tables (which we had) or VR setups, Connect Four is compact, affordable, and requires no maintenance. It's easy to set up, quick to play, and accessible to everyone, regardless of gaming experience. A single game can fit into a 10-minute break, making it a practical addition to any QA team's workspace.

Conclusion

A Connect Four game in the break room isn't just a fun diversion - it's a brain-sharpening tool that aligns perfectly with the skills QA engineers use daily. From strategic planning to pattern recognition and quick decision-making, this classic game keeps your mind agile while fostering team spirit. So, next time you're stocking the break room, skip the vending machine upgrades and grab a Connect Four set. Your QA team's sharpness (and maybe their win streak) will thank you.

Want to level up your break room game? Try tracking team Connect Four tournaments to see who's the ultimate QA strategist!

July 15, 2025

Blocked ARIA-Hidden Errors: Why Focus Matters for Accessibility

What’s the Deal with `aria-hidden`?

The `aria-hidden` attribute is like a cloak of invisibility for assistive technologies (AT) like screen readers. When you slap `aria-hidden="true"` on an element, you’re telling AT to ignore it and its descendants completely. Sounds handy for hiding decorative images or off-screen modals, right? But here’s the catch: if an element (or its descendant) can still receive focus—like a button or input field—applying `aria-hidden` creates a conflict. Why? Because users relying on assistive tech need to know where their focus is. Hiding a focused element is like blindfolding someone in the middle of a maze—not cool.

Why Focus Matters for Accessibility

Picture this: a visually impaired user navigates your site using a screen reader. They tab to a button inside a modal, but because you’ve set `aria-hidden="true"` on the modal (thinking it’s hidden), the screen reader stays silent. The user is stuck, confused, and frustrated. Focus is their lifeline—it’s how they interact with your site. Hiding a focused element from assistive tech breaks that lifeline, making your site inaccessible.

This isn’t just about compliance with standards like WCAG (Web Content Accessibility Guidelines). It’s about empathy—ensuring everyone can use your site, regardless of how they navigate. So, how do we avoid this trap?

The Problem with `aria-hidden` on Focused Elements

When you apply `aria-hidden="true"` to an element, you’re essentially telling assistive technologies to pretend it doesn’t exist. But if a descendant of that element—like a text input or a link—can still be focused (via keyboard navigation or programmatically), you’ve created a paradox. The browser detects this and blocks the `aria-hidden` attribute to protect users, triggering the error message.

Here’s a quick example of what not to do:

<div aria-hidden="true">
  <button>Click me!</button>
</div>

If that button is focusable (spoiler: it is, unless you’ve explicitly disabled it), the `aria-hidden` attribute is ignored, and you’ll see the error. So, what’s the fix? Enter the `inert` attribute.

Meet the `inert` Attribute: Your New Best Friend

The `inert` attribute is like a superhero for accessibility. A relatively recent addition to the HTML standard, it makes an element and all its descendants non-interactive and invisible to assistive technologies. Unlike `aria-hidden`, which only hides content from screen readers, `inert` prevents focus entirely—no clicking, no tabbing, no interaction. It’s a one-stop shop for making content truly “unavailable.”

Here’s how you’d fix the example above:

<div inert>
  <button>Click me!</button>
</div>

With `inert`, the button can’t receive focus, so there’s no conflict. Assistive technologies skip the entire `div`, and your users aren’t left stranded. Plus, it’s cleaner and more intuitive than juggling `aria-hidden` with other hacks like `tabindex="-1"`.

Why Choose `inert` Over `aria-hidden`?

  • Prevents Focus Entirely: No more worrying about descendants sneaking into the focus order.
  • Cleaner Code: One attribute does the job of multiple workarounds.
  • Better User Experience: Ensures assistive tech users aren’t misled by hidden-but-focusable elements.

Real-World Example: Modals Done Right

Let’s get practical. Modals are a common culprit for `aria-hidden` errors. When a modal pops up, you typically want to hide the background content from both sight and assistive tech. Here’s a Bootstrap 5 example of how to handle it correctly:

<!-- Main content -->
<div id="main-content" class="container my-5">
  <h1>Welcome to My Site</h1>
  <button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#myModal">Open Modal</button>
</div>

<!-- Modal -->
<div class="modal fade" id="myModal" tabindex="-1" aria-labelledby="modalLabel" aria-hidden="true">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <h5 class="modal-title" id="modalLabel">My Modal</h5>
        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
      </div>
      <div class="modal-body">
        <p>This is the modal content.</p>
        <input type="text" class="form-control" placeholder="Enter text">
      </div>
    </div>
  </div>
</div>

When the modal opens, you want to make the main content inert. Here’s the JavaScript to toggle it using Bootstrap’s modal events:

const mainContent = document.getElementById('main-content');
const myModal = document.getElementById('myModal');

// Make the background content inert (non-interactive and hidden from
// assistive tech) whenever the modal opens...
myModal.addEventListener('show.bs.modal', () => {
  mainContent.setAttribute('inert', '');
});

// ...and restore it when the modal closes.
myModal.addEventListener('hide.bs.modal', () => {
  mainContent.removeAttribute('inert');
});

This ensures the main content is non-interactive and hidden from assistive tech while the modal is open. No `aria-hidden` errors, no focus conflicts—just smooth accessibility.

Browser Support and Fallbacks

The `inert` attribute is supported in modern browsers (Chrome, Edge, Firefox, and Safari as of 2025), but older browsers might not recognize it. For broader compatibility, you can use a polyfill like WICG/inert. Or, as a fallback, combine `aria-hidden="true"` with `tabindex="-1"` and CSS to visually hide content—but test thoroughly to avoid focus traps.
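Here's a rough sketch of that fallback. The selector list is illustrative rather than exhaustive, and restoring any original `tabindex` values when the content becomes active again is left out:

// Fallback for browsers without `inert`: hide the container from assistive
// tech and pull its focusable descendants out of the tab order.
function makeInactive(container) {
  container.setAttribute('aria-hidden', 'true');
  container
    .querySelectorAll('a[href], button, input, select, textarea, [tabindex]')
    .forEach((el) => el.setAttribute('tabindex', '-1'));
}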

Pro Tips for Avoiding `aria-hidden` Pitfalls

  • Test with a Screen Reader: Tools like NVDA or VoiceOver can reveal focus issues you might miss.
  • Use `inert` for Non-Interactive Content: If an element shouldn’t be interacted with, `inert` is usually the better choice.
  • Audit Your Focus Management: Ensure only relevant elements are focusable when dialogs or modals are active; a quick console check is sketched after this list.
  • Leverage Bootstrap’s Accessibility Features: Bootstrap 5’s modal component handles a lot of ARIA attributes for you, but always double-check.
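For that focus-management audit, a one-liner in the browser console can flag offenders. This sketch assumes modern selector support (`:is()` works in current Chrome, Edge, Firefox, and Safari):

// Flag focusable elements that sit inside aria-hidden containers.
const offenders = document.querySelectorAll(
  '[aria-hidden="true"] :is(a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"]))'
);
offenders.forEach((el) => console.warn('Focusable element hidden from AT:', el));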

Why This Matters to You

Accessibility isn’t just about checking boxes for compliance—it’s about building a web that welcomes everyone. By understanding errors like “Blocked aria-hidden” and embracing tools like the `inert` attribute, you’re creating a better experience for all users. Plus, accessible sites often rank better in SEO and perform better across devices. It’s a win-win!

So, next time you’re tempted to use `aria-hidden` on a container with focusable elements, pause and consider `inert` instead. Your users (and your console) will thank you.

July 8, 2025

Test in Production Meme

In QA, the phrase "test in production" is often met with a mix of laughter and dread. A popular meme circulating since July 9, 2019, captures this sentiment perfectly. Featuring a house engulfed in flames with the words "in this house we test in production," this image has become a humorous yet cautionary tale for QA professionals and developers alike.

The Meme Explained

The image depicts a simple house drawing with flames surrounding it, symbolizing chaos or disaster. The text "in this house we" is overlaid on the structure, leading to the bold declaration "test in production" at the bottom, accompanied by more flames. The humor lies in the absurdity of testing software in a live environment, where real users are affected, rather than in a controlled setting.

The Reality of Testing in Production

While the meme is funny, for me it touches on a real issue. Testing in production happens more often than any team would admit - due to tight deadlines, overlooked bugs, or inadequate testing environments. This practice can lead to crashes, data loss, or frustrated users, much like the fiery chaos depicted. QA professionals advocate for robust pre-launch testing, including unit tests, integration tests, and user acceptance testing, to avoid such scenarios.

Lessons for QA Teams

This meme serves as a reminder of the importance of a solid QA process:

  • Pre-Production Testing: Ensure all features are tested in a staging environment that mirrors production.
  • Monitoring: Implement real-time monitoring to catch issues early if production testing is unavoidable.
  • Culture Shift: Foster a culture where cutting corners is discouraged, and quality is prioritized.

Conclusion

The "test in production" meme is a lighthearted jab at a serious topic. While it may elicit a chuckle, it also underscores the need for diligence in quality assurance. Let's keep the flames where they belong - in the meme, not our live systems! Share your own QA horror stories or laughs in the comments below.

July 1, 2025

REST vs SOAP: A QA Perspective

As a QA Engineer, it's essential to understand the fundamental differences between REST and SOAP APIs. Both are used for exchanging data across web services, but they differ sharply in approach, data format, and testing considerations.

What is SOAP?

SOAP (Simple Object Access Protocol) is a protocol designed to allow programs running on different operating systems to communicate using HTTP and XML. It is highly standardized and includes built-in error handling, security (WS-Security), and formal contracts via WSDL (Web Services Description Language).

Example SOAP Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:web="http://example.com/webservice">
  <soapenv:Header/>
  <soapenv:Body>
    <web:GetUser>
      <web:UserId>123</web:UserId>
    </web:GetUser>
  </soapenv:Body>
</soapenv:Envelope>
    

What is REST?

REST (Representational State Transfer) is an architectural style that uses standard HTTP methods such as GET, POST, PUT, and DELETE. RESTful APIs usually return data in JSON format, making them easier to consume and test. REST does not require strict contracts or WSDL files.

Example REST Request

GET /users/123 HTTP/1.1
Host: api.example.com
Accept: application/json
    
Example REST Response

{
  "id": 123,
  "name": "John Smith",
  "email": "john.smith@example.com"
}
    

Key Differences

Feature        | REST                          | SOAP
Protocol       | Uses HTTP methods             | Uses XML-based protocol
Data Format    | JSON, XML                     | XML only
Ease of Use    | Lightweight and faster        | More verbose and strict
Security       | Relies on HTTPS, OAuth        | WS-Security standard
Contract       | Optional (OpenAPI/Swagger)    | Required (WSDL)
Testing Tools  | Postman, curl, REST Assured   | SoapUI, JMeter, curl (limited)

QA Considerations

  • REST APIs are easier to test due to lightweight payloads and modern tools like Postman and REST Assured; a minimal scripted check is sketched after this list.
  • SOAP APIs require more setup, including parsing WSDL files and crafting XML envelopes, which can add overhead.
  • Automation for REST is generally faster to implement and maintain.
  • SOAP is better suited for enterprise environments with strict standards and security requirements.
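To show just how lightweight a scripted REST check can be, here's a minimal sketch using Node 18+'s built-in fetch and the node:assert module. The endpoint mirrors the example above, and the host is a placeholder:

// A minimal REST API check with Node 18+ (global fetch) and node:assert.
const assert = require('node:assert');

async function checkGetUser() {
  const res = await fetch('https://api.example.com/users/123', {
    headers: { Accept: 'application/json' },
  });
  assert.strictEqual(res.status, 200, 'expected 200 OK');

  const user = await res.json();
  assert.strictEqual(user.id, 123);
  assert.ok(user.email.includes('@'), 'email should look like an address');
  console.log('GET /users/123 looks healthy');
}

checkGetUser().catch((err) => {
  console.error(err);
  process.exit(1);
});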

Final Thoughts

Both REST and SOAP have their place in modern QA workflows. Understanding their strengths and weaknesses will help you choose the right testing strategy. When speed and simplicity are key, REST is usually preferred. When you need high security and strict contracts, SOAP might be the better choice.

June 24, 2025

Acceptance Criteria Best Practices

A common mistake inexperienced teams make when defining acceptance criteria for a story is combining the mechanics of test execution with the test's purpose. They attempt to describe both what they want to test and how it will be tested simultaneously, which often leads to confusion.

Example of Poorly Defined Acceptance Criteria

Below is an example of mixing test mechanics and purpose, which can lead to unclear requirements:


As a user, I want to log in so that I can access my account 
by entering my username and password into the login form, 
clicking the submit button, and verifying that the server 
responds with a 200 status code and a session token.
        

This example mixes the user's goal (logging in) with implementation details (server response, session token), making it harder to understand the core requirement.

Improved Acceptance Criteria

Focus on the purpose of the test, keeping it clear and concise:


Given a user with valid credentials
When they enter their username and password and submit the login form
Then they are granted access to their account
        

This version uses the Given-When-Then format to clearly separate the context, action, and expected outcome, avoiding technical implementation details.
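If you automate criteria like these, the same separation carries into the step definitions. Here's a sketch in the cucumber-js style; the `app` harness and its `createUser` and `login` helpers are hypothetical stand-ins for your own test setup:

// Step definitions matching the criteria above (cucumber-js style).
const assert = require('node:assert');
const { Given, When, Then } = require('@cucumber/cucumber');
const app = require('./test-harness'); // hypothetical harness module

Given('a user with valid credentials', async function () {
  this.user = await app.createUser({ valid: true }); // hypothetical helper
});

When('they enter their username and password and submit the login form', async function () {
  this.result = await app.login(this.user); // hypothetical helper
});

Then('they are granted access to their account', async function () {
  assert.ok(this.result.loggedIn, 'user should be logged in');
});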

Key Tips for Writing Acceptance Criteria

  • Focus on the user's intent and the desired outcome, not the implementation.
  • Use clear, simple language to ensure shared understanding.
  • Separate the "what" (purpose) from the "how" (test execution mechanics).
  • Use formats like Given-When-Then to structure criteria logically.

About

Welcome to QA!

The purpose of these blog posts is to provide comprehensive insights into Software Quality Assurance testing, addressing everything you ever wanted to know but were afraid to ask.

These posts will cover topics such as the fundamentals of Software Quality Assurance testing, creating test plans, designing test cases, and developing automated tests. Additionally, they will explore best practices for testing and offer tips and tricks to make the process more efficient and effective.

Check out all the Blog Posts.

Blog Schedule

Tuesday: QA
Wednesday: Veed
Thursday: Business
Friday: Macintosh
Saturday: Internet Tools
Sunday: Open Topic
Monday: Media Monday