Blog Listing

Quality Assurance Image Library

This is my carefully curated collection of Slack images, designed to perfectly capture those unique QA moments. Whether it's celebrating a successful test run, expressing the frustration of debugging, or simply adding humor to your team's chat, these images are here to help you communicate with personality and style.

May 13, 2025

Understanding Behavior-Driven Development in SQA Testing

What is Behavior-Driven Development (BDD)?

Behavior-Driven Development (BDD) is an agile software development methodology that enhances collaboration between developers, testers, and non-technical stakeholders. It focuses on defining the behavior of a system in a human-readable format, ensuring that all parties share a common understanding of requirements before development begins. In Software Quality Assurance (SQA), BDD helps create tests that align with the expected behavior of the application, making it easier to validate functionality and catch issues early.

BDD builds on Test-Driven Development (TDD) by using a structured language called Gherkin to write test scenarios in a "Given-When-Then" format. This approach ensures that tests are not only technical but also meaningful to business stakeholders.

A Story Reference: "Brief Jottings"

In an old newspaper clipping titled "Brief Jottings," we find a snippet about a community effort to build a Colosseum: "The Colosseum is progressing with the rapidity predicted by the warmest friends of the Musical Peace Festival upon the assurances of the gentlemen comprised in the committee which finally put the scheme into a practical basis. Mr. John R. Hall, who has been selected by the building committee to supervise the details of construction, is daily upon the scene of operations, giving his personal attention to the work..."

This story from "Brief Jottings" provides a great analogy for understanding BDD. Imagine the Musical Peace Festival as the software project, the committee as the stakeholders (developers, testers, and business analysts), and Mr. John R. Hall as the SQA team ensuring that the construction (development) aligns with the festival's goals (requirements). Using BDD, the committee would define the expected behavior of the Colosseum—such as ensuring it can accommodate several thousand people—before construction begins.

How Does BDD Work in SQA Testing?

BDD involves three main steps: discovery, formulation, and automation. Let’s break it down using the "Brief Jottings" analogy:

  • Discovery: Stakeholders collaborate to define the desired behavior. In the Colosseum story, the committee agrees that the structure must "provide for the accommodation of several thousand more people than could possibly have been seated by the original style of the roof."
  • Formulation: The behavior is written in a structured, human-readable format using Gherkin. For example:
    Scenario: Accommodating festival attendees
      Given the Colosseum is under construction
      When the festival date arrives
      Then the structure should accommodate at least 5,000 attendees
  • Automation: These scenarios are automated into test cases using tools like Cucumber or SpecFlow. The SQA team (like Mr. John R. Hall) ensures that the Colosseum (software) meets the defined behavior by running these tests.
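
As a concrete illustration of the automation step, here is a minimal sketch using Python's behave library (one Gherkin runner among several; the post names Cucumber and SpecFlow). It assumes the scenario above is saved in a .feature file, and the dictionary standing in for the Colosseum is purely hypothetical - in a real project these steps would drive your application or its API.

# features/steps/colosseum_steps.py - illustrative behave step definitions
from behave import given, when, then

@given("the Colosseum is under construction")
def step_under_construction(context):
    # Start with an empty, unopened structure (stand-in for the system under test)
    context.colosseum = {"capacity": 0, "open": False}

@when("the festival date arrives")
def step_festival_date(context):
    # Stand-in for the real build/deploy action being tested
    context.colosseum["capacity"] = 5500
    context.colosseum["open"] = True

@then("the structure should accommodate at least 5,000 attendees")
def step_check_capacity(context):
    assert context.colosseum["open"]
    assert context.colosseum["capacity"] >= 5000

Running behave from the project root executes every scenario in the features folder and reports each Given-When-Then step as passed or failed, which is what ties the business-readable scenario to an automated check.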

By focusing on behavior, BDD ensures that the software delivers value to the end user, just as the Colosseum committee ensured the structure met the festival’s needs.

Benefits of BDD in SQA Testing

  • Improved Collaboration: Stakeholders, developers, and testers work together to define requirements, reducing misunderstandings—like ensuring the Colosseum committee and builders are on the same page.
  • Clear Requirements: Gherkin scenarios provide a shared language, making requirements unambiguous.
  • Early Bug Detection: Testing behavior early catches issues before they become costly, much like Mr. Hall’s daily supervision prevented construction errors.
  • Focus on User Needs: BDD ensures the software meets user expectations, delivering a product that works as intended.

Conclusion

Behavior-Driven Development is a powerful approach in SQA testing that bridges the gap between technical and non-technical teams. By focusing on the behavior of the system, as illustrated by the collaborative efforts in the "Brief Jottings" story, BDD ensures that software meets user needs while maintaining high quality. Adopting BDD can lead to better communication, fewer defects, and a product that truly delivers value—just like a well-constructed Colosseum hosting a successful Musical Peace Festival.

May 6, 2025

Pytest vs Playwright vs Cypress

What are Pytest, Playwright, and Cypress?

Choosing the right testing framework depends on your project's needs. Pytest is a Python-based testing framework for unit and functional testing. Playwright (with TypeScript) is a modern end-to-end (E2E) testing tool for web applications, supporting multiple browsers. Cypress is a JavaScript-based E2E testing framework designed for simplicity and speed. Below, we explore their key differences and provide examples of visiting Google.com and validating its title.

1. What are the key differences between Pytest, Playwright, and Cypress?

Here's a comparison of the three frameworks:

Feature | Pytest | Playwright (TypeScript) | Cypress
Primary Use | Unit, functional, and API testing | E2E web testing | E2E web testing
Language | Python | TypeScript/JavaScript | JavaScript
Browser Support | Requires external libraries (e.g., Selenium) | Chromium, Firefox, WebKit | Chrome, Edge, Firefox
Architecture | General-purpose, no built-in browser control | Node.js-based, direct browser control | Runs in-browser, direct DOM access
Speed | Depends on external tools | Fast, parallel execution | Fast, but synchronous
Learning Curve | Moderate (Python knowledge) | Moderate (TypeScript + async/await) | Easy (intuitive API)

2. How do you test visiting Google.com with Pytest?

Pytest requires an external library like selenium for browser automation. Below is an example using Selenium to visit Google.com and validate the title.


# test_google.py
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

def test_google_title():
    # webdriver_manager downloads a ChromeDriver that matches the installed Chrome
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
    try:
        driver.get("https://www.google.com")
        assert driver.title == "Google"
    finally:
        # Quit the browser even if the assertion fails
        driver.quit()
                    

Note: Run with pytest test_google.py. Ensure selenium and webdriver_manager are installed.

3. How do you test visiting Google.com with Playwright and TypeScript?

Playwright supports TypeScript natively and provides robust browser automation. Below is an example.


// tests/google.spec.ts
import { test, expect } from '@playwright/test';
test('Google title test', async ({ page }) => {
  await page.goto('https://www.google.com');
  await expect(page).toHaveTitle('Google');
});
                    

Note: Run with npx playwright test. Requires Playwright setup with TypeScript configuration.

4. How do you test visiting Google.com with Cypress?

Cypress is designed for simplicity and runs directly in the browser. Below is an example.


// cypress/e2e/google.cy.js
describe('Google Title Test', () => {
  it('should have the correct title', () => {
    cy.visit('https://www.google.com');
    cy.title().should('eq', 'Google');
  });
});
                    

Note: Run with npx cypress run or open the Cypress Test Runner. Requires Cypress installation.

5. Which framework should you choose?

  • Pytest: Ideal for Python developers testing APIs, unit tests, or integrating with Selenium for browser testing.
  • Playwright: Best for cross-browser E2E testing with TypeScript, offering modern features and parallel execution.
  • Cypress: Great for JavaScript developers seeking a simple, fast E2E testing tool with an intuitive interface.

Consider your team's expertise, project requirements, and browser support needs when choosing.

April 29, 2025

Generating and Testing QR Codes for Inventory Management in QA

In quality assurance (QA), ensuring the accuracy and functionality of inventory management systems is critical. One effective way to test product IDs in an inventory application is by generating and validating QR codes. In this blog post, I’ll walk you through how I used Python to create custom QR codes for testing product IDs and share insights on integrating this into your QA testing workflow.

Why QR Codes for QA Testing?

QR codes are a robust way to encode product IDs, allowing for quick scanning and validation in inventory applications. By generating QR codes programmatically, QA teams can:

  • Simulate real-world product data.
  • Test scanning functionality and data accuracy.
  • Automate validation of product IDs against a database.
  • Ensure error handling for malformed or invalid IDs.

In this example, I focused on testing a set of product IDs for an inventory application, generating QR codes for each ID and saving them as images for further testing.

QR Code Sheet
Sample sheet that I created. I used PhotoScape to combine the images and added the text below each QR code.

The Python Code

Below is the Python script I used to generate QR codes for a list of product IDs. The script uses the qrcode library to create QR codes and save them as PNG files.

#!/usr/bin/python3
import qrcode
# List of QR Code data (product IDs)
qr_codes = [
    "22001185000-TV825862",
    "46002897000-AV301140",
    "46002897000-AV320052",
    "46003686000-TS062286",
    "46200239000-HRB3019056",
    "46039182000-HRC0994569",
    "22031593000-0hfg7ddw502291r",
    "46003251000-BA30816015",
    "46002566000-VT5575px;32991",
    "22010473000-zt600331",
    "46200240000-MLSVTXLHQ-7",
    "46200240000-HRC2167455",
    "46074803000-MM0O05JGX-0",
    "46074803000-HRB2539864",
    "46088955000-kc1008395"
]
# Function to generate and save QR codes
def generate_qr_codes(codes):
    for i, code in enumerate(codes):
        # Create QR code instance
        qr = qrcode.QRCode(
            version=1,
            error_correction=qrcode.constants.ERROR_CORRECT_L,
            box_size=10,
            border=4,
        )
        qr.add_data(code)
        qr.make(fit=True)
        # Create an image from the QR Code instance
        img = qr.make_image(fill_color='black', back_color='white')
        # Save the image to a file
        img.save(f"{code}.png")
# Generate and save QR codes
generate_qr_codes(qr_codes)

How It Works

  1. Dependencies: The script uses the qrcode library, which you can install via pip install qrcode pillow. The Pillow library is required for image handling.
  2. Product IDs: The qr_codes list contains product IDs in various formats, simulating real-world data with different prefixes, suffixes, and lengths.
  3. QR Code Generation:
    • A QRCode instance is created with:
      • version=1 (smallest QR code size, sufficient for short IDs).
      • error_correction=ERROR_CORRECT_L (low error correction, suitable for most use cases).
      • box_size=10 (pixel size of each QR code module).
      • border=4 (white space around the QR code).
    • Each product ID is added to the QR code, and the make method generates the QR code matrix.
  4. Image Creation and Saving: The QR code is converted to a black-and-white PNG image and saved with the product ID as the filename (e.g., 22001185000-TV825862.png).

QA Testing Workflow

Here’s how I incorporated this script into my QA testing process:

  1. Generate QR Codes:
    • Run the script to create QR code images for all product IDs.
    • Verify that each image is created and named correctly (a decode-and-compare sketch follows this list).
  2. Test Scanning Functionality:
    • Use a QR code scanner (e.g., a mobile app or the inventory application’s scanner) to read each QR code.
    • Validate that the scanned data matches the original product ID.
  3. Validate Against Database:
    • Check that the scanned product ID exists in the inventory database.
    • Test edge cases, such as invalid or duplicate IDs.
  4. Error Handling:
    • Test malformed product IDs (e.g., too long, special characters).
    • Ensure the application handles scanning errors gracefully.
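
Before handing the images to a human with a scanner, steps 1 and 2 can be partly automated with a programmatic round trip: decode each generated PNG and compare the payload to the original product ID. Below is a minimal sketch assuming the pyzbar and Pillow libraries are installed (pyzbar also needs the zbar system library); pass it the same qr_codes list used by the generation script.

# validate_qr_codes.py - illustrative decode-and-compare check
from pyzbar.pyzbar import decode
from PIL import Image

def validate_qr_images(codes):
    failures = []
    for code in codes:
        # decode() returns a list of detected symbols; take the first one
        results = decode(Image.open(f"{code}.png"))
        if not results or results[0].data.decode("utf-8") != code:
            failures.append(code)
    return failures

Any ID returned by validate_qr_images points to an image that is missing, unreadable, or encoding the wrong data - exactly the kind of discrepancy you want to catch before the inventory team starts scanning.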

Lessons Learned

  • Data Diversity: Include a variety of product ID formats in your test data to mimic real-world scenarios.
  • Scalability: For large datasets, consider batch processing or parallelizing QR code generation to save time (a short sketch follows this list).
  • Validation: Always validate QR code content programmatically to catch discrepancies early.
  • Error Correction: Adjust the error_correction level (e.g., ERROR_CORRECT_M or ERROR_CORRECT_H) if QR codes will be printed or scanned in challenging conditions.
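
For the scalability point above, here is a short sketch (an illustrative addition, not part of the original script) that generates the PNGs in parallel with Python's standard concurrent.futures module. The per-ID settings mirror the generation script; the trimmed ID list is just for demonstration.

# parallel_qr_sketch.py - illustrative parallel generation
from concurrent.futures import ProcessPoolExecutor
import qrcode

def generate_one(code):
    # Same QRCode settings as the generation script, applied to a single ID
    qr = qrcode.QRCode(
        version=1,
        error_correction=qrcode.constants.ERROR_CORRECT_L,
        box_size=10,
        border=4,
    )
    qr.add_data(code)
    qr.make(fit=True)
    qr.make_image(fill_color='black', back_color='white').save(f"{code}.png")
    return code

if __name__ == "__main__":
    qr_codes = ["22001185000-TV825862", "46002897000-AV301140"]  # demo subset
    with ProcessPoolExecutor() as executor:
        for finished in executor.map(generate_one, qr_codes):
            print(f"Generated {finished}.png")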

April 22, 2025

Managing a Remote QA Team

During my time leading a distributed QA team across the U.S. and India, I had the opportunity to work with a dedicated and talented group of engineers. While the collaboration across time zones was productive and often inspiring, it also revealed some key challenges — especially around maintaining consistent quality standards.


Great Team, But Growing Pains

The engineers I worked with overseas were skilled and enthusiastic, always delivering on their assigned tasks. However, I noticed a pattern — they sometimes became complacent with their testing. The definition of “done” didn’t always include deep scrutiny or thinking beyond pass/fail criteria.

One Bug That Changed Everything

There was a particular moment that crystallized the issue for me. I discovered a bug where a critical button wasn't visible in smaller-resolution browsers. When I brought it up, the team admitted they had seen it — but chose not to report it because, in their words, “it wasn’t breaking anything.”

That moment shifted our entire approach. I reminded the team that just because something isn’t technically broken, doesn’t mean it isn’t a problem. If users can’t see a button, they won’t click it — and that’s a functional failure, not just a UI glitch.

Raising the QA Standard

In response, I introduced “UX flags” to our testing process and implemented #UXflags in Jira tickets. I began holding weekly bug retrospectives. We focused on thinking like the end user — questioning visual hierarchy, accessibility, and usability in every test run. Over time, I saw a major shift: the team became more proactive, caught edge cases independently, and started reporting issues from a user-first mindset.


Takeaway

Managing a remote QA team goes far beyond assigning tickets. It requires cultivating a culture where quality is everyone’s responsibility — not just checking boxes, but caring about the final product. That button bug may have seemed minor, but it taught us a major lesson: high-quality software comes from high-quality thinking.

April 15, 2025

Unveiling the Hidden Gems of TestLink

As a seasoned QA engineer with over a decade of experience, I’ve relied on TestLink to manage manual regression testing for years. This web-based test management system is a powerhouse for organizing test cases, tracking execution, and generating insightful reports.

While TestLink’s core functionality is robust, its true potential shines when you tap into its lesser-known features. In this blog post, I’ll share some hidden gems from the TestLink 1.8 User Manual that can elevate your testing game, drawing from my hands-on experience and the manual’s insights.

1. Keyboard Shortcuts for Lightning-Fast Navigation

Shortcuts like ALT + h (Home), ALT + s (Test Specification), and ALT + e (Test Execution) allow quick navigation. On large test suites, I used ALT + t to create test cases efficiently. Tip: In Internet Explorer, press Enter after the shortcut.

2. Custom Fields for Flexible Test Case Metadata

Administrators can define custom parameters such as “Test Environment” or “Priority Level.” I used these to tag configurations like “Performance” or “Standard.” Note: Fields over 250 characters aren’t supported, but you can use references instead.

3. Inactive Test Cases for Version Control

Test cases marked “Inactive” won’t be added to new Test Plans, preserving version history. This is helpful when phasing out legacy tests while keeping results intact. However, linked test cases with results cannot be deactivated.

4. Keyword Filtering for Smarter Test Case Organization

Assign keywords like “Regression,” “Sanity,” or “Mobile Browser” to categorize tests. This made it easy to filter and generate targeted reports. Use batch mode or assign keywords individually for better test planning.

5. Importing Test Cases from Excel via XML

Export a sample XML, build your test cases in Excel, then import back into TestLink. I used this to quickly load dozens of test cases. Be sure to verify your XML format first to ensure a smooth import.
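
If you would rather script the conversion than hand-edit XML, the sketch below builds an import file from a CSV export of your spreadsheet using only Python's standard library. The column names and XML tags are assumptions based on the TestLink 1.8-style export - copy the exact element structure from the sample XML you export from your own instance before relying on it.

# csv_to_testlink_xml.py - illustrative CSV-to-XML conversion sketch
import csv
import xml.etree.ElementTree as ET

def csv_to_testlink_xml(csv_path, xml_path):
    root = ET.Element("testcases")
    with open(csv_path, newline="", encoding="utf-8") as f:
        # Assumed CSV columns: name, summary, steps, expectedresults
        for row in csv.DictReader(f):
            tc = ET.SubElement(root, "testcase", name=row["name"])
            ET.SubElement(tc, "summary").text = row["summary"]
            ET.SubElement(tc, "steps").text = row["steps"]
            ET.SubElement(tc, "expectedresults").text = row["expectedresults"]
    ET.ElementTree(root).write(xml_path, encoding="UTF-8", xml_declaration=True)

csv_to_testlink_xml("test_cases.csv", "test_cases.xml")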

6. Requirements-Based Reporting for Stakeholder Insights

This feature ties test results to specific requirements. I used it to demonstrate requirement coverage to stakeholders. Just enable requirements at the Test Project level to get started.

7. Bulk User Assignment for Efficient Test Execution

Select a test suite and assign all test cases to a tester with a single click. Great for managing offshore teams and sending notifications. The visual toggles for selection make it intuitive to use.

Why These Features Matter

TestLink is a fantastic tool for manual regression testing, but mastering its hidden features unlocks its full potential. Keyboard shortcuts and bulk assignments save time, custom fields and keywords provide flexibility, and advanced reporting aligns testing with business goals.

Tips for Getting Started

  • Explore the Manual: Start with Test Specification (Page 9) and Import/Export (Page 41).
  • Experiment Safely: Use a sandbox project before applying features in production.
  • Engage the Community: Visit forums like www.teamst.org for updates.

By diving into these hidden features, you’ll transform TestLink from a reliable test case repository into a strategic asset for your QA process.

Have you discovered other TestLink tricks? Share them in the comments—I’d love to hear how you’re making the most of this versatile tool!

Note: All references are based on the TestLink 1.8 User Manual provided.

April 8, 2025

The Unsung Hero of MVP Success

When you hear "Minimum Viable Product" (MVP), you might picture a scrappy, bare-bones version of an app or tool - just enough to get it out the door and into users' hands. The idea is to test the waters, see if your concept has legs, and iterate based on real feedback. But here's the kicker: if your MVP doesn't work, you're not testing product-market fit - you're testing how much frustration your users can stomach before they hit "uninstall."

Enter Quality Assurance (QA), the unsung hero that can make or break your MVP's shot at success. In a recent episode of QA in a Box, host Chris Ryan and his CTO co-star unpack why QA isn't just a nice-to-have - it's a must-have, even for the leanest of MVPs. Let's dive into their insights and explore why rigorous QA could be the difference between a launch that soars and one that flops.

Why QA Isn't Optional - Even for an MVP

Chris kicks things off with a blunt reality check:

"You might think, 'It's just an MVP - why do we need rigorous QA?' And to that, I say: 'Have you ever used a broken app and immediately deleted it?'"
It's a fair point. An MVP might be "minimal," but it still needs to deliver on its core promise. If it crashes every time someone taps a button, as the CTO jokingly realizes, users aren't going to patiently wait around for Version 2.0 - they're gone.

QA's job isn't to make your MVP flawless; it's to ensure the key feature - the thing you're betting your product-market fit on - actually works. Without that, your MVP isn't a Product. It's just a Problem. And good QA doesn't slow you down - it speeds you up. By catching critical bugs before users do, preventing post-launch disasters, and keeping your early adopters from jumping ship, QA sets the stage for meaningful feedback instead of angry rants on X.

How QA Tackles MVP Testing the Smart Way

So, how does QA approach an MVP without turning it into a bloated, over-tested mess? Chris breaks it down: it's about smart testing, not exhaustive testing. Focus on three key areas:

  1. Core Features - Does the main value proposition hold up? If your app's selling point is a lightning-fast search, that search better work.
  2. Usability - Can users figure it out without needing a PhD? A clunky interface can tank your MVP just as fast as a bug.
  3. Stability - Will it hold up under minimal real-world use? Ten users shouldn't bring your app to its knees.

The goal isn't perfection - it's delivering what you promised. As the CTO puts it, QA isn't there to gatekeep releases with a big "No"; it's there to say, "Yes, but let's make sure this part works first." For founders, skipping QA doesn't save time - it just shifts the burden of bug-fixing onto your early users, who probably won't stick around to file a polite bug report.

MVP Horror Stories: When QA Could've Saved the Day

To drive the point home, Chris shares some real-world MVP fails that could've been avoided with a little QA love. Take the e-commerce app with a broken "Buy Now" button - 5,000 downloads turned into 4,999 uninstalls faster than you can say "lost revenue." The CTO dubs it a "Most Valuable Prank," and he's not wrong. A basic QA smoke test would've caught that in minutes.

Then there's the social app that worked like a charm… until two people tried using it at once. The database couldn't handle concurrent requests, and what seemed like a promising MVP crumbled under the weight of its own ambition. A quick load test from QA could've spared the team that ego-crushing lesson. The takeaway? Test early, test smart - or risk becoming a cautionary tale.

The Bottom Line: QA Is Your MVP's Best Friend

Wrapping up, Chris leaves us with a clear message:

"Your MVP needs QA. Not as an afterthought - but as a core part of the process."
It's not about delaying your launch or chasing perfection; it's about ensuring your idea gets a fair shot with users. The CTO, initially skeptical, comes around with a smirk: "Next time someone says 'We'll fix it in the next version,' I'll just forward them this podcast."

For founders, developers, and dreamers building their next big thing, the lesson is simple: QA isn't the party pooper - it's the wingman that helps you ship actual value. So, before you hit "launch," ask yourself: does this work? Is it usable? Will it hold up? A little QA now could save you a lot of headaches later.

April 1, 2025

Four Tips for Writing Quality Test Cases for Manual Testing

As Software Quality Assurance (SQA) professionals, we know that crafting effective test cases is both an art and a science. In his seminal 2003 paper, What Is a Good Test Case?, Cem Kaner, a thought leader in software testing, explores the complexity of designing test cases that deliver meaningful insights. Drawing from Kaner's work, here are four practical tips to elevate your manual test case writing, ensuring they are purposeful, actionable, and impactful.

1. Align Test Cases with Clear Information Objectives

A good test case starts with a purpose. Kaner emphasizes that test cases are questions posed to the software, designed to reveal specific information - whether it's finding defects, assessing conformance to specifications, or evaluating quality. Before writing a test case, ask: What am I trying to learn or achieve? For manual testing, this clarity is critical since testers rely on human observation and judgment.

Tip in Action: Define the objective upfront. For example, if your goal is to "find defects" in a login feature, craft a test case like: "Enter a username with special characters (e.g., @#$%) and a valid password, then verify the system rejects the input with an appropriate error message." This targets a specific defect class (input validation) and provides actionable insight into the system's behavior.

2. Make Test Cases Easy to Evaluate

Kaner highlights "ease of evaluation" as a key quality of a good test case. In manual testing, where testers manually execute and interpret results, ambiguity can lead to missed failures or false positives. A test case should clearly state the inputs, execution steps, and expected outcomes so the tester can quickly determine pass or fail without excessive effort.

Tip in Action: Write concise, unambiguous steps. Instead of "Check if the form works," specify: "Enter 'JohnDoe' in the username field, leave the password blank, click 'Login,' and verify an error message appears: 'Password is required.'" This reduces guesswork, ensuring consistency and reliability in execution.

3. Design for Credibility and Relevance

A test case's value hinges on its credibility - whether stakeholders (developers, managers, or clients) see it as realistic and worth addressing. Kaner notes that tests dismissed as "corner cases" (e.g., "No one would do that") lose impact. For manual testing, focus on scenarios that reflect real-world usage or critical risks, balancing edge cases with typical user behavior.

Tip in Action: Ground your test cases in user context. For a shopping cart feature, write: "Add 10 items to the cart, remove 2, and verify the total updates correctly." This mirrors common user actions, making the test credible and motivating for developers to fix any uncovered issues. Pair it with a risk-based test like "Add 1,000 items and verify system performance" if scalability is a concern, justifying its relevance with data or requirements.

4. Balance Power and Simplicity Based on Product Stability

Kaner defines a test's "power" as its likelihood of exposing a bug if one exists, often achieved through boundary values or complex scenarios. However, he cautions that complexity can overwhelm early testing phases when the software is unstable, leading to "blocking bugs" that halt progress. For manual testing, tailor the test's complexity to the product's maturity.

Tip in Action: Early in development, keep it simple: "Enter the maximum allowed value (e.g., 999) in a numeric field and verify acceptance." As stability improves, increase power with combinations: "Enter 999 in Field A, leave Field B blank, and submit; verify an error flags the missing input." This progression maximizes defect detection without overwhelming the tester or the process.

Final Thoughts

Kaner's work reminds us there's no one-size-fits-all formula for a "good" test case - context is everything. For SQA professionals engaged in manual testing, the key is to design test cases that are purposeful, executable, believable, and appropriately scoped. By aligning with objectives, ensuring clarity, prioritizing relevance, and adapting to the software's lifecycle, you'll create test cases that not only find bugs but also drive meaningful improvements. As Kaner puts it, "Good tests provide information directly relevant to [your] objective" - so define your goal, and let it guide your craft.

March 25, 2025

Is Your QA Team Following Dogma or Karma?

As QA teams grow and evolve, they often find themselves at a crossroads: Are they focusing on rigid, dogmatic practices, or are they embracing a more fluid, karmic approach that adapts to the moment? Let's dive into this philosophical tug-of-war and explore what it means for your QA team - and your software.

Dogma: The Comfort of the Rulebook

Dogma in QA is the strict adherence to predefined processes, checklists, and methodologies, no matter the context. It's the "we've always done it this way" mindset. Think of the team that insists on running a full regression test suite for every minor bug fix, even when a targeted test would suffice. Or the insistence on manual testing for every feature because automation "can't be trusted."

There's a certain comfort in dogma. It provides structure, predictability, and a clear path forward. For new QA engineers, a dogmatic framework can be a lifeline - a set of rules to follow when the chaos of software development feels overwhelming. And in highly regulated industries like healthcare or finance, dogmatic adherence to standards can be a legal necessity.

But here's the catch: Dogma can calcify into inefficiency. When a team clings to outdated practices - like refusing to adopt modern tools because "the old way works" - they risk missing out on innovation. Worse, they might alienate developers and stakeholders who see the process as a bottleneck rather than a value-add. Dogma, unchecked, turns QA into a gatekeeper rather than a collaborator.

Karma: The Flow of Cause and Effect

On the flip side, a karmic approach to QA is all about adaptability and consequences. It's the belief that good testing practices today lead to better outcomes tomorrow - less technical debt, happier users, and a smoother development cycle. A karmic QA team doesn't blindly follow a script; they assess the situation, weigh the risks, and adjust their strategy accordingly.

Imagine a team facing a tight deadline. Instead of dogmatically running every test in the book, they prioritize high-risk areas based on code changes and user impact. Or consider a team that invests in automation not because it's trendy, but because they've seen how manual repetition burns out testers and delays releases. This is karma in action: thoughtful decisions that ripple outward in positive ways.

The beauty of a karmic approach is its flexibility. It embraces new tools, techniques, and feedback loops. It's less about "the process" and more about the result - delivering quality software that meets real-world needs. But there's a downside: Without some structure, karma can devolve into chaos. Teams might skip critical steps in the name of agility, only to face a flood of bugs post-release. Karma requires discipline and judgment, not just good intentions.

Striking the Balance

So, is your QA team following dogma or karma? The truth is, neither is inherently "right" or "wrong" - it's about finding the sweet spot between the two.

  • Audit Your Dogma: Take a hard look at your current processes. Are there sacred cows that no one's questioned in years? Maybe that 50-page test plan made sense for a legacy system but not for your new microservices architecture. Challenge the status quo and ditch what doesn't serve the goal of quality.
  • Embrace Karmic Wisdom: Encourage your team to think critically about cause and effect. If a process feels like busywork, ask: What's the payoff? If a new tool could save hours, why not try it? Build a culture where decisions are tied to outcomes, not just tradition.
  • Blend the Best of Both: Use dogma as a foundation - standardized bug reporting, compliance checks, or a core set of tests that never get skipped. Then layer on karmic flexibility - tailoring efforts to the project's unique risks and timelines.

A Real-World Example

I heard of a QA team that swore by their exhaustive manual test suite. Every release, they'd spend two weeks clicking through the UI, even for tiny updates. Dogma ruled. Then a new lead joined, pushing for automation in high-traffic areas. The team resisted - until they saw the karma: faster releases, fewer late-night bug hunts, and happier devs. They didn't abandon manual testing entirely; they just redirected it where human intuition mattered most. The result? A hybrid approach that delivered quality without the grind.

The QA Crossroads

Your QA team's philosophy shapes more than just your testing - it influences your entire product lifecycle. Dogma offers stability but can stifle progress. Karma promises agility but demands discernment. The best teams don't pick a side; they dance between the two, guided by one question: Does this help us build better software? So, take a moment to reflect. Is your QA team stuck in the past, or are they sowing seeds for a better future? The answer might just determine whether your next release is a triumph - or a lesson in what could've been.

March 18, 2025

Overcoming Failures in Playwright Automation

Automation Marathon

Life, much like a marathon, is a test of endurance, grit, and the ability to push through setbacks. In the world of software testing, Playwright automation has become my long-distance race of choice - a powerful tool for running browser-based tests with speed and precision. But as any runner will tell you, even the most prestigious marathons come with stumbles, falls, and moments where you question if you'll make it to the finish line. This is a story about my journey with Playwright, the failures I encountered, and how I turned those missteps into victories.

The Starting Line: High Hopes, Hidden Hurdles

When I first adopted Playwright for automating end-to-end tests, I was thrilled by its promise: cross-browser support and fast execution. My goal was to automate a critical path for an e-commerce website. The script seemed straightforward, and I hit "run" with the confidence of a marathoner at mile one.

Then came the first failure: a weird timeout error. The test couldn't locate the "Add to Cart" button that I knew was on the page. I double-checked the selector - .btn-submit - and it looked fine. Yet Playwright disagreed, leaving me staring at a red error log instead of a triumphant green pass. It was my first taste of defeat, and it stung.

Mile 5: The Flaky Test Trap

Determined to push forward, I dug into the issue. The button was dynamically loaded via JavaScript, and Playwright's default timeout wasn't long enough. I adjusted the script with a waitForSelector call and increased the timeout. Success - at least for a moment. The test passed once, then failed again on the next run. Flakiness had entered the race.

Flaky tests are the headache of automation: small at first, but they grow if you ignore them. I realized the page's load time varied depending on network conditions, and my hardcoded timeout was a Band-Aid, not a fix. Frustration set in. Was Playwright the problem, or was I missing something fundamental?

Mile 13: Hitting the Wall

The failures piled up. A test that worked in Chrome crashed in Firefox because of a browser-specific rendering quirk. Screenshots showed elements misaligned in WebKit, breaking my locators. And then there was the headless mode debacle - tests that ran perfectly in headed mode failed silently when I switched to testing in CI. I'd hit the marathon "wall," where every step felt heavier than the last.

I considered giving up on Playwright entirely. Maybe Pytest, Selenium or Cypress would be easier. (Even Ghost Inspector looked good!) But just like a champion marathoner doesn't quit during the race, I decided to rethink my approach instead of abandoning it.

The Turnaround: Learning from the Stumbles

The breakthrough came when I stopped blaming the tool and started examining my strategy. Playwright wasn't failing me - I was failing to use it effectively. Here's how I turned things around:

  1. Smarter Waiting: Instead of relying on static timeouts, I used Playwright's waitForLoadState method to ensure the page was fully interactive before proceeding. This eliminated flakiness caused by dynamic content. (Huge Win!)

    await page.waitForLoadState('networkidle');
    await page.click('.btn-submit');
  2. Robust Selectors: I switched from fragile class-based selectors to data attributes (e.g., [data-test-id="submit"]), which developers added at my request. This made tests more resilient across browsers and layouts.
  3. Debugging Like a Pro: I leaned on Playwright's built-in tools - screenshots, traces, and the headed mode - to diagnose issues. Running npx playwright test --headed became my go-to for spotting visual bugs.
  4. CI Optimization: For headless failures, I added verbose logging and ensured my CI environment matched my local setup (same Node.js version, same dependencies). Playwright's retry option also helped smooth out intermittent network hiccups.

Crossing the Finish Line

With these adjustments, my tests stabilized. The login flow passed consistently across Chrome, Firefox, and Safari. The critical path testing hummed along, and the user login - a notorious failure point - became a reliable win. I even added a celebratory console.log("Victory!") to the end of the suite, because every marathon deserves a cheer at the finish. (Cool little Easter Egg!)

The failures didn't disappear entirely - automation is a living process, after all - but they became manageable. Each stumble taught me something new about Playwright's quirks, my app's behavior, and my own habits as a tester. Like a marathoner who learns to pace themselves, I found my rhythm.

The Medal: Resilience and Results

Looking back, those early failures weren't losses - they were mile markers on the road to learning Playwright's capabilities. Playwright didn't just help me automate tests; it taught me resilience, problem-solving, and the value of persistence. Today, my test suite runs like a well-trained runner: steady, strong, and ready for the next race.

So, to anyone struggling with automation failures - whether in Playwright or elsewhere - keep going. The finish line isn't about avoiding falls; it's about getting back up and crossing it anyway. That's the true marathon memory worth keeping.

March 11, 2025

ISO 14971 Risk Management

In the world of medical device development, risk management is not just a regulatory requirement - it's a critical component of ensuring patient safety. ISO 14971, the international standard for risk management in medical devices, provides a structured approach to identifying, evaluating, and controlling risks throughout the product lifecycle. While traditionally applied to hardware, this standard is equally essential in Software Quality Assurance (SQA), especially as medical devices become increasingly software-driven.

In this blog post, we'll explore the key principles of ISO 14971, how it applies to software development, and why integrating risk management into SQA is crucial for compliance and safety.

Understanding ISO 14971 in Medical Device Development

ISO 14971 provides a systematic framework for manufacturers to identify hazards, estimate risks, implement risk control measures, and monitor residual risks throughout the medical device lifecycle. The standard is recognized by regulatory bodies like the FDA (U.S.) and MDR (EU) as the primary guideline for medical device risk management.

The core steps of ISO 14971 include:

  1. Risk Analysis - Identifying potential hazards associated with the device (including software).
  2. Risk Evaluation - Assessing the severity and probability of each identified risk.
  3. Risk Control - Implementing measures to reduce risks to an acceptable level.
  4. Residual Risk Assessment - Evaluating the remaining risks after controls are applied.
  5. Risk-Benefit Analysis - Determining if the device's benefits outweigh the residual risks.
  6. Production & Post-Market Monitoring - Continuously assessing risks after product deployment.

Since software plays an increasingly vital role in medical devices, ISO 14971 explicitly requires manufacturers to evaluate software-related risks, making it an essential part of Software Quality Assurance (SQA).

How ISO 14971 Relates to Software Quality Assurance

Software Quality Assurance (SQA) focuses on ensuring that medical device software meets regulatory and safety standards while minimizing errors and failures. Because software failures can directly impact patient safety, ISO 14971's risk-based approach is crucial in SQA.

Key Ways ISO 14971 Supports SQA in Medical Devices

1. Identifying Software-Related Risks

Software in medical devices can present unique risks, including:
- Incorrect data processing leading to wrong diagnoses or treatments
- Software crashes that disable critical functions
- Cybersecurity vulnerabilities leading to data breaches or device manipulation

Using ISO 14971's risk assessment methods, SQA teams can identify these hazards early in development.

2. Integrating Risk-Based Testing in SQA

ISO 14971 emphasizes risk reduction, which aligns with risk-based testing (RBT) in SQA. Instead of treating all software components equally, RBT prioritizes high-risk areas (e.g., critical safety functions) for more rigorous testing.

For example, a software bug in an infusion pump that miscalculates dosage could have life-threatening consequences, requiring extensive validation and verification.
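
To make risk-based prioritization concrete, here is a minimal sketch that ranks features by a simple severity-times-probability score so the riskiest areas receive the deepest testing first. The features and 1-5 scales are hypothetical examples, not values taken from ISO 14971.

# risk_based_prioritization.py - illustrative ranking of test areas by risk
features = [
    # (feature, severity 1-5, probability 1-5)
    ("Dosage calculation", 5, 2),
    ("Alarm handling", 5, 3),
    ("UI theme settings", 1, 4),
    ("Audit logging", 3, 2),
]

def risk_score(severity, probability):
    return severity * probability

# Highest risk first: these get the most rigorous verification and validation
for name, sev, prob in sorted(features, key=lambda f: risk_score(f[1], f[2]), reverse=True):
    print(f"{name}: risk={risk_score(sev, prob)}")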

3. Risk Control Measures in Software Development

ISO 14971 recommends implementing risk control measures, which in software development may include:
- Fail-safe mechanisms (e.g., automatic shutdown on error detection)
- Redundancy (e.g., backup systems for critical functions)
- User alerts and warnings (e.g., error messages guiding corrective actions)
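
As a minimal illustration of the first control above (the functions named here are hypothetical stand-ins, and the pattern is an example rather than anything prescribed by the standard), a control loop can trap unexpected faults and drive the device into a defined safe state:

# fail_safe_sketch.py - illustrative fail-safe wrapper around a control loop
import logging

def enter_safe_state():
    # Hypothetical device-specific action: disable outputs, alert the operator
    logging.critical("Entering safe state: outputs disabled, operator alerted")

def control_loop(read_sensor, apply_output):
    try:
        while True:
            apply_output(read_sensor())
    except Exception:
        # Any unhandled fault ends in a known safe state instead of undefined behavior
        logging.exception("Unhandled fault in control loop")
        enter_safe_state()
        raise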

4. Regulatory Compliance & Documentation

Regulatory agencies require comprehensive documentation to prove compliance with ISO 14971. For software, this includes:
- Software Hazard Analysis Reports
- Traceability Matrices (linking risks to design & testing)
- Verification & Validation (V&V) Evidence

SQA teams must ensure every risk-related software decision is documented, making audits and approvals smoother.

5. Post-Market Software Risk Management

Software risks don't end at release - ISO 14971 mandates continuous monitoring. SQA teams must establish:
- Bug tracking & risk assessment updates
- Incident reporting mechanisms
- Software patches & cybersecurity updates

By aligning with ISO 14971, software teams can proactively address risks throughout the product's lifecycle, reducing regulatory and safety concerns.

Final Thoughts: ISO 14971 and the Future of Software Quality Assurance

As medical devices become more software-dependent, ISO 14971's risk management framework is essential for ensuring software safety and reliability. By integrating risk-based testing, robust control measures, and continuous monitoring, SQA teams can align with international regulations and safeguard patient health.

For medical device manufacturers, embracing ISO 14971 in software quality assurance isn't just about compliance - it's about building safer, more reliable medical technologies.

About

Welcome to QA!

The purpose of these blog posts is to provide comprehensive insights into Software Quality Assurance testing, addressing everything you ever wanted to know but were afraid to ask.

These posts will cover topics such as the fundamentals of Software Quality Assurance testing, creating test plans, designing test cases, and developing automated tests. Additionally, they will explore best practices for testing and offer tips and tricks to make the process more efficient and effective.

Check out all the Blog Posts.


Blog Schedule

Wednesday - Pytest
Thursday - Playwright
Friday - Macintosh
Saturday - Internet Tools
Sunday - Open Topic
Monday - Media Monday
Tuesday - QA