About me

Hello! I'm

Milan Joshi

I am a Quality Assurance Engineer with 6+ years of experience in manual and automation testing, ensuring the delivery of high-quality, scalable, and reliable software across web, mobile, and API-driven applications. I specialize in designing and executing end-to-end QA strategies that enhance product stability and accelerate release cycles.

My core expertise includes Playwright with Python, Selenium, and building automation frameworks using the Page Object Model (POM). I have designed optimized, maintainable, and reusable automation architectures that significantly improve test coverage, reduce execution time, and support seamless integration with CI/CD pipelines.
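As an illustration, a minimal Playwright-with-Python page object might look like this (the route, class name, and selectors are hypothetical, not from a real project):

```python
# Minimal Page Object Model sketch for Playwright with Python.
# Route and selectors below are illustrative assumptions.

class LoginPage:
    """Encapsulates the login page so tests never touch raw selectors."""

    def __init__(self, page):
        self.page = page
        self.username_input = page.locator("#username")
        self.password_input = page.locator("#password")
        self.submit_button = page.locator("button[type=submit]")

    def open(self):
        self.page.goto("/login")

    def login(self, username, password):
        self.username_input.fill(username)
        self.password_input.fill(password)
        self.submit_button.click()
```

Tests then call `LoginPage(page).login(...)`; when the UI changes, only this one class needs updating.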

I bring deep experience across multiple testing methodologies, including:

  • Functional, Regression, Integration, Smoke & Sanity Testing
  • Cross-browser & Cross-platform Testing
  • API Testing (Postman, Python Requests)
  • Performance Testing (JMeter)
  • Database Validation (MySQL, SQL Server)

I am skilled in creating detailed test plans, test cases, and QA documentation, while identifying critical defects that drive quality improvements. I collaborate closely with development, product, and DevOps teams in Agile/Scrum environments to ensure timely and high-quality releases.

Passionate about test automation, QA process optimization, and building scalable automation systems, I constantly evaluate new tools and techniques to improve efficiency, reliability, and product performance.

With strong analytical thinking, attention to detail, and a quality-driven mindset, I aim to contribute to teams that value innovation, automation, and customer-focused product excellence.

What I'm doing

  • STP, STD & STR Documentation

    Creating clear and structured test documents that define the test plan (STP), specify the test cases (STD), and report the test results (STR) for organized and effective QA processes.

  • Bug Reporting & Tracking

    Documenting defects with clear details and tracking their status until resolution. This helps teams prioritize, fix, and verify bugs using tools like Jira, Bugzilla, or Azure DevOps.

  • Functional Testing

    Validates that the software works according to specified requirements. Covers tests like unit, integration, system, and user acceptance testing.

  • Non-Functional Testing

    Checks how the system performs rather than what it does. Includes performance, security, usability, compatibility, and scalability testing.

  • Maintenance & Post-Deployment Testing

    Testing done after release to ensure stability and functionality in the live environment. It includes smoke tests, regression checks, and real-time monitoring of the system.

  • Specialized Testing

    Focuses on specific areas like APIs, databases, mobile, or security, based on the application’s needs. It ensures deeper coverage and quality using targeted tools like Postman and Appium.

Resume

Experience

  1. Senior Quality Engineer

    Ravarem Technologies Private Limited · Full-time
    Aug 2024 – Present · 1 yr 4 mos

    Leading end-to-end QA processes across web, mobile, and API platforms. Responsible for creating and executing test plans, designing automation frameworks, and improving test coverage using Playwright with Python and POM. Collaborating closely with development and product teams to identify defects, track issues, verify fixes, and ensure high-quality releases. Driving QA strategy, CI/CD integration, and overall test process optimization.

  2. Freelance QA Engineer

    Self-Employed / Freelance
    Dec 2023 – Jul 2024 · 8 mos

    Worked as a freelance QA tester for multiple clients, performing functional, regression, and exploratory testing on web applications. Responsible for identifying bugs, documenting defects with clear steps and evidence, and ensuring high-quality user experiences. Gained hands-on exposure to real-world testing workflows, bug reporting, and collaborating with clients to verify fixes and refine product usability.

  3. Software QA Tester

    Techtic Solutions Inc. · Full-time
    Sep 2021 – Nov 2023 · 2 yrs 3 mos

    Specialized in QA for mobile applications (iOS & Android), Shopify storefronts (UI/UX), Shopify admin backend, and WordPress-based applications. Performed comprehensive manual and automation testing across web and mobile platforms, including functional, regression, UI/UX validation, cross-browser, and API testing. Executed detailed testing on the Shopify frontend for user flows, responsiveness, and design consistency, and validated Shopify admin portal features such as product management, order flows, customer data, and app integrations. Created clear bug reports and detailed test cases, and collaborated closely with developers to verify fixes. Worked extensively with Jira, Postman, and Git, and contributed to Agile/Scrum cycles to ensure smooth and quality-driven releases.

  4. Software Test Engineer

    Logilite Technologies · Full-time
    Sep 2018 – Sep 2021 · 3 yrs 1 mo

    Executed QA responsibilities including requirement analysis, test planning, test case development, and comprehensive functional and integration testing. Automated key test flows, improving coverage and execution efficiency. Strengthened QA documentation and contributed to process improvements.

My Skills & Tools

Soft Skills

  • Communication

    Clear reporting of test results and defects

  • Attention to Detail

    Identifying subtle issues and edge cases

  • Problem Solving

    Analytical approach to debugging and testing

  • Collaboration

    Working effectively with developers and stakeholders

  • Organization

    Structured approach to test planning and execution

Tools & Technologies

  • BrowserStack

    Cross-browser and cross-platform testing on real devices.

  • Jira

    Bug tracking and test case management

  • Playwright

    Web application automated testing

  • Postman

    API testing and validation

  • Git

    Version control and collaboration

  • Selenium

    Web automation and testing framework

  • JMeter

    Performance and load testing

Skills

  • Manual Testing
    90%
  • Automation Testing (Playwright / Selenium)
    80%
  • API Testing (Postman / Python Requests)
    85%
  • Bug Reporting & Tracking (Jira / Excel)
    85%
  • SQL (Database Validation)
    70%
  • Mobile App Testing (iOS & Android)
    80%
  • Shopify & WordPress Testing
    75%

Projects

Blog

Blog

Drones

One of my biggest passions is drones - not just flying them, but building them from scratch. There's something incredible about assembling the parts, programming the flight controller, and then watching it take off. I enjoy capturing breathtaking aerial footage and experimenting with FPV (First Person View) racing.


Here's a glimpse of my hobby:

  • Custom FPV Drone
  • Custom FPV Drone
  • Custom Pixhawk Drone
  • Custom Pixhawk Drone
  • Custom Pixhawk Drone
  • 3DR Iris+ Autonomous multicopter
  • Mini FPV Drone

Blog

Thailand

Thailand was one of the most unforgettable trips I've ever taken. From the bustling streets of Bangkok to the serene rice paddies in the north and the dreamy beaches in the south, it was a dream destination.

Bangkok – The City That Never Sleeps

Bangkok was my first stop. The Grand Palace was breathtaking, and the floating markets were a unique experience. At night, Khao San Road was buzzing with energy - full of music, street performances, and endless food stalls.

Chiang Mai – The Cultural Heart

In Chiang Mai, I explored ancient temples and villages, went on a jeep tour through breathtaking landscapes, saw elephants, and dove into the bustling Chiang Mai Night Bazaar.

The Islands – Paradise on Earth

Thailand's islands are like something out of a postcard. I visited Phuket, Koh Samui, Koh Phangan, and Koh Phi Phi, where I enjoyed snorkeling, boat trips, and breathtaking sunsets. The beaches were amazing, the water was crystal-clear, and island life was pure relaxation.

Would love to go back one day!


Here are some photos:

  • Bangkok
  • Bangkok
  • Chiang Mai
  • Chiang Mai
  • Koh Phangan
  • Koh Phangan

Blog

Building a Scalable Playwright Automation Framework for Real Projects

As a Senior QA Engineer, I believe automation is more than just writing test scripts—it's about building robust, maintainable engineering solutions that solve real problems. In this post, I'll share how I designed and implemented a production-ready Playwright automation framework that became the foundation for reliable, scalable testing in a live project.

Why This Framework Was Built

When I joined the project, the team was facing several critical challenges. Manual regression testing was taking days, and the existing automation was fragmented—different tools, inconsistent patterns, and no clear strategy. Test execution was slow, flaky tests were common, and maintaining test code was becoming a bottleneck.

The real problems I needed to solve were:

  • Slow test execution – Tests were taking too long, blocking releases
  • Authentication overhead – Every test was logging in fresh, wasting time
  • Fragile selectors – Tests broke with every UI change
  • No API integration – UI tests couldn't leverage backend APIs for setup
  • Environment management – Switching between dev, staging, and production was manual and error-prone

Key Architecture Decisions

I chose Playwright as the core framework because of its reliability, cross-browser support, and excellent API testing capabilities. Combined with TypeScript, it provided type safety and better IDE support, making the codebase more maintainable.

Page Object Model (POM)

I implemented a clean POM architecture where each page or component had its own class. This separation of concerns made tests readable and maintainable. When the UI changed, I only needed to update one file.

Custom Fixtures and Hooks

Playwright's fixture system allowed me to create reusable test contexts. I built custom fixtures for authentication, API clients, and test data management. This eliminated code duplication and ensured consistent test setup across the suite.
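A sketch of what such fixtures can look like with pytest-playwright (the `playwright` and `browser` fixtures are provided by that plugin; the base URL, state path, and fixture names here are assumptions):

```python
# conftest.py -- illustrative custom fixtures; URL and path are assumptions.
import pytest

@pytest.fixture
def api_client(playwright):
    """A reusable API request context for test-data setup and backend checks."""
    ctx = playwright.request.new_context(base_url="https://stage.example.com")
    yield ctx
    ctx.dispose()

@pytest.fixture
def authenticated_page(browser):
    """A page that starts already logged in, using saved storage state."""
    context = browser.new_context(storage_state="auth/state.json")
    page = context.new_page()
    yield page
    context.close()
```

Any test that declares `authenticated_page` or `api_client` as a parameter gets a consistent, ready-to-use context with no duplicated setup code.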

Authentication and Performance Optimization

One of the biggest wins was implementing storage state for authentication. Instead of logging in for every test, I created a setup script that authenticates once and saves the session state. Tests then reuse this state, cutting authentication time from 5-10 seconds per test to milliseconds.

This optimization reduced our test suite execution time by over 40%. Tests that previously took 45 minutes now complete in under 25 minutes, making CI/CD pipelines much faster and more efficient.
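The one-time setup step can be sketched roughly like this (`storage_state` is the real Playwright API; the URLs, selectors, credentials, and the freshness check are assumptions for illustration):

```python
# Sketch of the storage-state approach for authentication reuse.
import os
import time

STATE_PATH = "auth/state.json"

def state_is_fresh(path, max_age_s=3600):
    """Reuse the saved session only if the state file exists and is recent."""
    return os.path.exists(path) and (time.time() - os.path.getmtime(path)) < max_age_s

def save_auth_state(page, path=STATE_PATH):
    """Log in once in a global setup script and persist cookies + localStorage."""
    page.goto("https://stage.example.com/login")
    page.fill("#username", "qa_user")
    page.fill("#password", "secret")
    page.click("button[type=submit]")
    page.context.storage_state(path=path)

# Tests then start from browser.new_context(storage_state=STATE_PATH),
# skipping the UI login entirely.
```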

API + UI Testing Integration

I integrated Playwright's API testing capabilities directly into the framework. Tests could now use API calls to set up test data, verify backend state, and perform actions that would be slow or complex through the UI.

For example, instead of navigating through multiple UI screens to create test data, tests could make API calls to create users, orders, or configurations. This hybrid approach made tests faster, more reliable, and easier to maintain.

Environment Management and CI/CD Readiness

I built a configuration system that manages multiple environments seamlessly. Using environment variables and config files, the framework automatically adapts to dev, staging, or production environments. Test data, API endpoints, and credentials are all environment-aware.
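Such a configuration layer can be as simple as a lookup keyed off an environment variable (hostnames and the variable name are assumptions):

```python
# Environment-aware configuration sketch; hostnames are illustrative.
import os

CONFIGS = {
    "dev":        {"base_url": "https://dev.example.com",     "api_url": "https://api.dev.example.com"},
    "staging":    {"base_url": "https://staging.example.com", "api_url": "https://api.staging.example.com"},
    "production": {"base_url": "https://www.example.com",     "api_url": "https://api.example.com"},
}

def get_config(env=None):
    """Pick the target environment from an env var, defaulting to staging."""
    env = env or os.environ.get("TEST_ENV", "staging")
    if env not in CONFIGS:
        raise ValueError(f"Unknown environment: {env!r}")
    return CONFIGS[env]
```

CI pipelines only need to export `TEST_ENV`; the tests themselves stay unchanged across environments.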

The framework was designed from day one to run in CI/CD pipelines. It includes proper reporting, screenshot capture on failures, video recording for debugging, and integration with test reporting tools. Every test run produces actionable results that help developers fix issues quickly.

Real Impact and Outcomes

The framework I built transformed how the team approached testing:

  • 40% reduction in test execution time – Faster feedback loops for developers
  • 95%+ test stability – Eliminated flaky tests through proper waits and selectors
  • 300+ automated test cases – Comprehensive coverage across critical user flows
  • Zero manual regression cycles – Full automation of smoke and regression suites
  • Seamless CI/CD integration – Tests run automatically on every deployment

The framework became the standard for all new automation work. Other QA engineers could easily contribute new tests following the established patterns, and developers appreciated the fast, reliable feedback during development.

Closing Summary

Building this Playwright automation framework was a complete engineering effort—from architecture design to implementation, optimization, and integration. I owned every aspect: the technical decisions, the code structure, the performance optimizations, and the team adoption.

This project showcases my expertise in test automation architecture, my ability to solve real-world problems, and my commitment to building solutions that scale. The framework continues to evolve, and I'm proud of the impact it has on product quality and team efficiency.

If you're facing similar automation challenges or want to discuss framework design patterns, feel free to reach out. I'm always happy to share insights and learn from other automation engineers.


Here's a glimpse of the framework implementation:

  • Playwright Framework Architecture
  • Playwright Test Implementation
  • Playwright Automation Setup
  • Playwright CI/CD Integration

Blog

Playwright vs Selenium: What QA Engineers Should Really Care About

If you're a QA engineer researching automation tools, you've probably seen countless articles comparing Playwright and Selenium. Most of them focus on technical specifications, feature lists, or performance benchmarks. But as someone who has worked with both tools in real projects, I want to share what actually matters when making this decision.

This isn't about declaring a winner. Both tools are excellent and serve different needs. Instead, let's focus on the practical factors that should guide your decision: speed, stability, learning curve, and team adoption. These are the things that will impact your day-to-day work and long-term success with automation.

Introduction

The Playwright vs Selenium debate has become one of the most discussed topics in QA automation circles. However, much of this discussion misses the point. The real question isn't "Which tool is better?" but rather "Which tool is better for your specific situation?"

I've implemented automation frameworks using both tools, and I've seen teams succeed and struggle with each. The difference often comes down to factors that aren't immediately obvious: how quickly your team can become productive, how stable your tests are in practice, and how well the tool fits into your existing workflow.

Let's cut through the noise and focus on what QA engineers should really care about when choosing between Playwright and Selenium.

Speed

Playwright's Advantage

Playwright is generally faster than Selenium, especially for modern web applications. This speed comes from its architecture—Playwright communicates directly with browser engines through the DevTools Protocol, bypassing the WebDriver protocol that Selenium uses. In practice, this means tests often run 20-30% faster, sometimes more depending on your application.

However, speed isn't just about execution time. Playwright's auto-waiting mechanism means you write less code for waiting and retrying. This reduces test development time and makes tests more reliable. You spend less time debugging timing issues and more time writing meaningful test cases.

Selenium's Reality

Selenium can be fast enough for most projects. If your test suite runs in 30 minutes instead of 20 minutes, that's usually acceptable. The real question is whether the speed difference matters for your CI/CD pipeline and release cycles.

Where Selenium sometimes struggles is with complex modern applications that use heavy JavaScript frameworks. The WebDriver protocol adds overhead, and you may need more explicit waits, which can slow things down. But for traditional web applications or if you're already using Selenium effectively, the speed difference might not justify switching.

The Practical Takeaway

Speed matters, but it's not everything. If you're building a new automation framework from scratch, Playwright's speed advantage is a real benefit. If you have an existing Selenium suite that works well, the speed gain alone probably isn't worth the migration effort. Consider your test execution time, CI/CD constraints, and whether speed is actually a bottleneck in your current workflow.

Stability

Playwright's Built-in Reliability

One of Playwright's strongest advantages is its built-in stability features. Auto-waiting means elements are automatically waited for until they're ready—visible, enabled, and stable. This eliminates a huge class of flaky tests that plague Selenium projects.
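A rough illustration of the difference (the selector is an assumption):

```python
# With Selenium, a stable click usually needs an explicit wait, e.g.:
#
#   WebDriverWait(driver, 10).until(
#       EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type=submit]"))
#   ).click()
#
# Playwright folds that wait into the action itself:
def submit_form(page):
    page.click("button[type=submit]")  # auto-waits: visible, enabled, stable
```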

Playwright also handles network conditions, browser contexts, and multiple tabs more reliably. Its architecture is designed for modern web applications, which means fewer surprises when testing complex UIs. In my experience, teams using Playwright report significantly fewer flaky tests compared to Selenium projects.

Selenium's Maturity

Selenium has been around for over 15 years, and this maturity brings stability of a different kind. The tool is battle-tested across millions of projects, and you'll find solutions to almost any problem you encounter. The community is massive, documentation is extensive, and there are established patterns for handling common issues.

However, Selenium requires more discipline to write stable tests. You need to implement proper wait strategies, handle timing issues carefully, and understand the nuances of the WebDriver protocol. Teams with strong automation practices can write very stable Selenium tests, but it requires more upfront knowledge and careful implementation.

The Practical Takeaway

If you're new to automation or have struggled with flaky tests, Playwright's built-in stability features can be a game-changer. If you have experienced automation engineers who understand Selenium well, you can achieve similar stability—it just requires more careful implementation. Consider your team's experience level and whether stability has been a problem in your current projects.

Learning Curve

Playwright's Modern Approach

Playwright has a steeper initial learning curve, especially if you're coming from Selenium. The API is different, the concepts are different, and you need to understand modern JavaScript/TypeScript practices. However, once you get past the initial learning phase, many engineers find Playwright more intuitive.

The documentation is excellent, and the API is well-designed. Playwright's approach to waiting, network interception, and browser contexts is more aligned with how modern web applications work. If your team is comfortable with modern JavaScript development, the learning curve is manageable.

Selenium's Familiarity

Selenium has a gentler learning curve, especially for teams already familiar with it. The concepts are well-established, and there's a wealth of tutorials, courses, and examples available. If you're hiring, you're more likely to find candidates with Selenium experience.

However, Selenium's learning curve can be deceptive. Writing basic tests is easy, but writing stable, maintainable tests requires understanding WebDriver internals, wait strategies, and common pitfalls. Many teams struggle because they don't invest enough in learning Selenium properly.

The Practical Takeaway

Consider your team's background. If you have JavaScript/TypeScript developers or engineers comfortable with modern web development, Playwright's learning curve is reasonable. If your team is more comfortable with traditional programming languages or has existing Selenium knowledge, sticking with Selenium might be more practical. The key is investing in proper training regardless of which tool you choose.

Team Adoption

Playwright's Momentum

Playwright is gaining rapid adoption, especially in modern development teams. It's backed by Microsoft and has strong community support. If you're building a new automation framework or starting fresh, Playwright offers a modern, well-supported option that many engineers are excited to learn.

However, Playwright is still relatively new compared to Selenium. You might find fewer team members with prior experience, and some organizations prefer the proven track record of Selenium. The ecosystem is growing but not as mature as Selenium's.

Selenium's Ecosystem

Selenium has the largest ecosystem in test automation. You'll find integrations with almost every tool, framework, and service. Hiring is easier because more candidates have Selenium experience. The community is massive, and you can find help for almost any problem.

However, Selenium's age can also be a disadvantage. Some teams find it feels outdated compared to modern tools. The WebDriver protocol, while stable, can feel clunky when working with modern web applications. Teams looking for something fresh and modern might prefer Playwright.

The Practical Takeaway

Team adoption is about more than just the tool—it's about buy-in, enthusiasm, and long-term sustainability. Consider your team's preferences, your hiring strategy, and your organization's culture. If your team is excited about modern tools and willing to learn, Playwright can be a great choice. If you need proven stability and a large talent pool, Selenium remains a solid option.

Conclusion

The Playwright vs Selenium debate doesn't have a universal answer. Both tools are capable, both have their strengths, and both can help you build effective automation frameworks. The right choice depends on your specific context: your team's experience, your application's complexity, your timeline, and your organization's preferences.

If you're starting a new project and your team is comfortable with modern JavaScript development, Playwright offers speed, stability, and a modern approach that's worth considering. If you have an existing Selenium framework that works well, or if your team has strong Selenium expertise, there's no urgent need to switch.

The most important thing is to make an informed decision based on your actual needs, not on hype or marketing. Both tools can help you achieve your automation goals when used properly. Focus on building good automation practices, writing maintainable tests, and investing in your team's skills—these factors matter more than the specific tool you choose.

Remember, the best automation tool is the one your team can use effectively to deliver value. Whether that's Playwright, Selenium, or something else entirely, the tool is just a means to an end. Your expertise, your approach, and your commitment to quality are what truly make the difference.


Blog

Ravarem's First Annual Trip to Dubai

Ravarem's first annual trip to Dubai was the kind of trip that sets the bar way too high for anything that comes next. From the moment we landed, the energy was different. We weren't just a team on a trip — we were a group ready to squeeze every drop out of Dubai and Abu Dhabi.

Exploring the Iconic Landmarks

Our journey took us through some of the most breathtaking landmarks in the UAE. We visited the Dubai Frame, where we captured stunning views of both old and new Dubai. The Burj Khalifa left us in awe as we stood at the world's tallest building, taking in the panoramic cityscape below.

In Abu Dhabi, the Grand Mosque was a spiritual and architectural marvel. The intricate designs, the peaceful atmosphere, and the sheer scale of the structure left a lasting impression on all of us. It was a moment of reflection and appreciation for the culture and craftsmanship.

Adventure and Thrills

The Desert Safari was an absolute highlight. From dune bashing to camel rides, we experienced the raw beauty of the Arabian desert. The evening ended with traditional entertainment, delicious food, and stargazing under the clear desert sky.

At Ferrari World, the adrenaline junkies in our team had the time of their lives. The high-speed rides, the Formula One experience, and the sheer excitement of being in a Ferrari-themed park created memories that we'll talk about for months to come.

Cultural Experiences and Local Flavors

We dove deep into the local culture through shopping and bargaining in traditional markets. The art of negotiation became a team sport, with everyone sharing tips and celebrating each successful deal. It wasn't just about buying souvenirs — it was about the interactions, the laughter, and the shared victories.

Our cafe hopping and food exploration adventures took us through Dubai's diverse culinary scene. From traditional Emirati dishes to international cuisines, every meal became a discovery. We bonded over shared plates, tried new flavors together, and created our own food map of the city.

Professional Connections with a Personal Touch

What made this trip even more special were the partner meetings that felt friendly and personal. These weren't just business interactions — they were genuine connections with people who share our vision. The relaxed setting of Dubai made these conversations flow naturally, strengthening both professional relationships and personal bonds.

The Moments That Made It Special

But what truly made this trip unforgettable wasn't the places we visited — it was the moments between them. The full-day walks that turned into inside jokes, the non-stop laughter from morning till night, and those spontaneous "Bhai kal jaldi nikalna hai" ("Bro, we have to leave early tomorrow") moments that became part of our shared language.

The trip was filled with:

  • Teasing and banter that kept the energy high throughout the day
  • Shared photos that captured not just places, but genuine moments of joy
  • Late-night plans that turned into the next day's adventures
  • Spontaneous decisions that led to some of our best experiences

Reflection on Team Culture

This trip showed us what team culture really means. It's not about the fancy locations or the perfect itinerary — it's about how we support each other, how we laugh together, and how we create memories that go beyond the workplace.

The bonds we strengthened during this trip are the kind that translate directly into better collaboration at work. When you've shared adventures, navigated new cities together, and created inside jokes, the trust and camaraderie naturally extend to how we work on projects and solve problems together.

Dubai and Abu Dhabi: Amazing Destinations

Dubai and Abu Dhabi were absolutely amazing — the architecture, the culture, the food, the energy. But as incredible as these cities were, the real highlight was our team. The way we came together, supported each other, and made the most of every moment is what made this trip truly special.

We returned not just with souvenirs and photos, but with stronger relationships, shared memories, and a renewed sense of what it means to be part of the Ravarem family.

Looking Forward

As we look back on this incredible journey, we're already excited about Trip No. 2. The bar has been set high, but we know that wherever we go next, it's the people that make the experience. Dubai gave us the backdrop, but our team gave us the story.

Here's to more adventures, more laughter, and more moments that turn colleagues into a close-knit team. Until the next trip!


Here's a glimpse of our Dubai adventure:

  • Dubai Trip Photo 1
  • Dubai Trip Photo 2
  • Dubai Trip Photo 3
  • Dubai Trip Photo 4
  • Dubai Trip Photo 5
  • Dubai Trip Photo 6
  • Dubai Trip Photo 7
  • Dubai Trip Photo 8
  • Dubai Trip Photo 9
  • Dubai Trip Photo 10

Projects

Send365 Chrome Extension – Multi-CRM Manual QA (Salesforce, Monday.com, HubSpot)

As a Senior QA Engineer, I am responsible for the complete manual testing of the Send365 Chrome Extension. While the full Send365 web application includes multiple roles such as System Admin, Deployment Admin, Vendor, and Sender, the Chrome Extension is used exclusively by Sender users.

The purpose of the extension is to allow Senders to send physical gifts and eGift cards directly from their CRM workflow without needing to open the Send365 web application. The extension integrates with Salesforce, Monday.com, and HubSpot and automatically detects the active contact or lead, pre-fills recipient details, and connects to Send365 APIs for order creation.

Key Manual QA Responsibilities

  • Validated extension installation, updates, permissions, and Chrome storage behavior.
  • Tested sender-specific workflows for sending Gifts and eGifts from all three CRMs.
  • Verified CRM context detection (Contact → Lead → Account) across Salesforce, Monday.com, and HubSpot.
  • Performed comprehensive UI/UX validation of the Send365 embedded panel (iframe) inside each CRM.
  • Ensured correct catalog loading, personalization options, delivery selections, and order creation steps.
  • Tested multi-environment behavior (Stage / Production) and role-based access restrictions.
  • Identified and reproduced defects related to iframe rendering, CRM DOM variations, and API failures.

Sample Manual Test Scenario – Sending an eGift from HubSpot


Test Case: Verify Sender can send an eGift from HubSpot Contact View

Preconditions:
- Sender user logged into Send365
- Send365 Chrome Extension installed and enabled
- HubSpot contact contains a valid email

Steps:
1. Open HubSpot → Contacts → Select any contact.
2. Allow the Send365 panel to load inside the CRM.
3. Click the "Send eGift" option.
4. Verify:
   - Contact email auto-populates from HubSpot.
   - Gift catalog loads correctly.
   - Sender can choose amount, design, and message.
5. Click "Send eGift".
6. Verify:
   - Confirmation message is displayed.
   - Order is created successfully in Send365 backend.
   - Optional CRM activity logging is updated (if enabled).

Expected Result:
- The full eGift flow works smoothly.
- Contact information syncs properly.
- The order appears in the Sender's Order List on Send365 web.

Similar manual testing is performed on Salesforce and Monday.com to ensure consistent extension behavior, accurate contact detection, and reliable sender workflows across all supported CRMs.

Projects

API Testing Suite – Payment Gateway Integration

In this project, I created a complete API testing suite for a payment gateway integrated into an e-commerce platform. The objective was to validate transaction flows such as payment authorization, capture, refund, and error handling.

I used Postman for exploratory and manual API checks and Python (Requests + Pytest) to build a maintainable, automated API regression suite.

Key responsibilities & highlights:

  • Analyzed API documentation and defined test coverage for all critical endpoints.
  • Created Postman collections for smoke and regression testing.
  • Built automated API tests in Python, validating status codes, response schemas, and business rules.
  • Implemented negative scenarios (invalid tokens, expired cards, insufficient funds).
  • Integrated API tests into CI to execute on every deployment to staging.

Sample Python API Test – Payment Authorization:


# tests/test_payment_authorization.py
import requests
import pytest

BASE_URL = "https://api-stage.payment-gateway.example.com"
API_KEY = "sk_test_dummy_api_key"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

@pytest.mark.api
def test_successful_payment_authorization():
    payload = {
        "amount": 4999,
        "currency": "INR",
        "card_number": "4111111111111111",
        "card_exp_month": "12",
        "card_exp_year": "2028",
        "card_cvv": "123",
        "order_id": "ORDER-12345"
    }

    response = requests.post(f"{BASE_URL}/v1/payments/authorize",
                             json=payload,
                             headers=headers)

    assert response.status_code == 200
    body = response.json()

    assert body["status"] == "authorized"
    assert body["currency"] == "INR"
    assert body["amount"] == 4999
    assert body["order_id"] == "ORDER-12345"
              

Sample Negative Test – Invalid Card:


@pytest.mark.api
def test_payment_authorization_invalid_card_number():
    payload = {
        "amount": 2999,
        "currency": "INR",
        "card_number": "0000000000000000",
        "card_exp_month": "12",
        "card_exp_year": "2028",
        "card_cvv": "123",
        "order_id": "ORDER-INVALID-CARD"
    }

    response = requests.post(f"{BASE_URL}/v1/payments/authorize",
                             json=payload,
                             headers=headers)

    assert response.status_code == 400
    body = response.json()

    assert body["status"] == "declined"
    assert "invalid card number" in body["message"].lower()
              

This project reflects my ability to design robust API test coverage, handle happy paths and edge cases, and support secure, reliable payment flows in production-like environments.
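One detail worth noting about the invalid-card scenario: a client-side checksum such as the Luhn algorithm can pre-filter malformed card numbers before a request is ever sent, but it is not sufficient on its own (an all-zero number actually passes Luhn), so the server-side decline remains the authoritative check. A quick sketch:

```python
# Illustrative sketch: Luhn checksum for client-side card-number sanity
# checks. This only catches transcription errors; note the all-zero
# string passes Luhn, so server-side validation remains essential.

def luhn_valid(card_number: str) -> bool:
    """Return True if card_number passes the Luhn checksum."""
    if not card_number.isdigit():
        return False
    total = 0
    # Walk digits right-to-left; double every second digit, subtracting 9
    # from any doubled value above 9.
    for i, ch in enumerate(reversed(card_number)):
        digit = int(ch)
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

assert luhn_valid("4111111111111111")       # standard Visa test number
assert not luhn_valid("4111111111111112")   # single-digit typo is caught
assert luhn_valid("0000000000000000")       # passes Luhn, yet not a real card
```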

Projects

Shopify QA – Tumble Living eCommerce Store

I worked as a QA Engineer on Tumble Living, a live Shopify-based eCommerce store: https://www.tumbleliving.com/.

My role covered the storefront front end and the Shopify admin backend, including custom themes and multiple apps/plugins. I ensured a smooth buying experience, correct product configuration, and proper behavior of installed plugins.

Key responsibilities & coverage:

  • Tested homepage, collection pages, product detail pages, cart, and checkout flows.
  • Verified responsive behavior across desktop and mobile using BrowserStack.
  • Validated discount codes, shipping options, and tax calculations in different scenarios.
  • Tested integration of Shopify apps (reviews, recommendations, analytics, marketing popups).
  • Checked inventory updates, order status, refunds, and email notifications from the admin panel.

Example Test Scenario – Add to Cart & Checkout:


Test Case: Guest user can purchase a product successfully

1. Open https://www.tumbleliving.com/
2. Navigate to a collection (e.g., "Beddings").
3. Open a product detail page.
4. Verify:
   - Product title, price, images, and variants are displayed.
   - "Add to Cart" button is enabled.
5. Select a variant (size/color) if applicable and click "Add to Cart".
6. Open the Cart page and verify:
   - Correct product, quantity, price, subtotal, and estimated shipping/taxes (if shown).
7. Click "Checkout" and proceed as guest.
8. Enter valid shipping details and continue.
9. Select a shipping method and continue to payment.
10. Verify:
    - Order summary is correct.
    - Total includes products + shipping + taxes (if applicable).
11. Complete payment using a test card (in staging) or the real flow (in a production sandbox).
12. Confirm:
    - Thank you / order confirmation page is shown.
    - Order appears in Shopify admin with correct status and line items.
              
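The total verification in step 10 boils down to a simple calculation. This is an illustrative sketch (line items, shipping cost, and tax rate are hypothetical example values, and Shopify computes taxes server-side), using Decimal to avoid float rounding surprises:

```python
from decimal import Decimal, ROUND_HALF_UP

# Illustrative sketch: verify cart math as in step 10 of the scenario
# above. Line items, shipping, and tax rate are hypothetical values.

def expected_total(line_items, shipping, tax_rate):
    """Subtotal + shipping + tax, with tax rounded to 2 decimal places."""
    subtotal = sum(Decimal(price) * qty for price, qty in line_items)
    tax = (subtotal * Decimal(tax_rate)).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)
    return subtotal + Decimal(shipping) + tax

items = [("89.00", 1), ("24.50", 2)]          # e.g. duvet cover + 2 pillowcases
total = expected_total(items, shipping="9.99", tax_rate="0.18")
# subtotal 138.00 + tax 24.84 + shipping 9.99
assert total == Decimal("172.83")
```

Comparing a value computed this way against the checkout's displayed total makes the tax/shipping scenarios in the responsibilities above repeatable rather than eyeballed.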

This project demonstrates my experience with real production Shopify stores, including UI/UX validation, functional flows, cross-browser testing, and admin/back-office verification.

Projects

SaaS Platform – Full QA Documentation (STP, STD, STR)

I created a complete QA documentation package for a multi-module SaaS web platform, including the STP (Software Test Plan), STD (Software Test Description: test design and detailed test cases), and STR (Software Test Report covering execution results). The documentation follows industry-standard formats used in tools like Jira and TestRail.

Documentation Covered

  • Feature-level Test Plans with clear scope & objectives
  • End-to-end functional & regression scenarios
  • Detailed step-by-step test cases with priorities and risk tags
  • Entry/Exit criteria, test strategy, timelines, and responsibilities
  • Execution reports mapped to defects and release decisions

Sample Test Case (Excerpt)


Test Case ID: TC_LOGIN_012
Module: Authentication
Title: Verify login using valid email & password

Preconditions:
- User exists in DB
- Email is verified

Steps:
1. Open the login page
2. Enter valid email
3. Enter valid password
4. Click on the "Login" button

Expected Result:
- User is redirected to Dashboard
- Valid JWT/session token is created
- User name appears in header

Status: Passed
              

These QA assets were used as a single source of truth across QA, Development, and Product teams to standardize testing, improve traceability, and support audit and release readiness.
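Traceability between test cases, defects, and release decisions can also be modeled as structured data. A hedged sketch (the record fields and IDs are illustrative, loosely following the TC_LOGIN_012 excerpt above):

```python
from dataclasses import dataclass, field

# Illustrative sketch: minimal traceability records linking test cases to
# defects, as an STR might aggregate them. IDs and fields are examples.

@dataclass
class TestCaseResult:
    case_id: str
    module: str
    status: str                        # "Passed" / "Failed" / "Blocked"
    linked_defects: list = field(default_factory=list)

def pass_rate(results):
    """Fraction of executed (non-blocked) cases that passed, for the STR summary."""
    executed = [r for r in results if r.status != "Blocked"]
    if not executed:
        return 0.0
    return sum(r.status == "Passed" for r in executed) / len(executed)

run = [
    TestCaseResult("TC_LOGIN_012", "Authentication", "Passed"),
    TestCaseResult("TC_LOGIN_015", "Authentication", "Failed", ["BUG-231"]),
    TestCaseResult("TC_DASH_003", "Dashboard", "Blocked"),
]
assert pass_rate(run) == 0.5
assert [d for r in run for d in r.linked_defects] == ["BUG-231"]
```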

Projects

Performance Testing – JMeter for API Load Benchmark

I performed performance and load testing using Apache JMeter for a high-traffic API platform handling payment transactions and order processing. The goal was to identify performance bottlenecks and validate SLAs under realistic load.

Key Performance Tests Executed

  • Load Test (scaling from 1k to 10k concurrent virtual users)
  • Stress Test (finding the system breaking point)
  • Spike Test (sudden traffic bursts to simulate campaigns)
  • Endurance Test (long-running sessions to detect memory leaks)

Sample JMeter Thread Group Configuration


Thread Group:
- Users: 5000
- Ramp-Up Period: 120 seconds
- Loop Count: 5

Additional Config:
- HTTP Header Manager for auth tokens
- CSV Data Set Config for dynamic test data
- Think Time set using Uniform Random Timer
              

Sample API Request Payload (used in JMeter HTTP Request)


POST /api/v1/payments/authorize
Content-Type: application/json

{
  "amount": 2499,
  "currency": "USD",
  "card_number": "${cardNumber}",
  "expiry": "${expiry}",
  "cvv": "${cvv}",
  "order_id": "${orderId}"
}
              

Results were analyzed using:

  • Aggregate Report (avg, min, max, 95th, 99th percentile)
  • Response Time Over Time & Throughput graphs
  • Error % and failed sampler analysis
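The percentile figures in the Aggregate Report can be reproduced from raw sampler data. As a sketch (the sample latencies are invented, and JMeter's exact percentile interpolation may differ slightly), a nearest-rank percentile over response times looks like:

```python
import math

# Illustrative sketch: nearest-rank percentiles over response times (ms),
# similar to JMeter's Aggregate Report columns. Sample data is made up;
# JMeter's exact interpolation method may differ slightly.

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of response times."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

response_times_ms = [120, 135, 140, 160, 180, 210, 250, 320, 480, 950]
assert percentile(response_times_ms, 50) == 180
assert percentile(response_times_ms, 90) == 480
assert percentile(response_times_ms, 95) == 950
assert sum(response_times_ms) / len(response_times_ms) == 294.5  # avg
```

Spot-checking the listener's numbers this way (from the JTL results file) is a useful sanity check before reporting SLA breaches.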

Based on the findings, we optimized DB indexes, tuned API timeouts, and added caching layers, which significantly improved response times and stability under peak load.

Projects

React Native Mobile App – Cross-Platform Manual QA (iOS & Android)

I worked as the primary QA for a React Native mobile application released on both iOS and Android. The app included authentication, dashboard analytics, push notifications, deep links, and in-app settings. My focus was on manual testing across real devices and simulators/emulators.

Key Testing Responsibilities

  • Created end-to-end test scenarios for core flows (onboarding, login, profile, notifications).
  • Executed functional, UI/UX, regression, and smoke testing on multiple OS versions.
  • Validated behavior on different screen sizes, orientations, and device types.
  • Verified push notifications, deep links, and app background/foreground state transitions.
  • Logged detailed defects with screenshots, videos, and clear reproduction steps in Jira.

Device & OS Coverage (Example)


iOS:
- iPhone 13 (iOS 17.x)
- iPhone 12 (iOS 16.x)
- iPad (latest iPadOS, landscape & portrait)

Android:
- Google Pixel (Android 14)
- Samsung Galaxy A series (Android 13)
- OnePlus device (Android 12)
              

Sample Test Scenario – Push Notifications


Test Case: Verify push notification open behavior

1. Log in with a valid user on a real device.
2. Put the app in the background.
3. Trigger a push notification from test backend / Firebase console.
4. Verify:
   - Notification appears in notification tray with correct title & message.
5. Tap on the notification.
6. Expected:
   - App opens from background.
   - User is navigated to the correct detail screen.
   - No duplicate screens are stacked in the navigation history.
              

This project demonstrates my ability to handle real-device mobile testing for cross-platform React Native apps, identify platform-specific issues, and collaborate with developers to verify fixes and ensure a smooth release on both App Store and Play Store.
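The "no duplicate screens" expectation in the push-notification scenario above can also be checked mechanically when the test harness can read the navigation state. A hypothetical sketch (screen names and the stack-inspection idea are illustrative; in practice the stack came from the app's debug output):

```python
# Illustrative sketch: detect duplicate screens in a navigation stack, as
# in step 6 of the push-notification scenario. Screen names are
# hypothetical; a real stack would come from the app's debug tooling.

def find_duplicates(nav_stack):
    """Return screens that appear more than once in the stack, in order."""
    seen, dupes = set(), []
    for screen in nav_stack:
        if screen in seen and screen not in dupes:
            dupes.append(screen)
        seen.add(screen)
    return dupes

# Expected: tapping the notification deep-links to a single detail screen.
good_stack = ["Home", "NotificationDetail"]
assert find_duplicates(good_stack) == []

# Defect: re-opening from the tray stacked a second detail screen.
bad_stack = ["Home", "NotificationDetail", "NotificationDetail"]
assert find_duplicates(bad_stack) == ["NotificationDetail"]
```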