This week’s AI Bite: How AI accelerated end-to-end test automation in a project - a QA engineer’s perspective

In this article, I’ll walk you through my experience introducing AI into end-to-end test automation at TeamAlert - how it helped me learn Playwright faster, improve test coverage, and speed up our testing process.

Weekly AI Bites is a series that gives you direct access to what’s happening in our day-to-day AI work. Every post comes straight from our team’s meetings and Slack, sharing insights, tests, and experiences we’re actively applying to real projects.

What models are we testing, what challenges are we tackling, and what’s really working in products? You’ll find all of this in our bites. Want to know what’s buzzing in AI? Check out Boldare’s channels every Monday for the latest weekly AI bite.

The initial challenge

When I joined the TeamAlert team as a QA Engineer and took over end-to-end (E2E) test automation from the previous engineer, one of the first challenges was understanding the existing structure and framework. The tests at TeamAlert were written in Playwright, while my previous experience was mainly with Cypress, so my knowledge of Playwright was fairly basic and came mostly from online courses.

First steps with AI

That’s when an opportunity arose to try something new. Milena Cylińska (specializing in Playwright, Selenium, CI/CD, scalable test architecture, and AI in QA) showed how she had implemented Playwright MCP + Copilot in her project - an AI-assisted setup for generating tests. We decided to try it in our team as well. After a few short meetings, we managed to set everything up, and the results were visible immediately.
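For context, wiring Playwright MCP into Copilot mostly comes down to registering the MCP server with your editor. Here is a minimal sketch of a VS Code `.vscode/mcp.json`, assuming the `@playwright/mcp` package; the exact file location and keys depend on your editor and Copilot version:

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once the server is registered, Copilot's agent mode can drive a real browser through MCP - navigating the app, inspecting the page, and producing Playwright test code from what it observes.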

The pace of creating new tests increased significantly – repetitive elements were automated, and new AI-generated tests were consistent with the existing ones. The most valuable part for me was learning Playwright on a “live project,” without analyzing every line of code or documentation, simply writing new tests and seeing the results instantly. Additionally, the AI analyzed our repository and pointed out areas that were insufficiently covered by tests, which we had previously overlooked.
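To give a flavour of the output: the AI-generated specs read like ordinary, idiomatic Playwright tests, which is what kept them consistent with our existing suite. A minimal sketch of such a spec - the URL, labels, and roles here are hypothetical, not TeamAlert's actual selectors:

```typescript
import { test, expect } from '@playwright/test';

// Illustrative login-flow spec in the style the AI generated for us.
// All locators and the URL are placeholders, not real application values.
test('user can log in and see the dashboard', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('qa@example.com');
  await page.getByLabel('Password').fill('example-password');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(
    page.getByRole('heading', { name: 'Dashboard' }),
  ).toBeVisible();
});
```

Because the generated tests leaned on role- and label-based locators rather than brittle CSS selectors, reviewing them was fast - which is exactly where the human supervision described below comes in.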

Of course, AI sometimes makes mistakes - it can “hallucinate” or suggest solutions that don’t work. That’s why I constantly supervise it, reviewing its output and refining instructions and prompts to make it as useful as possible.

Results and takeaways

Creating tests now takes roughly half the time compared to writing them from scratch. The greatest value, however, is the ability to quickly get up to speed with Playwright and learn through practice – without needing to study all the code or documentation in detail. AI in end-to-end testing at TeamAlert not only speeds up the process but also helps maintain consistency, detect gaps, and allows the team to focus on more important, creative tasks. Combining human expertise with AI capabilities makes the work faster, smarter, and more effective.