Quality Assurance (QA) is one of the most misunderstood roles in game development.
For many beginners, QA is seen as a final checkpoint, a phase where testers “find bugs before launch.” In reality, modern game QA plays a far more strategic role. It sits at the intersection of decision-making, risk management, production timelines, and player experience, especially on large-scale AAA titles.
To understand how QA really functions behind the scenes, we spoke with Harjeet Singh Jite, one of Game Insider’s mentors and a seasoned QA professional who has worked on major titles at Codemasters, Ubisoft, and Electronic Arts, leading QA teams across regions.
From handling milestone-blocking issues under intense pressure to mentoring junior testers, navigating global QA workflows, and adapting to AI-driven testing, this conversation offers a grounded look at how quality is built — not inspected — in modern game development.
Q: During your time working on large racing titles at Codemasters, were there moments when QA raised serious concerns close to a milestone? How did you approach those situations?
Late-stage development pressure is inevitable. Issues that initially appear minor, such as a save corruption triggered by a specific race condition or a critical online desync, can suddenly reveal catastrophic scope just before Gold Master submission.
Harjeet explains that his first step was always personal validation. Before escalating, the issue had to be 100% reproducible. Once confirmed, the focus shifted immediately to understanding its impact and scope.
Escalation was never emotional — it was data-driven.
Discussions with leadership were supported by clear, concise information:
- Exact reproduction steps
- Risk and probability of player impact
- Systems affected, such as save data, leaderboards, or multiplayer
Strong QA doesn’t stop at identifying problems. Harjeet worked closely with engineering leads to estimate fix complexity and presented multiple solution paths, including:
- An immediate fix with higher risk
- A temporary workaround or hotfix
- Delaying the milestone to ensure stability
Throughout this process, his role was to remain the calm, objective voice in the room. Panic ruins trust. Clear facts and structured options preserve QA credibility during high-pressure moments.
Q: You’ve mentored QA testers across multiple levels. What mistakes do juniors commonly make, and how do you help them grow?
Most early mistakes, Harjeet notes, come from lack of context rather than lack of ability.
One of the most common issues is vague or incomplete bug reporting. Juniors often describe what happened but miss key details such as context, reproduction steps, or severity. Harjeet addresses this by teaching the “Five Ws of Bug Reporting”:
- What happened
- Where it happened
- When it occurred
- Why it may have occurred (suspected cause)
- Who it affects (build, user state, environment)
He emphasizes that a bug report is essentially a sales pitch for a fix.
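The Five Ws lend themselves to a simple template. The sketch below is illustrative only — the field names, render format, and the sample bug are invented for this article, not a studio standard:

```python
from dataclasses import dataclass

# Hypothetical sketch: a bug report structured around the "Five Ws".
# Fields and formatting are illustrative, not a real studio template.
@dataclass
class BugReport:
    what: str   # observed behaviour
    where: str  # level, menu, or system
    when: str   # build and trigger timing
    why: str    # suspected cause
    who: str    # affected build, user state, environment

    def render(self) -> str:
        """Format the report as a short, scannable summary."""
        return "\n".join([
            f"WHAT:  {self.what}",
            f"WHERE: {self.where}",
            f"WHEN:  {self.when}",
            f"WHY:   {self.why}",
            f"WHO:   {self.who}",
        ])

report = BugReport(
    what="Save file corrupts after quitting mid-race",
    where="Career mode, post-race save flow",
    when="Build 1.4.2, quit pressed during autosave",
    why="Suspected save write interrupted by scene unload",
    who="All platforms, players with cloud saves enabled",
)
print(report.render())
```

Forcing every report through a structure like this is what turns "the game broke" into a pitch an engineering lead can act on.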
Another frequent mistake is testing only the “happy path.” This often stems from fear of breaking the game or a limited understanding of system interactions. To counter this, Harjeet introduces exploratory testing as a formal practice, challenging juniors to intentionally break systems in unusual ways. This helps them think like both players and attackers.
Many juniors also fail to ask “why.” They log bugs without understanding which underlying system is failing: physics, networking, UI, or design logic. Harjeet pushes system-level thinking by encouraging testers to identify what a bug reveals about the system itself.
Prioritization is another key learning curve. Juniors often log large volumes of low-impact issues. Harjeet teaches the distinction between:
- Severity — technical impact
- Priority — player and business impact
Through practical exercises, testers learn to justify delaying cosmetic issues in favor of fixing lower-severity problems with higher player impact.
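One way to make the severity/priority split concrete is a tiny triage rule. The scales and thresholds below are assumptions for demonstration, not a shipped process:

```python
# Illustrative sketch of severity vs. priority triage. The 1-5 scales
# and the bucket thresholds are invented for demonstration.
def triage(severity: int, player_impact: int) -> str:
    """severity: technical impact, 1-5. player_impact: share of players hit, 1-5."""
    # Priority tracks player/business impact, not raw technical severity.
    priority = player_impact
    if priority >= 4:
        return "fix-now"
    if severity >= 4 and priority >= 2:
        return "next-sprint"
    return "backlog"

# A low-severity UI glitch every player sees outranks a rare hard crash.
print(triage(severity=2, player_impact=5))  # fix-now
print(triage(severity=5, player_impact=1))  # backlog
```

Even a toy rule like this forces the conversation Harjeet describes: why is this bug in that bucket, and who does it actually hurt?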
Q: You’ve worked with QA teams in both India and the UK. What differences did you observe?
The differences Harjeet observed were organizational and cultural, not related to talent.
In the UK, QA teams were deeply integrated with development. The focus leaned toward exploratory and system-level testing, with testers acting as advocates for design quality. Communication was informal, with frequent face-to-face discussions and rapid feedback loops.
In India, QA teams were often structured around high-volume regression testing and checklist-based execution. Communication was more formal, relying heavily on written documentation and structured meetings, with clear expectations around test cases and direction.
The most effective model combined both approaches. Offshore teams handled exhaustive regression testing, while co-located teams focused on investigative testing and direct developer collaboration.
To maintain consistency, Harjeet enforced written, asynchronous communication for all critical bugs and decisions. The QA Lead played a vital role as a knowledge bridge, translating high-level strategy into clear, actionable instructions across regions.
Q: From your early days as a tester to now leading teams, what has evolved the most in QA?
The most significant shift has been the move from end-of-cycle QA to Continuous Quality Engineering.
In older models, QA acted as a bottleneck. Builds were handed over late, and critical issues were discovered too close to release, leading to crunch and instability.
Today, QA is involved from day one. Testers participate in design discussions, sprint planning, and the creation of testable requirements. Features are tested within hours of development, not weeks later.
This shift was driven by live-service models and agile workflows, where late discovery is no longer viable. Quality is no longer QA’s responsibility alone — it’s shared across the entire team.
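In practice, "tested within hours" usually means a small automated smoke suite run against every fresh build. The sketch below is a hedged illustration — the `build` dict and the three checks are stand-ins for real build hooks:

```python
# Hedged sketch of a day-one smoke suite: tiny checks run against every
# fresh build so regressions surface within hours, not weeks.
# The `build` dict and the checks are illustrative stand-ins.
def run_smoke_suite(build, checks):
    """Run each named check against the build; return names of failures."""
    failures = []
    for name, check in checks:
        try:
            if not check(build):
                failures.append(name)
        except Exception:
            # A crashing check counts as a failure, not a skipped test.
            failures.append(name)
    return failures

build = {"boots": True, "main_menu": True, "first_race_loads": True}

checks = [
    ("boots",            lambda b: b["boots"]),
    ("main_menu",        lambda b: b["main_menu"]),
    ("first_race_loads", lambda b: b["first_race_loads"]),
]

failures = run_smoke_suite(build, checks)
print("PASS" if not failures else f"FAIL: {failures}")
```

Wired into a build pipeline, a suite like this is what lets QA say "this build is safe to test deeper" on the same day it was made.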
Q: How is AI changing QA roles, and what should young testers focus on to stay relevant?
AI, Harjeet explains, is fundamentally an efficiency tool, not a replacement for human judgment.
AI will eliminate much of the repetitive regression testing. It excels at verification — checking whether the code works as specified. This frees human testers to focus on validation: determining whether the experience is actually right for players.
For aspiring QA professionals, this means developing:
- Automation fluency (Python, C#, scripting)
- The ability to manage and audit AI-driven tools
- Strong experiential testing skills — flow, fun, accessibility, and narrative coherence
Another critical area is data analytics. AI produces large volumes of test data, and QA professionals who can interpret that data to assess risk and guide decisions will be highly valuable.
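The analytics skill described above often starts with summarising raw run logs into a risk signal. In this sketch, the sample results and the 20% flakiness threshold are invented for illustration:

```python
from collections import defaultdict

# Sketch of test-data analytics: reduce a run log to a flakiness signal.
# The sample data and the 20% threshold are invented for illustration.
def flaky_tests(results, threshold=0.2):
    """results: (test_name, passed) pairs across many runs.
    Return tests that fail intermittently at or above the threshold --
    i.e. sometimes pass, sometimes fail. Always-failing tests are
    broken, not flaky, and are excluded."""
    runs = defaultdict(lambda: [0, 0])  # name -> [failures, total runs]
    for name, passed in results:
        runs[name][1] += 1
        if not passed:
            runs[name][0] += 1
    return sorted(
        name for name, (fails, total) in runs.items()
        if 0 < fails < total and fails / total >= threshold
    )

results = [
    ("net_sync", True), ("net_sync", False), ("net_sync", True),
    ("save_load", True), ("save_load", True),
    ("ui_boot", False), ("ui_boot", False),
]
# net_sync fails intermittently; ui_boot always fails; save_load is stable.
print(flaky_tests(results))  # ['net_sync']
```

Separating the flaky from the broken is exactly the kind of judgment call that raw AI-generated test output leaves to a human.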
The future QA professional is not just a tester, but a quality strategist and player advocate, supported by automation.
Modern game QA is no longer about finding bugs at the end.
It’s about preventing risk, shaping decisions, mentoring teams, and protecting player experience throughout development.
We hope you enjoyed this conversation with Harjeet Singh Jite. Keep following the GI Blog as we bring more expert-driven conversations around Game Development, Esports, and Streaming.
If you’re a beginner or early-career QA professional looking to scale your career, keep an eye out for Harjeet Singh Jite’s upcoming Specialized QA Course on Game Insider World, where these real-world practices will be explored in depth.
