
"It’s a Copilot, Not a Pilot”: How to Use GenAI Responsibly in Software Quality Engineering

Felix Kortmann – CTO at Ignite by FORVIA HELLA

Follow Felix on LinkedIn

  • Role: CTO at Ignite by FORVIA HELLA, a subsidiary of the global automotive supplier FORVIA HELLA.
  • Focus Areas: Overseeing development across a broad software and hardware portfolio, including automotive software and high-integrity systems.
  • Perspective: Stresses disciplined use of AI ("Copilot, not Pilot") in complex, safety-critical environments. Strong emphasis on integration into robust toolchains and shift-left quality measures.

“We should treat it like a copilot. The name is Copilot, not pilot.”

AI is advancing fast. But in quality-critical industries like automotive and embedded systems, it can’t just move fast. It has to move responsibly. My experience with AI began in academic research, working on computer vision for road damage detection and autonomous driving systems. That’s where I learned something important:

It’s not about what AI can do; it’s about what we let it do, and how we guide it.

In both my past role and my current role as CTO at Ignite by FORVIA HELLA, I’ve focused on building strong foundations: good tooling, strong review practices, and consistent processes. That’s why, when it comes to GenAI, I don’t chase the hype. I prioritize responsible adoption that scales across engineering teams and enhances software quality, not just speed.

Early Adoption of GitHub Copilot in QA Workflows

We were among the earliest to explore GitHub Copilot for automotive-grade and embedded software development. Used wisely, tools like this can significantly boost productivity. But they can also generate incorrect code faster than ever unless engineers know how to prompt, interpret, and critically review outputs.

That’s why I constantly remind our teams:

“The name is Copilot. Not pilot. Take suggestions, but use your brain. Review holistically.”

We later extended support to GitLab’s tooling and trained around 3,000 engineers at Ignite by FORVIA HELLA on how to use GenAI not only for coding but also for thinking more clearly about requirements, test coverage, and even naming conventions.

Where GenAI Fits in Software QA

One of the most promising areas for GenAI is requirements engineering. In automotive software, a typical project might start with 5,000 customer requirements and balloon to 10,000+ software requirements. Manually detecting inconsistency or redundancy is a nightmare.
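
To make this concrete, here is a minimal sketch of what such a pre-screen could look like. TF-IDF similarity stands in for an LLM-based comparison, and the requirement IDs, texts, and threshold are invented for illustration; this is not our production tooling.

```python
# Minimal sketch: flag requirement pairs that look redundant so a human
# reviewer (or a follow-up LLM prompt) only inspects the suspicious ones.
# Requirement IDs, texts, and the 0.5 threshold are illustrative only.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = {
    "SW-REQ-0102": "The system shall dim the headlights when oncoming traffic is detected.",
    "SW-REQ-0876": "The system shall dim the headlights automatically when an oncoming vehicle is detected.",
    "SW-REQ-1431": "The system shall log every headlight state change with a timestamp.",
}

ids = list(requirements)
matrix = TfidfVectorizer().fit_transform(list(requirements.values()))
scores = cosine_similarity(matrix)

# Report pairs above a similarity threshold as redundancy candidates.
for i, j in combinations(range(len(ids)), 2):
    if scores[i, j] > 0.5:
        print(f"Possible overlap: {ids[i]} <-> {ids[j]} (similarity {scores[i, j]:.2f})")
```

On a real project, you would point something like this (or an LLM with a well-scoped prompt) at the full requirement set and let humans adjudicate the flagged pairs.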

The earlier you catch these things, the better. Errors caught at the Integrated Development Environment (IDE) level are 10x cheaper than those found in QA. That’s why I advocate for integrated tooling: linters (static code analysis tools that automatically check source code for errors, stylistic issues, and potential bugs), MISRA compliance checkers, and AI-powered suggestions right inside the IDE. However, this doesn’t mean GenAI replaces tools like static code analyzers or Software Composition Analysis (SCA) platforms. GenAI complements these tools by operating where traditional rules-based engines may fall short, especially in handling unstructured inputs like requirement documents or freeform code comments.
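
As a small illustration of that shift-left mindset, here is a sketch of a pre-commit check that runs a static analyzer over staged C/C++ files before anything, AI-suggested or not, makes it into the repository. It assumes cppcheck is installed and on the PATH; the hook logic is illustrative, not our actual toolchain.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: run a static analyzer over staged C/C++
files so issues surface on the developer's machine, not in downstream QA.
Assumes cppcheck is installed and on the PATH."""
import subprocess
import sys


def staged_c_files() -> list[str]:
    """Return the C/C++ files staged for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [f for f in out if f.endswith((".c", ".h", ".cpp", ".hpp"))]


def main() -> int:
    files = staged_c_files()
    if not files:
        return 0
    # --error-exitcode=1 makes cppcheck fail the commit on any finding.
    result = subprocess.run(
        ["cppcheck", "--enable=warning,style", "--error-exitcode=1", *files]
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```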

From Guidelines to Guardrails

Tooling alone isn’t enough. Engineers need guidelines to understand how to use GenAI responsibly. During our rollout, one common question from leadership was: “How do we ensure high-quality code?”

Our answer: give developers prompting techniques, unit test suggestion guides, and clear boundaries. We don’t let Copilot write our unit tests. We let it suggest, and then we decide.

In practice, that behavioral enablement includes:

  • Training developers in prompt engineering

  • Establishing clear GenAI usage boundaries

  • Encouraging AI-assisted unit test suggestions without letting AI write tests unsupervised (see the sketch after this list)
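
To show what “suggest, then decide” can look like, here is a small, invented example: a plausible AI-suggested happy-path test, followed by the boundary cases a reviewing engineer adds before accepting it. The clamp_duty_cycle helper and the tests are hypothetical, not code from our projects.

```python
# Hypothetical example of "suggest, then decide". The function and tests
# are invented for illustration; they are not from a real codebase.
import pytest


def clamp_duty_cycle(value: float) -> float:
    """Clamp a PWM duty cycle to the valid range [0.0, 1.0]."""
    return max(0.0, min(1.0, value))


# AI-suggested happy-path test, accepted after review.
def test_duty_cycle_within_range_is_unchanged():
    assert clamp_duty_cycle(0.42) == 0.42


# Boundary cases the reviewing engineer added because the suggestion missed them.
@pytest.mark.parametrize("raw, expected", [
    (-0.1, 0.0),  # below range clamps to 0
    (1.5, 1.0),   # above range clamps to 1
    (0.0, 0.0),   # boundaries stay put
    (1.0, 1.0),
])
def test_duty_cycle_is_clamped_to_bounds(raw, expected):
    assert clamp_duty_cycle(raw) == expected
```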

As I often say:

“If you know exactly what you want, a great prompt will get you there fast. If you’re unsure, AI can help you explore, but the judgment still belongs to you.”

What’s Next: The Future of GenAI in QA

Looking ahead, I’m excited about GenAI’s role in automated architecture reviews, inline documentation generation, and eventually release document analysis, where AI might pre-read 700-page PDFs and flag quality concerns before a human reviewer ever opens them.

But we’re not there yet.

There’s still no mature way to verify the quality of AI-generated output. We need better mechanisms to assess when AI is truly adding value.

Until then, my advice is simple: focus on making every developer a quality engineer. Equip them with the right mix of traditional tools, AI augmentation, and domain knowledge so GenAI becomes an accelerator, not a shortcut.

TL;DR for QA Leads and Dev Teams

  • Use Copilot-style tools for exploration, not final code production.

  • Integrate GenAI into your GUI test automation, not just your IDE.

  • Automate requirement analysis to reduce downstream QA debt.

  • Don’t replace your static analysis or SCA tooling. Augment it.

  • Build guardrails before you scale AI.

---

This blog is part of a three-part series authored by leading QA and Test Automation experts: Peter Schneider – Principal, Product Management at Qt Group, Maaret Pyhäjärvi – Director of Consulting at CGI, and Felix Kortmann – CTO at Ignite by FORVIA HELLA. Together, they bring a wealth of experience and unique perspectives on modern testing strategies, automation frameworks, and the role of quality assurance in software development.

Subscribe to the “QA Orbit” to be the first to read upcoming blogs.
