Explore Code Coverage with AI: Find Risky Code in Minutes
Online
14:00 Jun 1, 2026 (UTC+3)
Most software teams measure code coverage, but few use it to make decisions.
Do you know what hides behind a code coverage percentage? A percentage tells you that tests ran; it does not tell you which functions are dangerously undertested, which changes introduced unverified code, or where one new test would reduce the most risk before you deploy.
This session is for teams building desktop applications, embedded systems, or connected devices who want to move from coverage gut feel to a ranked list of their highest-risk functions, and to learn how to generate that list automatically from their own codebase every sprint.
In this session, Marius Schmidt, Solutions Engineer at Qt Group with over 10 years of experience in code coverage, testing, and quality assurance, will walk you through the full picture on real code, with live demos and Q&A built in throughout.
Who Should Attend
- QA leads and test managers who want to bring a ranked list to sprint or verification planning rather than a percentage
- Senior developers and embedded engineers who want an objective answer to where the next test should go, especially where hardware test execution time makes every run count
- Engineering managers and CTOs who want release decisions backed by evidence, not instinct
Relevant whether your team writes C, C++, C#, or QML for desktop, microcontroller, or connected device targets.
You do not need to be using Coco code coverage already. You only need to recognise the gap between what your coverage report says and what your planning meetings actually need to hear.
What We Cover
What code coverage actually means. Line, branch, and MC/DC coverage explained, where most tools stop short, and why that gap matters more than teams realise.
The CRAP score. A metric that combines cyclomatic complexity with coverage gap to rank every function by risk. The result is a sorted list your team can act on in sprint planning without debate.
Why MC/DC changes what coverage means. Coco supports MC/DC, the condition-level standard required in safety-critical software. You will see the difference live on a real codebase, and why branch coverage alone can miss risk entirely.
AI-assisted test generation. How to feed your Coco reports into an AI assistant to generate the tests you are still missing, and how to verify the coverage metric actually improved afterwards.
A live demonstration on real code. Every topic includes a live demo in Coco on a real codebase, with Q&A built in throughout so your questions get answered in context.
After the Session: Your First Ranked Testing List
Instrument one component against your existing build and tests and get real CRAP scores on your own code the same day, whether that component runs on a desktop machine or a cross-compiled embedded target. Connect to CI/CD and the ranked list updates automatically on every build.
Supported languages: C, C++, C#, QML.
Compatible AI tools: GitHub Copilot, Cursor, Claude Code, Mistral AI.
Bring Two People
Bring the person who owns testing priorities and the person who owns your build pipeline or CI/CD configuration. Watching the session together means you can act on what you saw the same day, with no handoff meeting required.
Can’t Make It Live?
Register anyway and the recording will be shared with all registrants after the session.
Location
Online
Starts
14:00 Jun 1, 2026 (UTC+3)
Ends
14:40 Jun 1, 2026 (UTC+3)
Type
Live Webinars
Cost
Free
Language
English
Email
info@qt.io