To make the latest AI capabilities easier to access, we have updated the pre-configured LLMs to newer variants.
GPT 5.3-Codex
GPT 5.3-Codex represents a significant leap forward in unaided QML coding. The latest OpenAI LLM for software engineering achieves a 75% success rate in the QML100 benchmark. That compares to 64% for GPT 5.2-Codex and 58% for GPT 5.1.
OpenAI is making significant progress in QML coding. This latest LLM version makes GPT models a good option for Qt software development again, although the Gemini 3 models still hold the leadership title.

Neither GPT 5.3-Codex nor its predecessors are great at code completion. This capability should be considered experimental at best.
Claude Sonnet 4.6
Claude Sonnet 4.6 is a mixed bag when it comes to QML performance. Compared to Sonnet 4.5, its performance in unaided QML coding decreased significantly, to 64% on the QML100 benchmark. Neither adaptive thinking nor high effort meaningfully improved coding performance in single-turn code generation without additional skills, web access, or linter access.
We do not know the cause of this regression, but we did notice that Sonnet 4.6 has been trained to provide longer, more comprehensive answers. It also seems to be “overthinking” problems in agentic, multi-turn tests.

Most failures in the QML100 benchmark are related to using fixed sizing on QML objects that are managed by layout positioners. This is not new to Sonnet (or Opus), but it now happens at a much greater scale. The second most common failures are related to declaring custom properties with names that are already reserved by the QML object. Both issues are easily fixed with additional skills or an embedded linter, but when unaided, the LLM fails more often than its predecessor. We decided to support Claude Sonnet 4.6 as a pre-configured LLM to give users a choice.
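To illustrate the two failure modes, here is a minimal QML sketch (the item names and values are ours, chosen for illustration). Inside a layout such as ColumnLayout, an item's geometry should be set via the Layout attached properties rather than fixed width/height, and a custom property must not reuse a name the base type already defines:

```qml
import QtQuick
import QtQuick.Layouts

ColumnLayout {
    // Failure mode 1: fixed sizing on a layout-managed item.
    // Setting width/height directly is overridden by the layout;
    // use the Layout attached properties instead.
    Rectangle {
        // width: 200; height: 40      // wrong: ignored inside a layout
        Layout.preferredWidth: 200     // right: let the layout manage size
        Layout.preferredHeight: 40
        Layout.fillWidth: true
        color: "steelblue"
    }

    // Failure mode 2: reusing a reserved property name.
    // "data" is already defined on Item, so redeclaring it is rejected
    // by the QML engine; pick a distinct name instead.
    // property var data: []           // wrong: name already taken by Item
    property var modelData_: []        // right: distinct custom name
}
```

A QML linter flags both patterns immediately, which is why the regression mostly shows up in unaided, single-turn generation.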
Gemini 3.1 Pro Preview
The latest Gemini model extends Google's lead in QML programming with a score of 88% on the QML100 benchmark. The model responds to tasks with concise and efficient outputs, saving time and tokens for the user. Compared to the Anthropic models, which consume significantly more output tokens, the latest Gemini model is not only a better QML expert but also more cost-efficient.
Meanwhile…