The latest release adds support for two additional LLMs for code completion and prompts: DeepSeek v3 for our friends in China and Claude 3.7 Sonnet for fans of Anthropic’s coding skills. The new release also includes an enhancement to the /fix functionality.
Applying Fixed Code
Changes suggested by the LLM in response to the /fix command can now be applied directly in the code editor.
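As a purely illustrative sketch (the snippet and the defect below are invented for this post, not taken from the release), asking the assistant to /fix a QML file like the following, where a property name is misspelled, would yield a suggested correction that you can now apply to the open file with a single action:

    // Hypothetical example: a small QML file containing a typo that /fix could address.
    import QtQuick

    Rectangle {
        width: 200
        height: 100

        property string title: "Hello"

        Text {
            anchors.centerIn: parent
            // Typo: "titel" instead of "title". The assistant's suggested correction
            // (text: parent.title) can now be applied directly in the editor.
            text: parent.titel
        }
    }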

DeepSeek v3 Support
Based on popular requests from developers in China, we added DeepSeek v3 support. DeepSeek v3 can be used for code completion and prompts.
DeepSeek v3 scores an impressive 87% success rate for QML code completions, taking over the leadership position in the QML100FIM Benchmark.

Table: QML100FIM Code Completion Performance - May 2025
On the QML100 Benchmark, it scores a 57% success rate for code generation through prompts. Note that, in our tests, the base model DeepSeek v3 scores higher than the related reasoning model DeepSeek R1, which reaches “only” 54% in prompt-based code generation.
Claude 3.7 Sonnet Support
While its QML coding performance stays similar to its predecessor’s, we added support for Claude 3.7 Sonnet for code completion and prompts.
Claude 3.7 Sonnet maintains the leadership position with a 66% success rate in prompt-based code generation.

QML100 Prompt-based Coding Performance - May 2025
Claude 3.7 Sonnet scores a 76% success rate for code completion, although, like its predecessor, it still sometimes struggles with correct indentation.
How to Upgrade
If you are already using the Qt AI Assistant, you can click the Update button in the Extensions view (remember that you first need to upgrade to Qt Creator 16.0.1).
If you are new to the Qt AI Assistant, you need to enable the use of external repositories in the Extensions view to fetch new extensions such as the Qt AI Assistant.
Meanwhile… the following smaller enhancements have been made:
- Linux on ARM support (experimental)
- LLM error codes from OpenAI GPT displayed in General Messages