
AI-Powered UI Testing With Squish

Ready to Experience How the AI Features for Squish Can Enhance Your GUI Testing Workflows?

In our AI Meets GUI Testing webinar, we showcase several ways to bring AI-powered approaches into your work with Squish and boost your productivity. We show how to enable the Squish AI Assistant inside the Squish IDE, how to work with the Squish MCP Server, and give a sneak peek at Squish Vision, a brand-new approach to GUI testing powered by computer vision, coming soon to Squish.

Here you will find all the information you need to get started with your AI setup, as we walk you through the AI-enhanced testing workflows shown in our webinar. If you are new to Squish, you can start your free trial today.

Built-In Squish AI Assistant

The Squish AI Assistant is built directly into the Squish IDE. It helps you write test scripts, understand error messages, and fix failures right where you're already working: no switching tools, no copy-pasting logs into a chat window. Watch the video below to see how the Squish AI Assistant works.

 

Getting Started With Squish AI Assistant

If you don't have Squish yet, explore the Squish Tour and start your own trial with the Squish Evaluation.

Both new and existing Squish users can download the free Squish AI Assistant Extension from the Qt User Portal.

If you need help, visit our documentation on Downloading and Installing the Squish AI Assistant Extension.

More info on setting up the Squish AI Assistant with the LLM of your choice can be found in the Squish AI Assistant documentation.

 

AI-Powered GUI Test Script Generation 

If your team is already using Cursor, GitHub Copilot, Windsurf, or similar tools, the Squish MCP Server is for you. In the video below we show how the Squish MCP Server lets those tools write, run, and iterate on Squish test scripts, so AI-assisted testing fits naturally into the workflow you already have.


 

Getting Started With Squish MCP Server

For an overview of how to get started with test script generation, see the Squish MCP Demo.

The setup shown in the demo consists of the following:

Note: The Squish MCP Server is still in technology preview, with further development ongoing. This example shows just one view of what can be achieved by connecting AI code assistants to the GUI testing capabilities of Squish via MCP.

Coming Soon: Squish Vision

A Brand-New Approach to GUI Testing Powered by Computer Vision

Many teams invest heavily in UI test automation — and then end up spending more time fixing tests than writing them. Squish Vision solves this problem by running UI tests visually rather than programmatically. This keeps tests stable even when the UI changes. That means less maintenance, faster releases, and a QA process that truly scales. 

Want to hear more about Squish Vision? Contact us to learn what we have on our future roadmap.

Ready for AI-Powered UI Testing With Squish?

Step 1: Register for your Squish Evaluation

Step 2: Download the trial and install it

Step 3: Read our "Getting Started Guide" or check the documentation

Step 4: Follow the guidance on this page to experience the AI features in action

Step 5: Test your own application 

Frequently Asked Questions

Does using AI for UI testing work in regulated industries such as automotive, medical, or aerospace?

Yes, and the design of Squish's AI features (AI Assistant, Squish MCP, Squish Vision) specifically accounts for the constraints of regulated environments. All three capabilities can run entirely on-premise with no data leaving your environment. The AI Assistant is human-in-the-loop: it suggests changes but never modifies test scripts without your review and approval, preserving the auditability required under standards such as ISO 26262, IEC 62304, and DO-178C. For teams subject to tool qualification requirements, Squish's Tool Qualification Kit is available separately and covers the core Squish framework.

Can I use the AI features without sending data to a cloud provider?

Yes. All three capabilities can run entirely on-premise. Configure the AI Assistant with Ollama or PrivateGPT for local inference. The MCP Server works with any locally hosted model. Squish Vision runs entirely on your machine and no screenshots or UI data ever leave your environment. This was a design requirement from the outset, not an afterthought, reflecting the needs of teams in automotive, medical, and aerospace development.

Is Squish Vision the same as image-based GUI testing?

No, and the distinction is significant. Traditional image-based testing works by matching pixel patterns against stored reference screenshots. It is brittle: change the theme, adjust font scaling, or switch platforms and the references break. Squish Vision uses purpose-trained computer vision models to detect UI elements by their visual structure and surrounding context, the way a human recognises a button regardless of whether it is in light mode or dark mode, scaled, or rendered in a different framework. Tests written with Squish Vision do not store pixel references and do not require updates when the visual design changes.

Does Squish Vision require a GPU, and what are the hardware requirements?

Squish Vision runs on CPU-only hardware, which covers most CI environments. For significantly faster inference, an NVIDIA GPU with CUDA support is recommended. Apple Silicon Macs are supported via Core ML acceleration. The minimum tested configuration is an Intel Core i5 with 16 GB of RAM.

What is the Squish MCP Server and how does it work?

The Squish MCP Server is an open-source implementation of the Model Context Protocol that exposes Squish's test execution capabilities as callable tools for AI coding agents. When connected to an agent in Cursor, VS Code with Copilot, Claude Code, or Windsurf, the agent can generate a Squish test script, run it via squishrunner, read back the full pass/fail results and error logs, and then iterate on failures automatically. The result is a workflow that goes from a product specification to a verified, passing test script with minimal human intervention. The server is currently in technology preview with ongoing development.
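Registering the server with one of these agents typically means adding an entry to the client's MCP configuration file. The fragment below is a minimal sketch following the `mcpServers` convention used by clients such as Cursor and Claude Code; the command path and arguments are placeholders, not the actual Squish MCP Server invocation, so check the server's documentation for the real launch command.

```json
{
  "mcpServers": {
    "squish": {
      "command": "/path/to/squish-mcp-server",
      "args": ["--suite", "/path/to/your/testsuite"]
    }
  }
}
```

Once registered, the agent discovers the server's tools automatically and can call them while it works on your test scripts.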

Which AI models does the Squish AI Assistant support?

The assistant is model-agnostic. It supports OpenAI (GPT-4 series, o4-mini, and GPT-5 series), Mistral AI (Small 3.2, Magistral Small, and Devstral Small), and any model served via an OpenAI-compatible local endpoint such as Ollama or PrivateGPT, enabling fully offline, air-gapped setups. The exact list of available models is shown in IDE Preferences.
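"Model-agnostic" here means that any server speaking the OpenAI chat-completions wire format can back the assistant. As a minimal sketch of what that implies, the snippet below assembles such a request against Ollama's default local endpoint; the base URL, model name, and prompt are assumptions for illustration, not settings taken from the Squish documentation.

```python
import json

# Assumption: Ollama's OpenAI-compatible API listens on localhost:11434
# by default; the model name is whichever model you have pulled locally.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(base_url, model, user_prompt):
    """Assemble an OpenAI-compatible /chat/completions request.

    Any endpoint that accepts this payload shape (Ollama, PrivateGPT,
    or a cloud provider) could serve the assistant.
    """
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You help write and debug Squish GUI test scripts."},
            {"role": "user", "content": user_prompt},
        ],
    }
    return url, json.dumps(payload)

url, body = build_chat_request(
    OLLAMA_BASE_URL, "mistral-small",
    "Explain this test failure log.")
print(url)  # http://localhost:11434/v1/chat/completions
```

Pointing the same request builder at a cloud provider's base URL instead of `localhost` is all it takes to switch models, which is what makes fully offline, air-gapped setups possible.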

Can AI generate Squish test scripts automatically from requirements?

Yes. With the Squish MCP Server connected to an AI coding agent, you provide product requirements sourced from JAMA, Jira, Confluence, or a plain text document, and the agent generates test scripts, executes them, reads the results, fixes failures, and repeats until the tests pass. A human reviews and approves the final scripts before they enter the test suite. The critical difference from generic AI code generation is that the agent receives the actual squishrunner output, not just a compile check, which is what enables grounded iteration rather than guesswork.
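The generate-run-fix cycle described above can be sketched as a small loop. This is an illustration of the pattern, not the MCP Server's actual implementation: `run_test` and `fix_script` are hypothetical stand-ins for invoking squishrunner and asking the model for a correction.

```python
def iterate_until_passing(generate, run_test, fix_script, max_rounds=5):
    """Generate a script, run it, and feed failures back until it passes.

    generate   -> returns an initial test script (the agent's first draft)
    run_test   -> returns (passed, runner_log) for a script
    fix_script -> returns a revised script given the script and its log
    """
    script = generate()
    for _ in range(max_rounds):
        passed, log = run_test(script)
        if passed:
            return script, True
        # Grounded iteration: the next attempt sees the actual runner log,
        # not just a compile check.
        script = fix_script(script, log)
    return script, False

# Toy demonstration with stubs standing in for the agent and the runner.
attempts = []

def fake_run(script):
    attempts.append(script)
    return ("fixed" in script, "ERROR: object ':loginButton' not found")

script, ok = iterate_until_passing(
    generate=lambda: "clickButton(':loginButton')",
    run_test=fake_run,
    fix_script=lambda s, log: s + "  # fixed after: " + log,
)
print(ok)
```

In the real workflow the agent plays the `generate` and `fix_script` roles, the MCP Server's tools play `run_test`, and the human review happens once the loop reports a passing script.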

What happens to the AI Assistant suggestions? Are they applied automatically?

No. The assistant is human-in-the-loop by design. Suggestions appear in the AI pane inside the Squish IDE, and you review them before clicking Insert Snippet to apply. Nothing is modified in your test code without an explicit action on your part. This matters particularly in regulated industries where auditability is required.

How does the Squish AI Assistant help with test failure analysis?

When a test fails, the Squish AI Assistant reads the full test result log and runner output directly inside the IDE, with no copy-pasting into an external chat window. It explains the root cause in plain language, identifying common causes such as stale object references, version mismatches between the application and object map, broken verifications, and timing issues. It can also suggest corrected script code, which you review and apply with a single click. All context is gathered and sent to the model automatically.

Do the AI features work with my existing test suites and object maps, or do I need to start over?

Your existing test suites, object maps, scripts, and test history are precisely what makes the AI useful. They provide the context that general-purpose AI tools lack. You do not need to migrate or rewrite anything. In fact, the more history you have in Squish, the better the AI output becomes.

What UI frameworks and platforms does the Squish MCP Server support for test generation?

The MCP Server works across all Squish-supported frameworks: Qt/QML, Java, .NET/WinForms, Web, Android, iOS, and embedded targets. Object map snapshot generation is currently optimised for Qt/QML, with broader framework support actively in development.

Can I try the AI features before buying a full Squish license?

Yes. The Squish 30-day evaluation trial includes access to the AI Assistant. The Squish MCP Server is open source and available immediately with no signup required. To explore Squish Vision, contact Qt to discuss the pre-release beta programme.

Strengthen Every Step of Your Software Quality Process

From code analysis to test execution and reporting, these tools work together to help QA teams improve coverage, detect issues early, and maintain long-term software quality.