Coin - Continuous Integration for Qt
Testing is important for every software product, and Qt is no different. A surprising amount of work goes into testing to ensure that each Qt release is the best it can be. Although I'm not directly involved with getting the release into your hands, I've lately learned a lot about the infrastructure (Continuous Integration, or Coin) that The Qt Company uses to build, test, and package Qt. The SHA-1s in this post are made up and resemble real SHA-1s only by coincidence (Coin-cidence?).
Qt is supported on a variety of platforms, and we need to make sure that it actually works on all of them. It's no longer realistic to expect that everyone contributing to Qt has all supported platforms available all the time. It would also take a tremendous amount of time to check that one patch does not break any code on any of the platforms. Therefore, we chose to build a centralized infrastructure commonly known as a continuous integration system that allows us to build and test changes on all platforms.
Let's assume we have a patch (or "change" in Gerrit terminology) that is approved, and we want this change to become part of the official Qt release. After approval, Gerrit offers to "stage" each change that is targeted to a particular branch.
This is where the continuous integration infrastructure becomes active. The system starts testing changes by moving them from the "staged" state to "integrating" in Gerrit. Changes in the integrating state are built on a variety of platforms, and the module's automated tests are run in succession. The system finally approves or rejects the tested change(s), which is again visible in Gerrit.
We looked at various tools that allow continuous integration to be run in a convenient and easy fashion and eventually concluded that none of the existing tools really fit our needs. I'd like to go into more details of what we noted down as requirements, but in this post, I'll focus on just one important aspect: modularization. With the advent of Qt 5, we modularized our code base (you can find many modules on code.qt.io), but until recently, we still tested as if Qt was just one monolithic blob.
We improved integration times by taking advantage of this modularization. The idea is to build the module under test and its dependencies only as needed. Let's assume we want to change Qt Declarative, the module containing the Qt QML and Qt Quick libraries. Coin keeps bare clones (see --bare in the Git documentation) of Qt's Git repositories around and updates them as they change. With these copies of the repositories it can quickly determine dependencies and provide the source code to the build machines. It checks the module for a "sync.profile" file listing the module's dependencies (some of the details, such as the "sync.profile" name and its syntax, will change in the future as we work to make the dependency descriptions nicer). In the case of qtdeclarative, we find that qtxmlpatterns and qtbase are required to build the module. Both of these modules are then checked for the latest state of the respective branch.
In the end, we have a list of modules and their SHA-1s as a tree structure. We find that we'll need qtbase at abcdef, qtxmlpatterns at def123 and qtdeclarative at badbab.
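The resolution step above can be sketched as follows. This is only an illustration: the dependency table and branch heads are hard-coded stand-ins for what Coin actually reads from each module's sync.profile and its bare Git clones, and the module/SHA-1 names mirror the made-up example in the text.

```python
# Invented dependency table; the real system derives this from each
# module's sync.profile file in its bare git clone.
DEPENDENCIES = {
    "qtdeclarative": ["qtxmlpatterns", "qtbase"],
    "qtxmlpatterns": ["qtbase"],
    "qtbase": [],
}

# Made-up SHA-1s, matching the example in the text.
BRANCH_HEADS = {
    "qtbase": "abcdef",
    "qtxmlpatterns": "def123",
    "qtdeclarative": "badbab",
}

def resolve(module, seen=None):
    """Return (module, sha) pairs needed to build `module`, dependencies first."""
    if seen is None:
        seen = {}
    for dep in DEPENDENCIES[module]:
        resolve(dep, seen)
    seen.setdefault(module, BRANCH_HEADS[module])
    return list(seen.items())

print(resolve("qtdeclarative"))
# [('qtbase', 'abcdef'), ('qtxmlpatterns', 'def123'), ('qtdeclarative', 'badbab')]
```

The dependencies-first ordering matters: it is exactly the order in which the builds have to happen.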
As Coin runs the integrations for qtbase, it's bound to have built and tested qtbase/abcdef before. We keep a cache of recent builds that tested successfully, so instead of re-building qtbase, we can simply get it from the cache, skipping the build entirely. For qtxmlpatterns, we check if a build of qtxmlpatterns/def123 against the exact same qtbase/abcdef is around; assuming qtbase changed recently, that's unlikely, even if qtxmlpatterns itself has been unchanged for a while. We want to guarantee that all modules are consistently built on top of the same base artifacts, so qtxmlpatterns/def123 gets rebuilt if it hasn't already been built for that SHA-1 of qtbase. The qtdeclarative SHA-1 comes from the staging branch and will be new, so it will always be built.
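The key property here is that a cached artifact is only reusable if it was built against the exact same dependency artifacts. Here is a minimal sketch of that consistency check; the key format is invented (the post doesn't describe Coin's actual cache layout), but it shows why an unchanged qtxmlpatterns still misses the cache when qtbase moves.

```python
import hashlib

def artifact_key(module, sha, dep_keys):
    """Hash the module's own revision *and* the exact artifacts it was
    built against, so cached builds stay mutually consistent.
    (Illustrative only; not Coin's real key format.)"""
    h = hashlib.sha1()
    h.update(f"{module}@{sha}".encode())
    for dep in sorted(dep_keys):
        h.update(dep.encode())
    return h.hexdigest()

cache = set()

# qtbase/abcdef was built and tested in an earlier integration.
qtbase_key = artifact_key("qtbase", "abcdef", [])
cache.add(qtbase_key)

# qtxmlpatterns is unchanged at def123, but its key depends on qtbase's
# key, so a new qtbase means a cache miss and a rebuild.
xmlpatterns_key = artifact_key("qtxmlpatterns", "def123", [qtbase_key])

print(qtbase_key in cache)        # True  -> reuse the cached artifact
print(xmlpatterns_key in cache)   # False -> rebuild on top of new qtbase
```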
Now the dependencies of the Git modules are clear. Coin looks up the list of platforms that the change is to be tested on; these are the reference platforms for the branch we target. It then creates a lot of jobs, called work items inside the Coin code. Each work item is a build or test of a particular module on one platform. Build items produce artifacts, the compiled libraries and headers of the module, which are then added to the cache. For our example, the first round of build items will be qtbase/abcdef on all 27 platforms currently supported for the 5.7 branch. Then there is a round of 27 qtxmlpatterns/def123 builds, each of them depending on a build of qtbase/abcdef. After that come 27 qtdeclarative/badbab builds based on the qtxmlpatterns/def123 builds. Once the building is complete, testing of qtdeclarative/badbab finally begins on each respective build. For the three rounds of builds plus one round of testing we get (3 + 1) * 27 = 108 jobs which need to pass for a single change to make it into Qt. At this point we have all the work items which need to be processed.
The next step is actually running the work items. It starts with launching the qtbase builds. In our example these are done from the start: Coin finds previous artifacts that can be reused, so the first 27 items finish in no time (stat'ing a few files on disk). We create these jobs anyway so that the system can start from an empty hard disk; in that case it creates the missing artifacts by compiling qtbase.
When a work item is finished, items depending on it start immediately. Coin creates virtual machines in our VSphere instance (or waits until it has the capacity to create and run them). Once we have a VM with the right OS running, we can launch the build on it. The build is just a list of instructions; if you look at our build logs (https://testresults.qt.io/coin), you'll see such a list: set some environment variables, go to the right directory, run configure/make/nmake/jom, and so on.
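The "items depending on it start immediately" behaviour is essentially a dependency-driven scheduler. Below is a toy version for a single platform; the item names are invented, and real Coin launches VMs and runs instructions where this sketch merely records the order.

```python
from collections import deque

# Work items for one hypothetical platform, each with its prerequisites.
deps = {
    "qtbase:linux": set(),
    "qtxmlpatterns:linux": {"qtbase:linux"},
    "qtdeclarative:linux": {"qtxmlpatterns:linux"},
    "test-qtdeclarative:linux": {"qtdeclarative:linux"},
}

done, order = set(), []
ready = deque(item for item, d in deps.items() if not d)
while ready:
    item = ready.popleft()
    # Here Coin would provision a VM and run the item's instructions.
    done.add(item)
    order.append(item)
    # Any item whose prerequisites are now all done becomes ready.
    for other, d in deps.items():
        if other not in done and other not in ready and d <= done:
            ready.append(other)

print(order)
# ['qtbase:linux', 'qtxmlpatterns:linux', 'qtdeclarative:linux', 'test-qtdeclarative:linux']
```

With 27 platforms, one such chain runs per platform, and chains on different platforms proceed independently, which is why some platforms move on to the next module while others are still building.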
Once all instructions have run, the result is compressed and uploaded. Some builds will finish earlier than others; for those, we move on to the next module on the same platform while still waiting for the rest to finish building.
The qtdeclarative builds run in the same fashion, each starting as soon as its dependencies are done. Once the builds are done, testing can start. It's quite similar to the build step: Coin downloads the freshly built module along with its dependencies. A test machine thus downloads the qtbase, qtxmlpatterns, and qtdeclarative artifacts, plus the qtdeclarative sources (also a compressed file, fresh from our Git repository cache). After getting the needed artifacts, the machine runs instructions along the lines of "make check". Assuming all of our 108 jobs finished successfully, Coin approves the tested changes in Gerrit and the qtdeclarative repository is updated.