Introducing Project QtiPi

After setting up a number of small Yocto side projects using Qt on a Raspberry Pi in the past, I realized that I was recreating the same base “plumbing” over and over again. So I figured it was time to create a separate layer where I could put this work so it could be easily reused, and thus Project QtiPi was born. Currently it consists of a Yocto meta-layer, a manifest for Google's repo tool that can be used to fetch all the needed layers, and a Dockerfile for creating a Docker container suitable for building the project.

Qt Customization

Since basically all my projects are either headless or single-app only, the first thing I wanted to make sure of was that Qt was built with eglfs support, and that eglfs is the default platform plugin, so that running apps with -platform eglfs is not needed. Nowadays the meta-raspberrypi layer enables the eglfs backend automatically if DISTRO_FEATURES contains opengl, so all that is needed from meta-qtipi is to make sure eglfs is the default platform.
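One simple way to make eglfs the default at runtime, without touching the qtbase build, is to export QT_QPA_PLATFORM system-wide, since Qt's platform abstraction honors that variable. This is only a sketch of the idea; the file path is an assumption and meta-qtipi may set the default differently (e.g. at qtbase configure time):

```shell
# /etc/profile.d/qt-eglfs.sh (hypothetical path)
# Any Qt app started from a login shell now picks eglfs
# without needing "-platform eglfs" on the command line.
export QT_QPA_PLATFORM=eglfs
```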

Network Configuration

Another thing I found myself doing was creating some custom network configuration recipes in order to set up how the device was supposed to interact with the world around it. Usually it was one of the following:

  • Connect as a client to a predefined wifi network, using DHCP
  • Connect as a client to a wired network, using DHCP
  • Serve as a stand-alone Wifi access point, using static IP

And since the two client options use DHCP to get an IP address, I wanted to configure avahi so that I could access the device as qtipi.local. By doing this I can just start up a new device and connect to it using e.g. ssh root@qtipi.local, instead of having to look up the IP address either on the device or on the DHCP server.

So what QtiPi now provides is a recipe called network-config that builds three different packages: network-config-eth-client, network-config-wifi-client and network-config-wifi-ap. You specify the wifi credentials by overriding the QTIPI_SSID and QTIPI_PASSKEY variables for the recipe. This can be done using a bbappend in a separate project layer, or from local.conf using something like:

QTIPI_SSID_pn-network-config = "MySSID"
QTIPI_PASSKEY_pn-network-config = "supersecret"
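The bbappend variant mentioned above could look like the following sketch; the layer name and directory layout are of course up to your project:

```
# meta-myproject/recipes-connectivity/network-config/network-config.bbappend
QTIPI_SSID = "MySSID"
QTIPI_PASSKEY = "supersecret"
```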

QML Live for Rapid UI Development

Something I’ve found quite useful when working on a Qt/QML project is the ability to quickly see how the application looks on the actual device, and luckily some of my previous colleagues have created a tool for this called QML Live, which is now part of the Qt Project. One thing lacking in the upstream recipe, though, is a systemd service so that the runtime (the part running on the device) is automatically started on boot. So this is added using a simple bbappend.
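The bbappend is roughly the following sketch; the recipe and unit file names are assumptions and need to match what the upstream qmllive recipe actually uses:

```
# qmllive_%.bbappend (file and recipe names are assumptions)
inherit systemd

SRC_URI += "file://qmlliveruntime.service"

SYSTEMD_SERVICE_${PN} = "qmlliveruntime.service"

do_install_append() {
    install -Dm0644 ${WORKDIR}/qmlliveruntime.service \
        ${D}${systemd_system_unitdir}/qmlliveruntime.service
}
```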

Convenience Quick-Start packagegroup

Another thing that is useful is the qtipi-bundle packagegroup, which pulls in a wide set of Qt packages so that the image has a lot of functionality from the start. This is especially useful if you develop the application using QtCreator set up to cross-compile for the target image; then you don’t need to worry about having the right Qt modules on the device. When the application is mature enough to be properly packaged into its own recipe, the packagegroup can be removed in favor of properly defined RDEPENDS in the application recipe, to reduce the image size.

Simple Example Application

When I started this project I was mainly using the official 7″ touch screen, so I packaged a simple QML application called touchpoints that just draws colored rectangles under the first four detected fingers. It serves as a quick function tester for the device, and as a reference for how to create a recipe with a systemd service file for launching a simple QML application.

My Wishlist

One thing I have on my wishlist, which I think could often be useful, would be a ready-made solution for connecting to an “unconfigured” device from your phone, with the possibility to scan for existing wifi networks and provide the needed credentials. A lot of IoT products today come with such solutions, but I don’t think there is any ready-to-use open-source component that provides this. All the different parts are available, but some extra glue is needed to make it an easy-to-use solution.

Final Thoughts

When writing this post I also took a good look at what was in the meta-qtipi layer to make sure it was up-to-date and working with the newly released dunfell version of the Yocto Project. I was glad to see that a lot of the things I had to customize manually a few years ago were now handled in upstream layers, using smart PACKAGECONFIG customization based on DISTRO_FEATURES etc. This allowed me to clean up some deprecated stuff, leaving the meta-qtipi layer a bit slimmer.

As so often before, I end up reflecting on how amazing it is that so many awesome open source projects exist, enabling both organizations and individuals to create cool things by reusing the previous work of others.

Quick-Tip: Use host ssh-agent in Docker


As I described in another post, I usually do all my Yocto builds inside a Docker container. This worked well, but when I was in a project where some of the recipes needed to clone git repositories using ssh keys, I realized I needed a nice way to share my host system's keys automatically. Luckily, this is just what ssh-agent does, so all that had to be done was to make sure that the system inside the Docker container could access the host's ssh-agent socket.

The Solution

The environment variable SSH_AUTH_SOCK is used to determine the path to the socket used for communicating with ssh-agent. This means that on the host system we can use this to find the path to the socket, so that it can be bind-mounted into the container. Then we tell docker to set SSH_AUTH_SOCK inside the container to the path where it was mounted.

In the previous post I set up the alias used to start the containers as:

alias pokydocker='docker run --rm -it -v ${PWD}:${PWD} pokyextended --workdir ${PWD}'

If we now pass an extra -v option for the bind-mount, and a -e for setting up SSH_AUTH_SOCK, all should be good.

alias pokydocker='docker run --rm -it -v ${PWD}:${PWD} -v ${SSH_AUTH_SOCK}:/ssh.socket -e SSH_AUTH_SOCK=/ssh.socket pokyextended --workdir ${PWD}'

The Aftermath

And with this, I lived happily ever after, right? Not really. All was working well until one day when I was working remotely and wanted to start a Yocto build over ssh. When I connect to my host system over ssh there’s nothing that starts an ssh-agent automatically, so SSH_AUTH_SOCK was empty and fetching the sources failed inside the container.

So I figured I should make sure ssh-agent is started on ssh logins, and then all would be good again. Said and done, and it worked fine until the time I started my build in a screen session and realized that the ssh-agent was killed when I logged out. It actually took me some time to realize what was going on, and one or more bad words might have been uttered. In the end I realized that the easiest way was to just use the socket from my local session (I basically always have a local session running), which is available under /run/user/1000/keyring/ssh, by running export SSH_AUTH_SOCK=/run/user/1000/keyring/ssh after logging in via ssh, but before starting the Docker container.
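That manual export can be turned into a small shell-profile snippet so it happens automatically. This is a sketch, assuming uid 1000 and that the keyring socket lives at the path above, which varies between distributions and desktop environments:

```shell
# ~/.bashrc (sketch): fall back to the local session's keyring socket
# when no agent socket was inherited from the login environment.
if [ -z "$SSH_AUTH_SOCK" ] && [ -S /run/user/1000/keyring/ssh ]; then
    export SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
fi
```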

Using QtCreator for non-Qt CMake projects with Yocto generated SDKs


Many of the embedded projects I work on use the Yocto Project to create an image for some embedded target. Usually there’s a base system that doesn’t change that often, and then there’s one or more project specific components that runs on this base system.

In most cases those components can be decoupled from the actual target and target system enough that development can be done natively on the developer's system, and then just integrated into the Yocto build once the changes are done. But sometimes it is just more convenient to build and test things directly on the target, or maybe a bug that only shows on the target needs to be debugged, and then it can be quite nice to use the same IDE I use for most development: QtCreator. I think QtCreator is quite nice even for projects not using Qt.

The Problem

When using the Yocto Project to build Linux images for a target, you basically get the SDK generation for free. It’s just a matter of running the populate_sdk task for the image, and you get a nicely packaged SDK-installer which contains everything you need to build for that system: toolchain, sysroot and a selection of native tools that might be needed in the process. It also includes a script that sets the shell up for cross-compilation by setting environment variables like CC, CFLAGS, CONFIGURE_FLAGS etc.
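Generating the SDK installer is a single task invocation; the image name here is just an example:

```shell
# Build the SDK installer for the given image; the self-extracting
# installer ends up under tmp/deploy/sdk/ in the build directory.
bitbake core-image-minimal -c populate_sdk
```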

This script, however, doesn’t work out-of-the-box with QtCreator. The issue is that the script sets up CC/CXX/CPP so that the machine-specific flags are part of those variables instead of CFLAGS/CXXFLAGS/CPPFLAGS, and when you set up the SDK in QtCreator those flags are not picked up properly, so the compiler doesn’t generate compatible binaries.

The Solution

You can solve this by manually editing the script, but in the spirit of “fix things once” I prefer to address this directly in the SDK generation. The script that sets up the environment has some support for this kind of modification: it looks in two different environment-setup.d/ directories and pulls in all .sh files found there. So all that needs to be done is to provide a script that takes all -m<something> options set in CC/CXX/CPP and adds those to CFLAGS/CXXFLAGS/CPPFLAGS.

So to automate this I add a recipe to the project's meta-layer which installs this script, and then a small change is needed in the image recipe to pull this package into the SDK. And yes, the sed part can probably be done in a much nicer way, but it works.


CC_MFLAGS=`echo $CC | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`

CXX_MFLAGS=`echo $CXX | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`

CPP_MFLAGS=`echo $CPP | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`

export CFLAGS="$CFLAGS $CC_MFLAGS"
export CXXFLAGS="$CXXFLAGS $CXX_MFLAGS"
export CPPFLAGS="$CPPFLAGS $CPP_MFLAGS"
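To see what the sed pipeline actually extracts, here is a quick example with a made-up cross-compiler command line (the compiler name and flags are just illustrative):

```shell
# A typical Yocto CC value: compiler name, machine flags, sysroot option.
CC='arm-poky-linux-gnueabi-gcc -march=armv7ve -mthumb -mfpu=neon-vfpv4 --sysroot=/opt/sdk/sysroots/target'

# Prefix a space, drop the word not starting with '-' (the compiler itself),
# then drop every option not starting with '-m' (e.g. --sysroot).
CC_MFLAGS=`echo $CC | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`

echo $CC_MFLAGS   # -march=armv7ve -mthumb -mfpu=neon-vfpv4
```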


DESCRIPTION = "Fixes for SDK use in QtCreator"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

# The script file name is illustrative
SRC_URI = "file://qtcreator-fix.sh"

inherit nativesdk

do_configure[noexec] = "1"
do_compile[noexec] = "1"

do_install() {
    install -Dm0644 ${WORKDIR}/qtcreator-fix.sh ${D}${SDKPATHNATIVE}/environment-setup.d/qtcreator-fix.sh
}

FILES_${PN} = "${SDKPATHNATIVE}/environment-setup.d"

Then add the following line to the image recipe used for SDK generation, to make sure that the script is included in the generated SDKs.

TOOLCHAIN_HOST_TASK_append = " nativesdk-qtcreator-fix"

The End

I hope this can be of help for anyone wanting to use QtCreator with Yocto-generated SDKs. I might do more tutorial-like follow-up posts on this, where I walk you through the process of building an SDK for Raspberry Pi that works with both qmake- and CMake-based projects. It’s even possible to ship a script that helps you set up QtCreator, so you don’t have to click around as much in the UI.

Simplify your Yocto builds using docker


Yocto is a great set of tools which makes it easy to build and maintain custom Linux images for a big variety of hardware. But even if it is continually improved with regards to reproducibility etc., there are still always some dependencies on the host system used when building. As a user of Debian unstable I sometimes find that I have too-new versions of packages, causing issues when building older Yocto releases and sometimes even the latest release. I could of course solve this by sticking to a stable Debian release, or maybe the latest Ubuntu LTS version, but then I would have to wait longer for new improvements in e.g. GNOME or other projects I use.

Using docker containers

Luckily there’s a great way to address this, so that I can continue to use a very modern distro and still be able to work with older Yocto releases. A sub-project within the Yocto Project called CROPS develops tools for making Yocto builds more cross-platform. One of the things they provide is a set of Docker images which contain everything needed for a Yocto build.

You can launch one of those base containers for Yocto builds using e.g.

docker run --rm -it crops/yocto:ubuntu-16.04-base

To make it a bit more useful I usually make sure that the current directory and all its contents are available in the container; that way I can easily access all the sources and build artifacts from my regular host system as well.

docker run --rm -it -v ${PWD}:${PWD} crops/yocto:ubuntu-16.04-base --workdir ${PWD}

User id mismatch

The solution above works well on most single-user systems, since the first user of the host and the first user of the docker container will likely share the same uid and thus be seen as the same user from a file system perspective. This will however be a problem if multiple users share a build server, but luckily the CROPS people have a solution for this as well. In addition to the crops/yocto docker images they have also created crops/poky, which is based on crops/yocto but adds some scripts that let you have the same user id inside the container as on the host. The way it works is that it sets the user id in the container to be the same as the owner of the directory passed in as --workdir.

To use this container instead, use:

docker run --rm -it -v ${PWD}:${PWD} crops/poky:ubuntu-16.04 --workdir ${PWD}

Extending with additional tools

The images provided by CROPS are quite minimalistic, and I usually end up wanting some extra tools in there like vim for editing files and rpm for inspecting generated packages. Luckily this is quite easy to do using docker.

Here’s an example of a Dockerfile extending the crops/poky image with vim and rpm:

FROM crops/poky:ubuntu-16.04

USER root
# Clean the apt lists in the same RUN, otherwise they remain in an earlier layer
RUN apt-get update && apt-get install -y vim rpm \
    && rm -rf /var/lib/apt/lists/*

You can then build this image using:

docker build -t pokyextended /path/to/dir/of/dockerfile/

When the image is built you can use it by just replacing crops/poky:ubuntu-16.04 in the command from before:

docker run --rm -it -v ${PWD}:${PWD} pokyextended --workdir ${PWD}

In order to make it a bit more convenient I use an alias in my ~/.bash_aliases to do this:

alias pokydocker='docker run --rm -it -v ${PWD}:${PWD} pokyextended --workdir ${PWD}'

With this my workflow for starting a build is quite simple:

  • Create a new directory: mkdir my_new_build && cd my_new_build
  • Get all sources; I usually use Google's “repo” tool to fetch multiple repositories.
  • Launch container: pokydocker
  • Source oe-init-build-env and start building
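Put together, a fresh build session looks roughly like this; the manifest URL, the poky checkout location and the image name are placeholders:

```shell
mkdir my_new_build && cd my_new_build

# Fetch all layer repositories (manifest URL is a placeholder)
repo init -u https://example.com/manifests/my-project.git
repo sync

# Start the container; the current directory is mounted at the same path
pokydocker

# Inside the container: set up the build environment and start building
source poky/oe-init-build-env
bitbake core-image-minimal
```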

So the only overhead of using docker is running “pokydocker” from the directory to start the docker container, and I’m guaranteed to have the same environment regardless of my build host.