Building Yocto Images using a Docker Container

Building a Yocto image requires a specific set of operating system dependencies, which can make it challenging to build on a non-dedicated machine. Over time it can also be difficult to maintain a known-good environment that works with a number of different versions of the Yocto Project. One solution is to use a Docker container to provide a known-good build environment, containing the necessary packages and operating system version.

Introduction

The creation of a Docker container can be achieved in a number of ways. This specific guide shows how to install the bare minimum infrastructure to build Yocto images for AMD hardware while storing the generated build artifacts outside of the container itself. The Docker container will mount local folders on the host machine, and no persistent data will be stored within the Docker container filesystem. This has the advantage of not requiring containers to be part of a backup strategy, and allows for a very lightweight build environment.

Installation of Docker Engine on a Linux machine

The process to install Docker on a host Linux machine can vary based on the operating system that is being run natively. Please follow the instructions at https://docs.docker.com/engine/install/ to install the Docker engine. Be sure to also follow the post-installation steps at https://docs.docker.com/engine/install/linux-postinstall/ in order to allow users to run Docker without root/sudo access. The remaining sections assume that this has all been completed successfully and that, as a regular user, you are able to run the “hello world” Docker test image:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:4bd78111b6914a99dbc560e6a20eab57ff6655aea4a80c50b0c5491968cbc2e6
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Building a Docker Image

On your host machine, create a new directory and save the following contents as a file named “Dockerfile”. This Dockerfile configures the build of a container suitable for 2023.1 and 2023.2 Yocto builds (Langdale):

FROM ubuntu:20.04
MAINTAINER Kris Chaplin <kris.chaplin@amd.com>
# Modified from Cliff Brake's Ubuntu container <cbrake@bec-systems.com>

ARG DEBIAN_FRONTEND=noninteractive

# Install the host packages required for Yocto/OpenEmbedded builds
RUN \
    dpkg --add-architecture i386 && \
    apt-get update && \
    apt-get install -yq sudo build-essential git nano vim \
        python3-yaml libncursesw5 libncursesw5:i386 \
        python python3 man bash diffstat gawk chrpath wget cpio \
        texinfo lzop apt-utils bc screen libncurses5-dev locales \
        libc6-dev-i386 doxygen libssl-dev dos2unix xvfb x11-utils \
        g++-multilib libssl-dev:i386 zlib1g-dev:i386 \
        libtool libtool-bin procps python3-distutils pigz socat \
        zstd iproute2 lz4 iputils-ping \
        curl libtinfo5 net-tools xterm rsync u-boot-tools unzip zip && \
    rm -rf /var/lib/apt/lists/* && \
    echo "dash dash/sh boolean false" | debconf-set-selections && \
    dpkg-reconfigure dash

# Install the repo tool and switch its interpreter to python3
RUN curl https://storage.googleapis.com/git-repo-downloads/repo > /bin/repo && chmod a+x /bin/repo
RUN sed -i "1s/python/python3/" /bin/repo

# Create the non-root "build" user used for the build process
RUN groupadd build -g 1000
RUN useradd -ms /bin/bash -p build build -u 1028 -g 1000 && \
    usermod -aG sudo build && \
    echo "build:build" | chpasswd

# Generate and select the en_US.UTF-8 locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
    locale-gen
ENV LANG en_US.utf8

USER build
WORKDIR /home/build
RUN git config --global user.email "build@example.com" && git config --global user.name "Build"

This Dockerfile creates a base OS image and installs the necessary packages and additional tools required to build AMD Yocto images. A user “build” is also created within the container and is used for the build process.

Build your Docker image using the following command, making sure you are in the directory containing the Dockerfile. You can choose a name other than “yoctocontainer” if you prefer; if you do, be sure to use the same name when running the docker run command later.

docker build -t yoctocontainer .

Building will take around a minute, depending on the bandwidth available to download the dependencies. On subsequent builds the downloads are cached, so rebuilding is substantially quicker. The build command only needs to be run once, unless you modify the Dockerfile to include new packages or tools.

Once the image is created, you can change to any directory on your host. There is now no dependency on the Dockerfile being in the current directory (or even existing), as the configuration of “yoctocontainer” is centrally managed on your machine.

The docker run command can now be executed. The -v options below share folders between the host and the Docker container. This is important to note, as the container will be run in a mode where no data stored inside the container filesystem will survive a relaunch!

As a result, the example below uses the current working directory (pwd) and mounts it as the folder /home/build/work inside the container. As long as you change into a directory under work and run all of your Yocto commands there, the results will remain on your host system after docker has finished. Feel free to add additional paths to the example shown below, and remove unused paths:
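The following is a minimal sketch of such an invocation. It assumes the image was tagged “yoctocontainer” as above; the --rm option discards the container on exit (consistent with no data persisting inside the container), and the second -v mapping is purely illustrative and can be removed or replaced with your own paths:

docker run --rm -it \
    -v "$(pwd)":/home/build/work \
    -v /tools:/home/build/tools \
    yoctocontainer /bin/bash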

Once the docker run command has executed successfully, you will have a shell prompt as the “build” user and can run a test invocation of bitbake to build an image targeting the ZCU102 development kit. Make sure you do this in a mapped directory (such as work in the example below) so that your generated files persist outside of the docker container.
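The following is a sketch of one possible test build run inside the container. The manifest URL, release branch (rel-v2023.2), setup script name and machine name (zcu102-zynqmp) are assumptions that should be checked against the AMD Yocto documentation for your chosen release:

cd /home/build/work
mkdir yocto && cd yocto
repo init -u https://github.com/Xilinx/yocto-manifests.git -b rel-v2023.2
repo sync
source setupsdk
MACHINE=zcu102-zynqmp bitbake core-image-minimal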

This Docker image is purposefully minimal - it is intended to be a build environment only! Although basic editors such as nano and vim are installed, files should generally be modified on the host, with the container used only to run the build. Always work in a mounted filesystem (such as /home/build/work in the example above) to ensure that your data persists across sessions!

 

Once completed, you can quit back to your host shell using the exit command. The next invocation of the container will be “fresh” as no data persists across runs, outside of that mounted from the host Operating System.

Further Considerations

Matching Username UID in Docker

The example Dockerfile above has the following line to add the “build” user:
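RUN useradd -ms /bin/bash -p build build -u 1028 -g 1000 && \
    usermod -aG sudo build && \
    echo "build:build" | chpasswd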

This creates a user “build” with a UID of 1028 and a GID of 1000. The UID here matches the author's UID on their Linux machine, which can be found in the /etc/passwd file (or by running the id command):
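For example, the entry for your own user will look something like the following (the username and values shown here are illustrative):

$ grep "^$USER:" /etc/passwd
youruser:x:1028:1000:Your Name:/home/youruser:/bin/bash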

The benefit of matching the “build” user to your own UID and GID values is that, once the build inside the container is done and you return to a host shell, all generated files are accessible and owned by your user rather than by an unknown UID. This makes it much easier to modify configuration files and output binaries after the docker commands have been run. If you would like to do the same, modify the Dockerfile accordingly and rebuild the image. On the next invocation of the docker container, the UID and GID will match yours, and the files generated will be owned by your username on the host machine.

Adding Host Directories for Caching to Improve Performance

The Yocto build infrastructure has smart caching of downloads and build state. This allows for a build to be vastly accelerated in cases where the artifacts have previously been built in the exact configuration required. In order to take advantage of this, the build infrastructure allows the user to point to centralized SSTATE and Download directories or servers. In the case of local directories, this cache will be written to as well as read from, so it will increase efficiency over time.

Create a local directory on your machine or network drive, and update your docker run command to mount it as an additional volume:
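One possible layout is sketched below, assuming a cache directory at /opt/yocto-cache on the host and a mount point of /home/build/cache inside the container (both paths are illustrative):

mkdir -p /opt/yocto-cache/sstate /opt/yocto-cache/downloads

docker run --rm -it \
    -v "$(pwd)":/home/build/work \
    -v /opt/yocto-cache:/home/build/cache \
    yoctocontainer /bin/bash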

Update your Dockerfile to include the following lines at the bottom of the file:
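The exact values depend on where the cache volume is mounted inside the container. Assuming the /home/build/cache mapping above, something along these lines exports the cache locations so that bitbake can pick them up from the environment (the BB_ENV_PASSTHROUGH_ADDITIONS entry is only needed if your setup scripts do not already pass these variables through):

ENV SSTATE_DIR=/home/build/cache/sstate
ENV DL_DIR=/home/build/cache/downloads
ENV BB_ENV_PASSTHROUGH_ADDITIONS="SSTATE_DIR DL_DIR"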

And finally rebuild your Docker image:
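docker build -t yoctocontainer .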

The next invocation of bitbake in the container will use the specified directories for the SSTATE_DIR and DL_DIR to cache the data. This cache state will survive across builds as it is attached to a mounted volume that exists on the host machine/network.

 
