Docker Containers and Kubernetes Orchestration on Zynq UltraScale+

Introduction to Docker Containers and Kubernetes Orchestration

Docker defines a container as a "standard unit of software".  A container image packages an application and all of its dependencies, enabling it to run quickly and reliably across platforms.  A container is isolated from other processes and containers running on the same platform unless interfaces are explicitly defined.  Docker containers provide a standard, lightweight and secure virtualization solution when a full hypervisor is overkill.  Unlike hypervisors, which virtualize the hardware, containers are lightweight because only the operating system is virtualized.  Containers rely on kernel features such as namespaces, cgroups and union filesystems.

Docker provides:

  • Docker Engine for container runtime
  • Docker tools for creating, deploying and managing containers
  • Docker Hub repository for hosting container images

This wiki assumes you have a working knowledge of Yocto.  It walks you through building and deploying Docker on Zynq UltraScale+ in the following four configurations:

  • systemd on a flash filesystem
  • systemd on a ramdisk
  • sysvinit on a flash filesystem
  • sysvinit on a ramdisk

Kubernetes is an open source container orchestration engine for deploying and managing containerized applications.  Once you have Docker up and running, this note also shows how to enable Kubernetes and set up a single-node cluster.

Xilinx Technical Support

This flow is not supported through Xilinx Technical Support Service Requests (SRs).  Please do your due diligence when implementing Docker/Kubernetes and regression test against your system requirements.

Requirements

  1. ZCU102 evaluation board
  2. Linux host machine or VM
    1. Install the Yocto dependencies or use the CROPS/Poky-container
    2. Clone the Xilinx Yocto manifest and check out the release version (see the sketch after this list)
  3. Internet connection
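
If you are following the repo-based Xilinx Yocto flow, the checkout typically looks like the sketch below (the manifest URL and setup script name are assumptions based on the standard Xilinx Yocto setup; confirm them against the release documentation):

Xilinx Yocto checkout (sketch)
$ repo init -u https://github.com/Xilinx/yocto-manifests.git -b rel-v2020.2
$ repo sync
$ source setupsdk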

Testing

Ubuntu 18.04 (Host)

rel-v2020.2 (Yocto 3.0 Zeus)

ZCU102

Compatibility

This wiki has been updated to support Yocto 3.x.  It will not build on earlier Yocto 2.x based versions.

Meta-virtualization Layer

The Docker recipe is included in the meta-virtualization layer.  The Xilinx Yocto manifest instructs repo to clone meta-virtualization automatically, so there is no need to clone it manually unless you are using an unsupported Yocto flow.  You can verify that the virtualization layer is present in the sources directory and that it is included in bblayers.conf (see the check below).  From this layer we will be building and installing the docker and docker-ce-contrib packages; add them to your build, together with extra rootfs space for container images, as shown in the Docker packages listing.
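
A quick way to confirm the layer is registered with your build (assuming a sourced build environment):

Verify layer
$ bitbake-layers show-layers | grep virtualization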

Docker packages
IMAGE_INSTALL_append = " docker docker-ce-contrib"
# Add extra space (in KB) for Docker images (10 GiB)
IMAGE_ROOTFS_EXTRA_SPACE = "10485760"

Meta-xilinx-docker Layer

Throughout this wiki we will be populating a custom layer, meta-xilinx-docker, that reflects the listing below.  Creating a Custom Yocto Layer shows how to create a base layer using the Yocto scripts.

meta-xilinx-docker
meta-xilinx-docker
├── conf
│   ├── distro
│   │   └── petalinux-systemd.conf
│   └── layer.conf
├── COPYING.MIT
├── README
├── recipes-containers
│   └── docker
│       └── docker-ce_git.bbappend
└── recipes-kernel
    └── linux-xlnx
        ├── cfg
        │   └── docker.cfg
        └── linux-xlnx_%.bbappend
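
If you prefer, bitbake-layers can scaffold and register the layer for you; a minimal sketch (the layer path is illustrative and assumes a sourced build environment):

Create the layer
$ bitbake-layers create-layer ../sources/meta-xilinx-docker
$ bitbake-layers add-layer ../sources/meta-xilinx-docker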

Linux Configuration

The default ZCU102 kernel configuration does not have all the required CONFIG options for Docker, so we will need to turn them on through a configuration fragment.  If you don't already have a kernel recipe directory in your layer, create the directory structure as shown below in your custom layer.

Next edit the linux-xlnx_%.bbappend file as shown below and add the docker.cfg.  Note the wildcard "%" is used to match any kernel version.

linux-xlnx_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/cfg:"

SRC_URI_append = " file://docker.cfg"

Add a docker.cfg file in the cfg directory with the CONFIG requirements shown in the listing below.  The check-config.sh script from the Moby Project was used to determine these (more on this later).

docker.cfg
CONFIG_NAMESPACES=y
CONFIG_NET_NS=y
CONFIG_PID_NS=y
CONFIG_IPC_NS=y
CONFIG_UTS_NS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_SCHED=y
CONFIG_CPUSETS=y
CONFIG_MEMCG=y
CONFIG_VETH=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y
CONFIG_NETFILTER_XT_MATCH_IPVS=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_IP_NF_NAT=y
CONFIG_CGROUP_BPF=y

CONFIG_USER_NS=y
CONFIG_SECCOMP=y
CONFIG_CGROUP_PIDS=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_BLK_CGROUP=y
CONFIG_BLK_DEV_THROTTLING=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_NET_CLS_CGROUP=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_VS=y
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_RR=y
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_VXLAN=y
CONFIG_VLAN_8021Q=m
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_INET_ESP=y
CONFIG_IPVLAN=y
CONFIG_MACVLAN=y
CONFIG_DUMMY=y
CONFIG_NF_NAT_FTP=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_NAT_TFTP=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_AUFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_THIN_PROVISIONING=y
CONFIG_OVERLAY_FS=y

CONFIG_MD=y
CONFIG_NET_SCHED=y
CONFIG_NET_L3_MASTER_DEV=y
CONFIG_BRIDGE=m

# CONFIG_EXT2_FS is not set
CONFIG_EXT4_USE_FOR_EXT2=y

CONFIG_NF_NAT_IPV4=y
CONFIG_NF_NAT_NEEDED=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_INET_XFRM_MODE_TRANSPORT=y

If you want to boot with an initramfs, add the variables in the listing below to your local.conf and follow the ramdisk info note below when running Docker.

Initramfs
# Remove wic because it causes circular dependencies with bundled initramfs
IMAGE_FSTYPES_remove = "wic.qemu-sd"
INITRAMFS_IMAGE = "petalinux-image-minimal"
INITRAMFS_IMAGE_BUNDLE = "1"

Distro Configuration with Systemd

Optional

If you are using the default sysvinit, you may skip this section.

Systemd is a replacement for the sysvinit init system and is used by many modern Linux distributions.  While Docker works with sysvinit, it works best with systemd.  When systemd is enabled, it will start the Docker daemon during boot.  To enable systemd in your image, you need to patch an existing distro configuration or create a new one.  Here we create a new one based on the PetaLinux distro.  Create the directory structure shown below in your layer and add the petalinux-systemd.conf file.

petalinux-systemd.conf
require conf/distro/petalinux.conf

DISTRO = "petalinux-systemd"
DISTRO_NAME = "PetaLinux with systemd"

INIT_MANAGER = "systemd"

IMAGE_INSTALL_append = " connman connman-client"

Kernel Config for Systemd

Systemd has been tested with Docker on the ZCU102 using the default kernel configs plus the Docker support configs above.  If you are planning on using systemd in production, please make sure that the kernel configs meet your system requirements as recommended by freedesktop.org.  Systemd configuration is beyond the scope of this wiki.

Build Docker

The bitbake command below assumes you are building for the zcu102-zynqmp machine with the Docker configuration above in place (the kernel fragment, the Docker packages and extra rootfs space, and the petalinux-systemd distro).

Build with systemd
$ MACHINE=zcu102-zynqmp DISTRO=petalinux-systemd bitbake petalinux-image-minimal

If you want to build Docker with sysvinit instead, build against the default petalinux distro by omitting the DISTRO=petalinux-systemd override.
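
For reference, the sysvinit build is the same command without the distro override:

Build with sysvinit
$ MACHINE=zcu102-zynqmp bitbake petalinux-image-minimal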

Install Docker

Docker expects to run from a non-RAM-based root filesystem, since it uses pivot_root to jail the container.  For that reason it is recommended that you set up an SD card with VFAT and Ext4 partitions.  Once your SD card is partitioned, copy the boot images to the VFAT partition and extract the rootfs to the Ext4 partition.  These images are available in the deploy/images directory for the machine you built.  Copy the images as shown below, noting that the VFAT partition is mounted at /media/<user>/boot and the Ext4 partition at /media/<user>/rootfs.  This example uses the dtb built from the kernel tree; if you are using a custom HDF, use the dtb generated by the DTG instead and make sure your boot configuration (e.g. uEnv.txt) points to the correct dtb.
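
If you still need to create the partitions, a minimal sketch using parted and mkfs is shown below (the card is assumed to enumerate as /dev/sdX; double-check the device name before running these commands):

SD Card Partitioning (sketch)
$ sudo parted /dev/sdX --script mklabel msdos \
      mkpart primary fat32 1MiB 512MiB \
      mkpart primary ext4 512MiB 100%
$ sudo mkfs.vfat -n boot /dev/sdX1
$ sudo mkfs.ext4 -L rootfs /dev/sdX2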

SD Card Installation
$ cd tmp/deploy/images/zcu102-zynqmp
$ cp boot.bin /media/<user>/boot/
$ cp boot.scr /media/<user>/boot/
$ cp Image /media/<user>/boot/
$ sudo tar xf petalinux-image-minimal-zcu102-zynqmp.tar.gz -C /media/<user>/rootfs/

Run Docker

Insert the SD card into your ZCU102, connect the Ethernet port to an internet-connected router and boot the board.  To verify that the kernel is configured properly, you can run the Moby script check-config.sh.  Your output should look similar to the listing below.

Options

Some options show as missing; this is expected, either because the newer kernel no longer provides them or because they are not required.


check-config.sh
root@zcu102-zynqmp:~# /usr/share/docker/check-config.sh
info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: missing
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: missing
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: enabled
    (cgroup swap accounting is currently enabled)
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: missing
- CONFIG_CFQ_GROUP_IOSCHED: missing
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_VS: enabled
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled (as module)
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled
      - CONFIG_CRYPTO_GCM: enabled
      - CONFIG_CRYPTO_SEQIV: enabled
      - CONFIG_CRYPTO_GHASH: enabled
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: enabled
      - CONFIG_XFRM_ALGO: enabled
      - CONFIG_INET_ESP: enabled
      - CONFIG_INET_XFRM_MODE_TRANSPORT: missing
  - "ipvlan":
    - CONFIG_IPVLAN: enabled
  - "macvlan":
    - CONFIG_MACVLAN: enabled
    - CONFIG_DUMMY: enabled
  - "ftp,tftp client in container":
    - CONFIG_NF_NAT_FTP: enabled (as module)
    - CONFIG_NF_CONNTRACK_FTP: enabled
    - CONFIG_NF_NAT_TFTP: enabled (as module)
    - CONFIG_NF_CONNTRACK_TFTP: enabled
- Storage Drivers:
  - "aufs":
    - CONFIG_AUFS_FS: missing
  - "btrfs":
    - CONFIG_BTRFS_FS: enabled
    - CONFIG_BTRFS_FS_POSIX_ACL: enabled
  - "devicemapper":
    - CONFIG_BLK_DEV_DM: enabled
    - CONFIG_DM_THIN_PROVISIONING: enabled (as module)
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled
  - "zfs":
    - /dev/zfs: missing
    - zfs command: missing
    - zpool command: missing

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

Systemd

If you are running Docker with systemd and connman, then systemd will start dockerd and your network adapter will come up automatically during boot.
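
You can confirm this on the target with systemctl; the docker service should report active (running):

Docker service
root@zcu102-zynqmp:~# systemctl status docker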

Ramdisk

If you want to run Docker with systemd on a ramdisk, then you need the bbappend below in order to set the DOCKER_RAMDISK variable during boot.

docker-ce_git.bbappend
PACKAGES =+ "${PN}-conf"
FILES_${PN}-conf = "${sysconfdir}/systemd/system/docker.service.d/docker-ramdisk.conf"
ALLOW_EMPTY_${PN}-conf = "1"
RDEPENDS_${PN} += "${PN}-conf"

do_install_append() {
         if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
                install -d ${D}${sysconfdir}/systemd/system/docker.service.d
                if [ "${INITRAMFS_IMAGE}" ]; then
                        printf "[Service]\nEnvironment=\"DOCKER_RAMDISK=true\"" > ${D}${sysconfdir}/systemd/system/docker.service.d/docker-ramdisk.conf
                fi
         fi
}
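
The resulting drop-in file simply sets the environment variable for the service:

docker-ramdisk.conf
[Service]
Environment="DOCKER_RAMDISK=true"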

Sysvinit

If you want to continue running your system with sysvinit, then you will need to start dockerd manually in the background.

dockerd
root@zcu102-zynqmp:~# dockerd &

Ramdisk

If you want to run Docker with sysvinit on a ramdisk, then you must export the DOCKER_RAMDISK environment variable and set it to true before launching dockerd.

root@zcu102-zynqmp:~# export DOCKER_RAMDISK=true; dockerd &

You can verify that the Docker daemon is running by issuing the "ps" command.  Next, test that you can connect to it by issuing "docker info".

docker info
root@zcu102-zynqmp:~# docker info
Client:
 Debug Mode: false

Server:
 Containers: 28
  Running: 14
  Paused: 0
  Stopped: 14
 Images: 12
 Server Version: 19.03.2-ce
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: fd103cb716352c7e19768e4fed057f71d68902a0.m
 runc version: 652297c7c7e6c94e8d064ad5916c32891a6fd388-dirty
 init version: fec3683-dirty (expected: fec3683b971d9)
 Kernel Version: 5.4.0-xilinx-v2020.2
 Operating System: PetaLinux with systemd 2020.2 (zeus)
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 3.843GiB
 Name: zcu102-zynqmp
 ID: CWBI:2YLW:U7AW:NV7N:D5VX:Z7F2:KXP2:RHM5:VJH6:RYBO:SIUP:3JAX
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

With the ZCU102 booted and connected to the internet, run the standard hello-world container image to test your embedded Docker installation.  Your console should look similar to the hello-world listing.  Since this is the first time the container is run, the image must be pulled from the Docker Hub repository; subsequent runs will use the locally cached image.

hello-world
root@zcu102-zynqmp:~# docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
109db8fad215: Pull complete
Digest: sha256:7d91b69e04a9029b99f3585aaaccae2baa80bcf318f4a5d2165a9898cd2dc0a1
Status: Downloaded newer image for hello-world:latest
[  934.113183] IPv6: ADDRCONF(NETDEV_CHANGE): veth1d298ac: link becomes ready
[  934.120301] IPv6: ADDRCONF(NETDEV_CHANGE): vethda84d83: link becomes ready
[  934.129072] docker0: port 1(veth1d298ac) entered blocking state
[  934.135110] docker0: port 1(veth1d298ac) entered disabled state
[  934.141293] device veth1d298ac entered promiscuous mode
[  934.146700] docker0: port 1(veth1d298ac) entered blocking state
[  934.152669] docker0: port 1(veth1d298ac) entered forwarding state
[  934.158983] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[  936.156094] docker0: port 1(veth1d298ac) entered disabled state
[  936.163164] eth0: renamed from vethda84d83
[  936.194648] docker0: port 1(veth1d298ac) entered blocking state
[  936.200601] docker0: port 1(veth1d298ac) entered forwarding state

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

[  936.515970] docker0: port 1(veth1d298ac) entered disabled state
[  936.522176] vethda84d83: renamed from eth0
[  936.575748] IPv6: ADDRCONF(NETDEV_CHANGE): veth1d298ac: link becomes ready
[  936.582886] docker0: port 1(veth1d298ac) entered blocking state
[  936.588864] docker0: port 1(veth1d298ac) entered forwarding state
[  936.608257] docker0: port 1(veth1d298ac) entered disabled state
[  936.618962] device veth1d298ac left promiscuous mode
[  936.624009] docker0: port 1(veth1d298ac) entered disabled state

Once you have verified that Docker is working with hello-world, you can try running more advanced containers such as ubuntu, centos, gcc, etc.

Ubuntu container
root@zcu102-zynqmp:~# docker run -it ubuntu bash

Now you should be in an Ubuntu bash shell.  You can install applications with apt-get.  Type exit to quit the container.
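
For example, inside the container (the package chosen here is only an illustration):

apt-get inside the container
root@<container-id>:/# apt-get update
root@<container-id>:/# apt-get install -y python3
root@<container-id>:/# exit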

Kubernetes

Yocto supports Kubernetes with the meta-virtualization layer.

Version

At the time of this writing, Zeus was the latest release version, so this note implements k8s.  Hardknott and later releases support k3s, which is the recommended distribution for embedded systems.

Single Node Cluster on ZU+

We are going to create a very simple single-node cluster on ZU+ to show how to bring up a control-plane node.

Configuration

This section assumes a system configured with systemd and a persistent rootfs (SD).

Add Kubernetes to your build by including it in your image in local.conf.  Then bitbake and deploy your image.

Kubernetes
IMAGE_INSTALL_append = " kubernetes"

Once your system boots, prevent connman from proxying DNS; otherwise coredns will detect a forwarding loop (and enter CrashLoopBackOff) because connman points DNS resolution at itself (localhost).

No DNS Proxy
root@zcu102-zynqmp:~# mkdir /etc/systemd/system/connman.service.d
root@zcu102-zynqmp:~# cat <<EOF | tee /etc/systemd/system/connman.service.d/disable_dns_proxy.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/connmand -n --nodnsproxy
EOF
root@zcu102-zynqmp:~# systemctl daemon-reload
root@zcu102-zynqmp:~# systemctl restart connman.service

Now you can make sure that your resolver configuration is pointing to the expected DNS servers.

resolv.conf
root@zcu102-zynqmp:~# cat /etc/resolv.conf
# Generated by Connection Manager
nameserver 8.8.8.8
nameserver 8.8.4.4

Next we will initialize the cluster with kubeadm, specifying the API server advertise address (the IP address of our ZCU102) and the pod network CIDR, which must match the range expected by our network overlay provider (Flannel in this case).

Initialization
root@zcu102-zynqmp:~# kubeadm init --apiserver-advertise-address 10.0.1.6 --pod-network-cidr=10.244.0.0/16
I0913 18:57:53.943718     855 version.go:251] remote version is much newer: v1.22.1; falling back to: stable-1.16
[init] Using Kubernetes version: v1.16.15
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [zcu102-zynqmp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.1.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [zcu102-zynqmp localhost] and IPs [10.0.1.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [zcu102-zynqmp localhost] and IPs [10.0.1.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[  543.916443] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 47.005767 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node zcu102-zynqmp as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node zcu102-zynqmp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lq974o.i836dtli0jx896ih
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.1.6:6443 --token lq974o.i836dtli0jx896ih \
    --discovery-token-ca-cert-hash sha256:b595992b84e6af385e48f92eabd8dff9477c1686dc2dfe1f54c8c06afc43d198

Before we proceed, we need to set up administration privileges for the user per the init message.

Admin
root@zcu102-zynqmp:~# mkdir -p $HOME/.kube
root@zcu102-zynqmp:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@zcu102-zynqmp:~# chown $(id -u):$(id -g) $HOME/.kube/config

Next apply the Flannel network overlay.

Network Overlay
root@zcu102-zynqmp:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Network Overlay

You may also experiment with other network overlay providers.  Flannel is a simple overlay and is known to work on Arm devices.

Once the network is configured, let's check some status.  Run kubectl get nodes until the status indicates Ready.

Get Nodes
root@zcu102-zynqmp:~# kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
zcu102-zynqmp   NotReady   master   56m   v1.16.2-dirty
root@zcu102-zynqmp:~# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
zcu102-zynqmp   Ready    master   56m   v1.16.2-dirty

Next let's check the pods in the kube-system namespace.  All should be ready and running (this may take some time).

Pods
root@zcu102-zynqmp:~# kubectl --namespace kube-system get pod
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-f47xs                1/1     Running   0          56m
coredns-5644d7b6d9-zfbtz                1/1     Running   0          56m
etcd-zcu102-zynqmp                      1/1     Running   0          56m
kube-apiserver-zcu102-zynqmp            1/1     Running   3          56m
kube-controller-manager-zcu102-zynqmp   1/1     Running   2          56m
kube-flannel-ds-ksvzd                   1/1     Running   0          92s
kube-proxy-4wm9d                        1/1     Running   0          56m
kube-scheduler-zcu102-zynqmp            1/1     Running   2          56m

Then we can run kubectl describe node to see a description of our control-plane node.

Describe Node
root@zcu102-zynqmp:~# kubectl describe node zcu102-zynqmp
Name:               zcu102-zynqmp
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=zcu102-zynqmp
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"1e:42:6a:de:45:4b"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.1.6
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 13 Sep 2021 19:03:06 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 13 Sep 2021 19:59:30 +0000   Mon, 13 Sep 2021 19:59:30 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Mon, 13 Sep 2021 20:00:39 +0000   Mon, 13 Sep 2021 19:03:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 13 Sep 2021 20:00:39 +0000   Mon, 13 Sep 2021 19:03:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 13 Sep 2021 20:00:39 +0000   Mon, 13 Sep 2021 19:03:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 13 Sep 2021 20:00:39 +0000   Mon, 13 Sep 2021 19:59:38 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.1.6
  Hostname:    zcu102-zynqmp
Capacity:
 cpu:                4
 ephemeral-storage:  2711376Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 hugepages-32Mi:     0
 hugepages-64Ki:     0
 memory:             4029196Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  2498804118
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 hugepages-32Mi:     0
 hugepages-64Ki:     0
 memory:             3926796Ki
 pods:               110
System Info:
 Machine ID:                 ebc1228e4ad0458ba9e7302a18dd4754
 System UUID:                ebc1228e4ad0458ba9e7302a18dd4754
 Boot ID:                    0d96cb50-2a6c-4a70-aff9-bf4b399a6370
 Kernel Version:             5.4.0-xilinx-v2020.2
 OS Image:                   PetaLinux with systemd 2020.2 (zeus)
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://19.3.2
 Kubelet Version:            v1.16.2-dirty
 Kube-Proxy Version:         v1.16.2-dirty
PodCIDR:                     10.244.0.0/24
PodCIDRs:                    10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                     ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-5644d7b6d9-f47xs                 100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     58m
  kube-system                coredns-5644d7b6d9-zfbtz                 100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     58m
  kube-system                etcd-zcu102-zynqmp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57m
  kube-system                kube-apiserver-zcu102-zynqmp             250m (6%)     0 (0%)      0 (0%)           0 (0%)         57m
  kube-system                kube-controller-manager-zcu102-zynqmp    200m (5%)     0 (0%)      0 (0%)           0 (0%)         57m
  kube-system                kube-flannel-ds-ksvzd                    100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      2m58s
  kube-system                kube-proxy-4wm9d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58m
  kube-system                kube-scheduler-zcu102-zynqmp             100m (2%)     0 (0%)      0 (0%)           0 (0%)         57m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  100m (2%)
  memory             190Mi (4%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                       Message
  ----    ------                   ----               ----                       -------
  Normal  Starting                 59m                kubelet, zcu102-zynqmp     Starting kubelet.
  Normal  NodeAllocatableEnforced  59m                kubelet, zcu102-zynqmp     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  59m (x8 over 59m)  kubelet, zcu102-zynqmp     Node zcu102-zynqmp status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    59m (x8 over 59m)  kubelet, zcu102-zynqmp     Node zcu102-zynqmp status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     59m (x7 over 59m)  kubelet, zcu102-zynqmp     Node zcu102-zynqmp status is now: NodeHasSufficientPID
  Normal  Starting                 58m                kube-proxy, zcu102-zynqmp  Starting kube-proxy.
  Normal  NodeReady                2m1s               kubelet, zcu102-zynqmp     Node zcu102-zynqmp status is now: NodeReady

We can also verify the versions of our components.

Version
root@zcu102-zynqmp:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.2-dirty", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"dirty", BuildDate:"2021-07-30T20:57:27Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}

root@zcu102-zynqmp:~# kubelet --version
Kubernetes v1.16.2-dirty

root@zcu102-zynqmp:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.2-dirty", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"dirty", BuildDate:"2021-07-30T20:58:04Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.15", GitCommit:"2adc8d7091e89b6e3ca8d048140618ec89b39369", GitTreeState:"clean", BuildDate:"2020-09-02T11:31:21Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/arm64"}

For security reasons, pods cannot be scheduled on the control-plane node by default.  For experimentation on this single-node cluster, you may run the command below to remove the taint and allow pods to run on the control plane.

Untaint
root@zcu102-zynqmp:~# kubectl taint nodes --all node-role.kubernetes.io/master-
node/zcu102-zynqmp untainted

Congrats

You now have a Kubernetes control-plane node up and ready to orchestrate containers on your ZCU102!
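
As a quick smoke test, you can now schedule a simple deployment on the node (the nginx image is only an illustration; any image with an arm64 variant will do):

Smoke test
root@zcu102-zynqmp:~# kubectl create deployment hello --image=nginx
root@zcu102-zynqmp:~# kubectl get pods -w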

Related Links

Creating a Custom Yocto Layer

Adding a Hardware Platform to a Xilinx Yocto Layer

Customizing Device Trees in Xilinx Yocto

Xilinx Yocto Builds without an Internet Connection

References

Yocto Mega Manual

BitBake User Manual

Docker

Moby Project

What is Kubernetes

Systemd - freedesktop.org

© Copyright 2019 - 2022 Xilinx Inc.