Embedded module TQMa8MPxL - YOCTO Linux BSP documentation


Prerequisites

The default BSP has already been built, the build environment is set up, and ci-meta-tq is the current directory.
Refer to Quickstart BSP for more information.

The goal of this guide is to make use of the i.MX Machine Learning features as described in the i.MX Machine Learning User's Guide.

These features are bundled in the Yocto layer meta-ml, provided by meta-imx.
It adds the i.MX Machine Learning features and includes the eIQ software stack, which enables the use of specialized hardware accelerators (NPU/GPU).

First, move into your sources directory and clone meta-imx as well as its dependencies meta-freescale-distro and meta-python2.

Each layer needs to be checked out at the branch matching your BSP; this example refers to BSP revision 0085:
cd sources

git clone https://github.com/nxp-imx/meta-imx.git
cd meta-imx/
git checkout kirkstone-5.15.71-2.2.0
cd ..

git clone https://github.com/Freescale/meta-freescale-distro.git
cd meta-freescale-distro/
git checkout kirkstone
cd ..

git clone git://git.openembedded.org/meta-python2
cd meta-python2/
git checkout hardknott
cd ..

cd ..


Next, move into the conf folder in your build directory and edit bblayers.conf:

cd tqma8mpxl_build/conf/
nano bblayers.conf

Now add the following layers to the BBLAYERS variable:

${BSPDIR}/sources/meta-imx/meta-ml
${BSPDIR}/sources/meta-imx/meta-bsp
${BSPDIR}/sources/meta-imx/meta-sdk
${BSPDIR}/sources/meta-freescale-distro
${BSPDIR}/sources/meta-python2
Note that meta-ml requires meta-bsp and meta-sdk (also provided by meta-imx) to be present as well.

The contents of BBLAYERS should look something like this when you're done:

BBLAYERS = "\
	${BSPDIR}/sources/poky/meta \
	${BSPDIR}/sources/poky/meta-poky \
	\
	${BSPDIR}/sources/meta-openembedded/meta-oe \
	${BSPDIR}/sources/meta-openembedded/meta-python \
	${BSPDIR}/sources/meta-openembedded/meta-multimedia \
	\
	${BSPDIR}/sources/meta-freescale \
	\
	${BSPDIR}/sources/meta-qt5 \
	\
	${BSPDIR}/sources/meta-tq \
	${BSPDIR}/sources/meta-dumpling \
	\
	${BSPDIR}/sources/meta-imx/meta-ml \
	${BSPDIR}/sources/meta-imx/meta-bsp \
	${BSPDIR}/sources/meta-imx/meta-sdk \
	${BSPDIR}/sources/meta-freescale-distro \
	${BSPDIR}/sources/meta-python2 \
"


Next, append the required packages to your image; this is done in local.conf here. You may simply add packagegroup-imx-ml to the IMAGE_INSTALL_append variable.

To find out which packages packagegroup-imx-ml contains, for example if you only need a few of them and want to append those individually, refer to sources/meta-imx/meta-ml/recipes-fsl/packagegroup/packagegroup-imx-ml.bb.
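
For a quick look at the package list without opening the recipe in an editor, you can also grep it for its runtime dependencies from the BSP top-level directory (a simple sketch; the variable layout inside the recipe may differ between releases):

grep -A 20 "RDEPENDS" sources/meta-imx/meta-ml/recipes-fsl/packagegroup/packagegroup-imx-ml.bb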

Some of the included Python demos require the Python module pillow.
You may simply append the package python3-pillow; however, if you will need additional Python modules for Python development later on anyway, we recommend appending python3-pip.
This allows easy installation of any Python module directly on the target (given an internet connection to https://pypi.org or a local Python wheel).
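
With python3-pip in the image, missing modules can later be installed directly on the target, for example (assuming the target has internet access; pillow serves as the example module here):

pip3 install pillow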

The result, depending on the packages you want to append, should look something like this:

IMAGE_INSTALL_append = " packagegroup-imx-ml python3-pip python3-pillow"
Note: packagegroup-imx-ml apparently does not include OpenCV DNN (described in i.MX Machine Learning User's Guide, Chapter 7 "OpenCV machine learning demos"). However, since models from the OpenCV DNN module cannot run on the specialized NPU/GPU accelerator hardware, you will most likely ignore it anyway. If you need OpenCV DNN, you should build an imx-image-full, because the majority of the OpenCV DNN demos require a Qt GUI.

You may also want to build the NXP eIQ packages into the SDK. For this purpose, add this line to your local.conf:

TOOLCHAIN_TARGET_TASK_append += " tensorflow-lite-dev onnxruntime-dev"
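
The SDK itself is then generated with the populate_sdk task of your image target, for example (using the image from the Quickstart; any other image target works the same way):

bitbake tq-image-weston-debug -c populate_sdk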


The hardware accelerators are disabled by default (as of BSP revision 0085); therefore a kernel patch has to be created to activate them in the device tree.

Without the patch, the warning [query_hardware_caps:66]Unsupported evis version is thrown and execution automatically falls back to the CPU using the standard C floating point library operations. This results in even slower inference than with the XNNPACK delegate used by default, as that delegate implements optimized float operations for the CPU.

To learn how to create kernel patches and add them to the kernel recipe, you may refer to section Creating and adding Linux kernel patch in our Yocto Build System Guide.

To activate the machine learning hardware accelerators, add the following lines to the device tree file arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts:

&ml_vipsi {
        status = "okay";
};
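
As a rough sketch of how such a patch could be wired into the build, a bbappend for the kernel recipe might look like the following (the recipe name linux-imx-tq, the layer path and the patch file name are assumptions for illustration; use the kernel recipe and override syntax of your actual BSP as described in the Yocto Build System Guide):

# hypothetical file: <your-layer>/recipes-kernel/linux/linux-imx-tq_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-enable-ml-accelerators-mba8mpxl.patch"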


Now rebuild your image as described in Quickstart BSP (bitbake tq-image-weston-debug).
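
After booting the new image, you can verify that inference actually runs on the accelerator, for example with the TensorFlow Lite benchmark tool shipped by eIQ (a sketch following the i.MX Machine Learning User's Guide; the tensorflow-lite version directory, the example model and the delegate path may differ on your image):

cd /usr/bin/tensorflow-lite-2.9.1/examples
./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --external_delegate_path=/usr/lib/libvx_delegate.so

With the accelerators enabled, the reported inference times should drop noticeably compared to a plain CPU run without the --external_delegate_path option.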

BSP revision 0085 - Known issues

  1. Despite the accelerators being activated in the device tree, when trying to force acceleration on the GPU instead of the NPU (by setting the environment variable USE_GPU_INFERENCE=1), the warning [query_hardware_caps:66]Unsupported evis version is still thrown and execution falls back to the CPU.
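
The environment variable is passed per invocation, for example (reusing the hypothetical benchmark call from above):

USE_GPU_INFERENCE=1 ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --external_delegate_path=/usr/lib/libvx_delegate.so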