Build guide#

To successfully build the Intel® Tiber™ Broadcast Suite, you need to follow a series of steps involving BIOS configuration, driver installation, host machine setup, and package installation. Depending on your preference, you can install the suite as a Docker application (the recommended method) or directly on a bare metal machine.

1. Prerequisites#

Steps to perform before running Intel® Tiber™ Broadcast Suite on a host with the Ubuntu operating system installed.

1.1. BIOS Settings#

Note: It is recommended to properly set up BIOS settings before proceeding. Depending on the manufacturer, labels may vary. Please consult an instruction manual or ask a platform vendor for detailed steps.

For the Media Transport Library (MTL) to function properly, the virtualization technologies VT-d and VT-x must be enabled in the BIOS (see section 1.5 for the detailed IOMMU steps).

1.2. Install Docker#

Note: This step is optional if you plan to install Intel® Tiber™ Broadcast Suite locally (without Docker).

1.2.1. Install Docker Build Environment#

To install the Docker environment, please refer to the official Docker Engine on Ubuntu installation manual’s Install using the apt repository section.

Note: Do not skip docker-buildx-plugin installation, otherwise the build.sh script may not run properly.

1.2.2. Setup Docker Proxy#

Depending on your network environment, it may be necessary to set up a proxy. In that case, please refer to the Configure the Docker client section of the Configure Docker to use a proxy server guide.
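A minimal sketch of the client-side configuration is shown below; it writes ~/.docker/config.json with placeholder proxy values (proxy.example.com:8080 and the no-proxy list are assumptions to replace with your own settings):

# Hypothetical proxy values - adjust to your environment
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:8080",
      "httpsProxy": "http://proxy.example.com:8080",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF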

1.3. Install GPU Driver#

1.3.1. Intel Flex GPU Driver#

To install the Flex GPU driver, follow the 1.4.3. Ubuntu Install Steps part of the Installation guide for Intel® Data Center GPUs.

Note: If prompted with Unable to locate package, please ensure the repository key intel-graphics.key is properly dearmored and installed as /usr/share/keyrings/intel-graphics.gpg.
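If the key needs to be (re)installed, the dearmoring step might look like the following sketch; the key URL follows the Intel Data Center GPU guide and may differ for your repository setup:

# Download the Intel graphics repository key and install it dearmored (URL may vary)
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
  sudo gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg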

Use the vainfo command to check the GPU installation:

sudo vainfo

1.3.2. Nvidia GPU Driver#

In case of using an Nvidia GPU, please follow the steps below:

sudo apt install --install-suggests nvidia-driver-550-server
sudo apt install nvidia-utils-550-server

In case of any issues, please follow the Nvidia GPU driver install steps.

Note: The supported Nvidia driver version compatible with the packages inside the Docker container is:

  • Driver Version: 550.90.07

  • CUDA Version: 12.4
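After installation and a reboot, nvidia-smi can be used to confirm that the reported driver and CUDA versions match the note above:

nvidia-smi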

1.5. Configure VFIO (IOMMU) Required by PMD-based DPDK#

If you have already enabled IOMMU, you can skip this step. To check if IOMMU is enabled, please verify if there are any IOMMU groups listed under the /sys/kernel/iommu_groups/ directory. If no groups are found, it indicates that IOMMU is not enabled.

ls -l /sys/kernel/iommu_groups/

Enable IOMMU (VT-d and VT-x) in BIOS#

The steps to enable IOMMU in your BIOS/UEFI may vary depending on the manufacturer and model of your motherboard. Here are general steps that should guide you:

  1. Restart your computer. During the boot process, you’ll need to press a specific key to enter the BIOS/UEFI setup. This key varies depending on your system’s manufacturer. It’s often one of the function keys (like F2, F10, F12), the ESC key, or the DEL key.

  2. Navigate to the advanced settings. Once you’re in the BIOS/UEFI setup menu, look for a section with a name like “Advanced”, “Advanced Options”, or “Advanced Settings”.

  3. Look for IOMMU setting. Within the advanced settings, look for an option related to IOMMU. It might be listed under CPU Configuration or Chipset Configuration, depending on your system. For Intel systems, it’s typically labeled as “VT-d” (Virtualization Technology for Directed I/O). Once you’ve located the appropriate option, change the setting to “Enabled”.

  4. Save your changes and exit. There will typically be an option to “Save & Exit” or “Save Changes and Reset”. Select this to save your changes and restart the computer.

Enable IOMMU in Kernel#

After enabling IOMMU in the BIOS, you need to enable it in your operating system as well.

Ubuntu/Debian#

Edit the GRUB_CMDLINE_LINUX_DEFAULT entry in the /etc/default/grub file and append the parameters below to it if they are not already present.

sudo vim /etc/default/grub
intel_iommu=on iommu=pt
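After the edit, the GRUB_CMDLINE_LINUX_DEFAULT line should look similar to the example below (keep any options already present in your file; the values besides the IOMMU parameters are only an assumption):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"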

then:

sudo update-grub
sudo reboot

CentOS/RHEL9#

sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
sudo reboot

For non-Intel devices, contact the vendor for how to enable IOMMU.

Double Check iommu_groups Creation by Kernel After Reboot#

ls -l /sys/kernel/iommu_groups/

If no IOMMU groups are found under the /sys/kernel/iommu_groups/ directory, it is likely that the previous two steps were not completed as expected. You can use the following two commands to identify which part was missed:

# Check if "intel_iommu=on iommu=pt" is included
cat /proc/cmdline
# Check if CPU flags have vmx feature
lscpu | grep vmx

Unlock RLIMIT_MEMLOCK for non-root Run#

Skip this step for Ubuntu since the default RLIMIT_MEMLOCK is set to unlimited already.

Some operating systems, including CentOS Stream and RHEL 9, set a small RLIMIT_MEMLOCK limit (the amount of pinned pages a process is allowed to have), which will cause DMA remapping to fail at runtime. Please edit /etc/security/limits.conf and append the two lines below at the end of the file, replacing <USER> with the username currently logged in.

<USER>    hard   memlock           unlimited
<USER>    soft   memlock           unlimited

Reboot the system to let the settings take effect.
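After the reboot, the effective limit for the logged-in user can be verified with the command below; a value of unlimited is expected:

ulimit -l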

1.6. (Optional) Configure PTP#

The Precision Time Protocol (PTP) facilitates global timing accuracy in the microsecond range for all essences. Typically, a PTP grandmaster is deployed within the network, and clients synchronize with it using tools like ptp4l. This library includes its own PTP implementation, and a sample application offers the option to enable it. Please refer to section Built-in PTP for instructions on how to enable it.

By default, the built-in PTP feature is disabled, and the PTP clock relies on the system time source of the user application (clock_gettime). However, if the built-in PTP is enabled, the internal NIC time will be selected as the PTP source.

Linux ptp4l Setup to Sync System Time with Grandmaster#

First, run ptp4l to synchronize the PHC time with the grandmaster; adjust the interface name (ens801f2 in the examples) to match your setup.

sudo ptp4l -i ens801f2 -m -s -H

Then run phc2sys to synchronize the system time to the PHC time. Please make sure the NTP service is disabled, as it conflicts with phc2sys.

sudo phc2sys -s ens801f2 -m -w
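Disabling the time-sync service depends on the distribution; on a systemd-based system it might look like the following (service names are assumptions, adjust to your setup):

sudo timedatectl set-ntp false
sudo systemctl stop chronyd        # or: sudo systemctl stop systemd-timesyncd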

Built-in PTP#

This project includes built-in support for the Precision Time Protocol (PTP), based on the hardware timesync feature of the Network Interface Card (NIC). This combination allows achieving a PTP time clock source with an accuracy of approximately 30 ns.

To enable this feature in the RxTxApp sample application, use the --ptp argument. The control for the built-in PTP feature is the MTL_FLAG_PTP_ENABLE flag in the mtl_init_params structure.
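A hypothetical invocation is sketched below; the binary path and the JSON configuration file are placeholders that assume a default MTL build layout:

# Placeholder paths - adjust to your MTL build and config
./build/app/RxTxApp --config_file <your_config.json> --ptp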

Note: Currently, the VF (Virtual Function) does not support the hardware timesync feature. Therefore, for VF deployment, the timestamp of the transmitted (TX) and received (RX) packets is read from the CPU TSC (TimeStamp Counter) instead. In this case, it is not possible to obtain a stable delta in the PTP adjustment, and the maximum accuracy achieved will be up to 1us.

2. Install Intel® Tiber™ Broadcast Suite#

Option #1: Build Docker Image from Dockerfile Using build.sh Script#

Note: This method is recommended over Option #2: layers are built in parallel and cross-compatibility is possible.

  1. Access the project directory.

cd Intel-Tiber-Broadcast-Suite

  2. Install dependencies.

sudo apt-get update
sudo apt-get install meson python3-pyelftools libnuma-dev

  3. Run the build.sh script.

Note: For the build.sh script to run without errors, docker-buildx-plugin must be installed. The error thrown without the plugin does not point to the missing plugin but rather states that the flags are incorrect. See section 1.2.1. Install Docker Build Environment for installation details.

./build.sh

Option #2: Local Installation of the Applications from Sources#

  1. Navigate to the source directory of the project to begin the local installation process.

cd <project_dir>/src

  2. Run the build_local.sh script.

./build_local.sh

This script is used to build the project with optional configurations. It accepts the following parameters:

Parameters:#

  • -ut: Enables building with unit tests. Use this option if you want to include unit tests in the build process.

  • --build_type <type>: Specifies the build type. Replace <type> with the desired build configuration, such as “Debug” or “Release”. This allows you to control the optimization and debugging settings of the build.

  • -h, --help: Displays the help message, showing usage instructions and available options. Use this option to understand how to use the script.
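For example, a Debug build that also compiles the unit tests (using the flags documented above) could be invoked as:

./build_local.sh --build_type Debug -ut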

Option #3: Install Docker Image from Docker Hub#

Visit the Intel® Tiber™ Broadcast Suite image page on Docker Hub (https://hub.docker.com/r/intel/intel-tiber-broadcast-suite/) to select the most appropriate version.

Pull the Intel® Tiber™ Broadcast Suite image from Docker Hub:

docker pull intel/intel-tiber-broadcast-suite:latest
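To confirm that the image is available locally after the pull:

docker images intel/intel-tiber-broadcast-suite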

Option #4: Build Docker Image from Dockerfile Manually#

Note: The method below does not require buildx, but it lacks cross-compatibility and may prolong the build process.

  1. Download, Patch, Build, and Install DPDK from source code.

    1. Download and Extract DPDK and MTL:

      # Run from the Intel-Tiber-Broadcast-Suite repository root so that versions.env is available
      . versions.env && mkdir -p ${HOME}/Media-Transport-Library && curl -Lf https://github.com/OpenVisualCloud/Media-Transport-Library/archive/refs/tags/${MTL_VER}.tar.gz | tar -zx --strip-components=1 -C ${HOME}/Media-Transport-Library

      . versions.env && mkdir -p dpdk && curl -Lf https://github.com/DPDK/dpdk/archive/refs/tags/v${DPDK_VER}.tar.gz | tar -zx --strip-components=1 -C dpdk
      
    2. Apply Patches from Media Transport Library:

      # Apply patches:
      . versions.env && cd dpdk && git apply ${HOME}/Media-Transport-Library/patches/dpdk/$DPDK_VER/*.patch
      
    3. Build and Install DPDK:

      # Prepare the build directory:
      meson build
      
      # Build DPDK:
      ninja -C build
      
      # Install DPDK:
      sudo ninja -C build install
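
      # Optional sanity check that DPDK was installed and is visible to pkg-config
      # (environment-dependent assumption; `sudo ldconfig` may be needed first):
      pkg-config --modversion libdpdk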
      
    4. Clean up:

      cd ..
      rm -drf dpdk
      
  2. Build image using Dockerfile:

    docker build $(cat versions.env | xargs -I {} echo --build-arg {}) -t video_production_image -f Dockerfile .
    
  3. The number of cores used by make during the build can be changed with the flag --build-arg nproc={number of proc}:

    docker build $(cat versions.env | xargs -I {} echo --build-arg {}) --build-arg nproc=1 -t video_production_image -f Dockerfile .
    
  4. Build the MTL Manager docker:

    cd ${HOME}/Media-Transport-Library/manager
    docker build --build-arg VERSION=1.0.0.TIBER -t mtl-manager:latest .
    cd -
    

3. (Optional) Install Media Proxy#

To use Media Communications Mesh as a transport layer, make sure that Media Proxy is available on the host.

To install Media Proxy, please follow the steps below.

Note: This step is required, for example, for the Media Proxy pipeline.

Option #2: Local installation#

  1. Clone the Media Communications Mesh repository

    git clone https://github.com/OpenVisualCloud/Media-Communications-Mesh.git
    cd Media-Communications-Mesh
    
  2. Install Dependencies

    • gRPC: Refer to the gRPC documentation for installation instructions.

    • Install required packages:

      • Ubuntu/Debian

        sudo apt-get update
        sudo apt-get install libbsd-dev cmake make rdma-core libibverbs-dev librdmacm-dev dracut
        
      • CentOS stream

        sudo yum install -y libbsd-devel cmake make rdma-core libibverbs-devel librdmacm-devel dracut
        
    • Install the irdma driver and libfabric:

      ./scripts/setup_rdma_env.sh install
      
    • Reboot.

    Tip: More information about libfabric installation can be found in Building and installing libfabric from source.

  3. Build the Media Proxy binary

    ./build.sh
    

4. Preparation to Run Intel® Tiber™ Broadcast Suite#

The BCS pod launcher starts a single Media Proxy Agent instance (on one machine) and an MCM Media Proxy instance on each machine. It can then start the BCS FFmpeg pipeline together with a bound NMOS client node application.

  graph TD
    A[BCS Pod Launcher] -->|Starts| B[Media Proxy Agent Instance]
    A -->|Starts| C[MCM Media Proxy Instances]
    C -->|One per Machine| D[MCM Media Proxy Instance]
    A -->|Starts| E[BCS FFMPEG Pipeline]
    E -->|Bound to| F[NMOS Client Node Application]

The BCS Pod Launcher is the central controller that starts all other components.
The Media Proxy Agent Instance is a single instance started by the BCS Pod Launcher on one machine.
The MCM Media Proxy Instances are multiple instances started by the BCS Pod Launcher, one on each machine.
The BCS FFMPEG Pipeline is started by the BCS Pod Launcher, while the NMOS Client Node Application is bound to it for media management and discovery. These two components run as separate containers, either standalone or within one pod.

There are 2 possible use cases:

  • run in containerized mode - <repo>/launcher/internal/container_controller/container_controller.go implements a DockerContainerController that is responsible for managing Docker containers based on the launcher configuration. The DockerContainerController is designed to:

    • Parse the launcher configuration file.

    • Create and run Docker containers based on the parsed configuration.

    • Handle container lifecycle operations such as checking whether a container is running, removing existing containers, and starting new ones based on the input configuration file in JSON format.

  • run in orchestrated mode in a cluster using Kubernetes - <repo>/launcher/internal/controller/bcsconfig_controller.go implements the Kubernetes controller logic for managing BcsConfig custom resources. This controller is responsible for reconciling the state of the cluster with the desired state defined in the BcsConfig resources. The BcsConfigReconciler is designed to:

    • Watch for changes to BcsConfig custom resources.

    • Reconcile the desired state by creating or updating Kubernetes resources such as ConfigMaps, Deployments, and Services.

    • Ensure that the BcsConfig resources are correctly applied and maintained in the cluster.
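Once the controller is deployed, the custom resources can be inspected with kubectl; the resource name below is an assumption derived from the BcsConfig kind:

kubectl get bcsconfig -A
kubectl describe bcsconfig <name> -n <namespace>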

A prerequisite is to build 4 images in advance if you do not have access to any registry with released images:

  • mesh-agent

  • media-proxy

  • tiber-broadcast-suite (in this scenario it is the Broadcast Suite pipeline app - ffmpeg)

  • tiber-broadcast-suite-nmos-node

Prerequisite - build necessary images#

mesh-agent and media-proxy#

It is recommended to follow the Media Communications Mesh Setup Guide and complete the steps by running the build_docker.sh script. There is no need to execute docker run or kubectl apply/create; the BCS launcher will do it for you.

tiber-broadcast-suite and tiber-broadcast-suite-nmos-node#

git clone https://github.com/OpenVisualCloud/Intel-Tiber-Broadcast-Suite.git Intel-Tiber-Broadcast-Suite
cd Intel-Tiber-Broadcast-Suite
./build.sh
# first_run.sh needs to be run after every restart of the machine
./first_run.sh

NOTE: The MTL Manager image is built together with the tiber-broadcast-suite and tiber-broadcast-suite-nmos-node images by the build.sh script, and the MTL Manager container is started by first_run.sh. In this version, the BCS launcher already supports MTL Manager deployment, but before running the BCS launcher make sure that the MTL Manager container is not running as a standalone app. MTL Manager is necessary to run scenarios with the ST 2110 stream type.

Description

The tool can operate in two modes:

  • Kubernetes Mode: For multi-node cluster deployment.

  • Docker Mode: For single-node using Docker containers.

Flow (Common to Both Modes)

  1. Run MediaProxy Agent

  2. Run MCM Media Proxy

  3. Run BcsFfmpeg pipeline with NMOS

In the case of Docker, the MediaProxy/MCM components are started only once, and on every run of the launcher the apps are started according to the input file. The launcher does not store the state of the apps; it only checks the appropriate conditions.

In the case of Kubernetes, the MediaProxy/MCM components are likewise run only once, and the BCS pod launcher works as an operator (in the sense of Kubernetes operators) within a pod. That is why the input in this case is a custom resource called BcsConfig.

Additional necessary steps when Docker (containers) scenario/mode is used#

Proceed to the run.md instructions to run the containers using the BCS launcher.

Additional necessary steps when Kubernetes scenario/mode is used#

IMPORTANT NOTE! The prerequisite is to prepare a cluster (for example, the simplest one can be set up by following the guide below): Creating a cluster with kubeadm

Build image:

cd <repo>/launcher
docker build -t bcs_pod_launcher:controller .

NOTE! If you have issues with building, try adding the proxy build arguments --build-arg http_proxy=<proxy> and --build-arg https_proxy=<proxy>. Example: docker build --build-arg http_proxy=<proxy> --build-arg https_proxy=<proxy> -t bcs_pod_launcher:controller .

Then proceed to run.md instructions to deploy resources on the cluster.


When using the BCS pod launcher, there is no need to execute steps 4.2-4.4, because they refer to local installation on the machines.

4.2. First Run Script#

Note: first_run.sh needs to be run after every restart of the machine.

Note: To obtain the mtl-manager:latest image, you need to build it using the <repo_dir>/build.sh script.

From the root of the Intel® Tiber™ Broadcast Suite repository, execute the first_run.sh script, which sets up the hugepages, the locks for MTL, and the E810 NIC's virtual functions, and runs the MtlManager Docker container:

sudo -E ./first_run.sh | tee virtual_functions.txt

Note: Please ensure the command is executed with -E switch, to copy all the necessary environment variables. Lack of the switch may cause the script to fail silently.

When running the Intel® Tiber™ Broadcast Suite locally, please execute first_run.sh with the -l argument.

sudo -E ./first_run.sh -l | tee virtual_functions.txt

This script will start the Mtl Manager locally. To avoid issues with core assignment in Docker, ensure that the Mtl Manager is running. The Mtl Manager is typically run within a Docker container, but the -l argument allows it to be executed directly from the terminal.

Note: Ensure that MtlManager is running when using the Intel® Tiber™ Broadcast Suite locally. You can check this by running pgrep -l "MtlManager". If it is not running, start it with the command sudo MtlManager.

Note: To avoid unnecessary reruns, preserve the command's output in a file to record which interface was bound to which virtual functions.

4.3. Test Docker Installation#

docker run --rm -it --user=root --privileged video_production_image --help

4.4. Test Local Installation#

ffmpeg --help

5. Running the Image#

Go to the Running Intel® Tiber™ Broadcast Suite Pipelines instruction for more details on how to run the image.