Build guide#

To successfully build the Intel® Tiber™ Broadcast Suite, you need to follow a series of steps involving BIOS configuration, driver installation, host machine setup, and package installation. Depending on your preference, you can install the suite as a Docker application (the recommended method) or directly on a bare metal machine.

1. Prerequisites#

Steps to perform before running the Intel® Tiber™ Broadcast Suite on a host with the Ubuntu operating system installed.

1.1. BIOS settings#

Note: It is recommended to properly set up the BIOS before proceeding. Labels may vary depending on the manufacturer. Please consult the instruction manual or ask your platform vendor for detailed steps.

The following technologies must be enabled for the Media Transport Library (MTL) to function properly:

1.2. Install Docker#

Note: This step can be skipped if you plan to install the Intel® Tiber™ Broadcast Suite locally.

1.2.1. Install Docker build environment#

To install the Docker environment, please refer to the Install using the apt repository section of the official Docker Engine on Ubuntu installation manual.

Note: Do not skip docker-buildx-plugin installation, otherwise the build.sh script may not run properly.

1.2.2. Set up Docker proxy#

Depending on your network environment, you may need to set up a proxy. In that case, please refer to the Configure the Docker client section of the Configure Docker to use a proxy server guide.
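For reference, a minimal client-side proxy configuration in ~/.docker/config.json looks like this (the proxy addresses below are placeholders to replace with your own):

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}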

1.3. Install GPU driver#

1.3.1. Intel Flex GPU driver#

To install the Flex GPU driver, follow the 1.4.3. Ubuntu Install Steps part of the Installation guide for Intel® Data Center GPUs.

Note: If prompted with Unable to locate package, please ensure that the repository key intel-graphics.key is properly dearmored and installed as /usr/share/keyrings/intel-graphics.gpg.

Use the vainfo command to verify the GPU installation:

sudo vainfo
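If the vainfo tool is not present, it can usually be installed from the standard Ubuntu repositories (assumption: the vainfo package is available in your configured sources):

sudo apt-get update
sudo apt-get install -y vainfo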

1.3.2. Nvidia GPU driver#

If you are using an Nvidia GPU, please follow the steps below:

sudo apt install --install-suggests nvidia-driver-550-server
sudo apt install nvidia-utils-550-server

In case of any issues, please follow the Nvidia GPU driver install steps.

Note: The supported Nvidia driver version compatible with the packages inside the Docker container is:

  • Driver Version: 550.90.07

  • CUDA Version: 12.4
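You can verify the installed driver version and the CUDA version it supports with nvidia-smi:

nvidia-smi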

2. Install Intel Tiber™ Broadcast Suite#

Option #1: Build Docker image from Dockerfile using build.sh script#

Note: This method is recommended over building the image manually (Option #4): layers are built in parallel and cross-platform builds are possible.

Access the project directory:

cd Intel-Tiber-Broadcast-Suite

Run the build.sh script:

Note: For the build.sh script to run without errors, docker-buildx-plugin must be installed. Without the plugin, the error message does not mention the missing plugin; it only reports that the flags are incorrect. See chapter 1.2.1. Install Docker build environment for installation details.

./build.sh

Option #2: Local installation from Debian packages#

You can install the Intel® Tiber™ Broadcast Suite locally on bare metal. This installation allows you to skip installing Docker altogether.

./build.sh -l

Option #3: Install Docker image from Docker Hub#

Visit the Intel® Tiber™ Broadcast Suite image page on Docker Hub (https://hub.docker.com/r/intel/intel-tiber-broadcast-suite/) to select the most appropriate version.

Pull the Intel® Tiber™ Broadcast Suite image from Docker Hub:

docker pull intel/intel-tiber-broadcast-suite:latest
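Later examples in this guide reference the image as video_production_image (the tag produced by the build options above). If you use the Docker Hub image instead, you can retag it to match:

docker tag intel/intel-tiber-broadcast-suite:latest video_production_image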

Option #4: Build Docker image from Dockerfile manually#

Note: The method below does not require buildx, but it lacks cross-platform build support and may prolong the build process.

Download, Patch, Build, and Install DPDK from source code.

  1. Download and Extract DPDK and MTL:

    # Create the target directories first; tar -C does not create them:
    . versions.env && mkdir -p ${HOME}/Media-Transport-Library && curl -Lf https://github.com/OpenVisualCloud/Media-Transport-Library/archive/refs/tags/${MTL_VER}.tar.gz | tar -zx --strip-components=1 -C ${HOME}/Media-Transport-Library

    . versions.env && mkdir -p dpdk && curl -Lf https://github.com/DPDK/dpdk/archive/refs/tags/v${DPDK_VER}.tar.gz | tar -zx --strip-components=1 -C dpdk
    
  2. Apply Patches from Media Transport Library:

    # Apply patches:
    . versions.env && cd dpdk && git apply ${HOME}/Media-Transport-Library/patches/dpdk/$DPDK_VER/*.patch
    
  3. Build and Install DPDK:

    # Prepare the build directory:
    meson build
    
    # Build DPDK:
    ninja -C build
    
    # Install DPDK:
    sudo ninja -C build install
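    # Refresh the shared library cache so the newly installed DPDK libraries
    # are found at runtime (a common post-install step, added here as a hint;
    # it is not part of the original instructions):
    sudo ldconfig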
    
  4. Clean Up:

    cd ..
    rm -rf dpdk
    

Build image using Dockerfile:

docker build $(cat versions.env | xargs -I {} echo --build-arg {}) -t video_production_image -f Dockerfile .
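The command substitution turns each KEY=VALUE line of versions.env into a --build-arg flag. For illustration, if versions.env defines MTL_VER and DPDK_VER (values shown as placeholders), the command expands to:

docker build --build-arg MTL_VER=<version> --build-arg DPDK_VER=<version> -t video_production_image -f Dockerfile .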

The number of cores used by make during the build can be changed with --build-arg nproc={number of cores}:

docker build $(cat versions.env | xargs -I {} echo --build-arg {}) --build-arg nproc=1 -t video_production_image -f Dockerfile .

Build the MTL Manager Docker image:

cd ${HOME}/Media-Transport-Library/manager
docker build --build-arg VERSION=1.0.0.TIBER -t mtl-manager:latest .
cd -

3. (Optional) Install Media Communications Mesh Media Proxy#

To use Media Communications Mesh as the transport layer, make sure that the Media Communications Mesh Media Proxy is available on the host.

To install the Media Communications Mesh Media Proxy, please follow the steps below.

Note: This step is required, for example, for the Media Communications Mesh Media Proxy pipeline.

Option #2: Local installation#

  1. Clone the Media Communications Mesh repository

    git clone https://github.com/OpenVisualCloud/Media-Communications-Mesh.git
    cd Media-Communications-Mesh
    
  2. Install Dependencies

    • gRPC: Refer to the gRPC documentation for installation instructions.

    • Install required packages:

      • Ubuntu/Debian

        sudo apt-get update
        sudo apt-get install libbsd-dev cmake make rdma-core libibverbs-dev librdmacm-dev dracut
        
      • CentOS Stream

        sudo yum install -y libbsd-devel cmake make rdma-core libibverbs-devel librdmacm-devel dracut
        
    • Install the irdma driver and libfabric:

      ./scripts/setup_rdma_env.sh install
      

    • Reboot the system after installing the driver.

[!TIP] More information about libfabric installation can be found in Building and installing libfabric from source.

  3. Build the Media Communications Mesh Media Proxy binary

    ./build.sh
    

4. Preparation to run Intel Tiber™ Broadcast Suite#

4.1. First run script#

Note: first_run.sh needs to be run after every reboot of the machine.

From the root of the Intel® Tiber™ Broadcast Suite repository, execute the first_run.sh script, which sets up hugepages, locks for MTL, and the E810 NIC’s virtual functions, and runs the MtlManager Docker container:

sudo -E ./first_run.sh | tee virtual_functions.txt

Note: Please ensure the command is executed with the -E switch so that all the necessary environment variables are passed along. Without the switch, the script may fail silently.

When running the Intel Tiber™ Broadcast Suite locally, execute first_run.sh with the -l argument.

sudo -E ./first_run.sh -l | tee virtual_functions.txt

This script will start the Mtl Manager locally. To avoid issues with core assignment in Docker, ensure that the Mtl Manager is running. It typically runs within a Docker container, but the -l argument allows it to be executed directly from the terminal.

Note: Ensure that MtlManager is running when using the Intel Tiber™ Broadcast Suite locally. You can check this by running pgrep -l "MtlManager". If it is not running, start it with the command sudo MtlManager.
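A quick check-and-start sketch (assuming the MtlManager binary is on the PATH):

# Start MtlManager only if it is not already running:
if ! pgrep "MtlManager" > /dev/null; then
    sudo MtlManager &
fi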

Note: To avoid unnecessary reruns, preserve the command’s output in a file so you can check which interface was bound to which virtual functions.

4.2. Test docker installation#

docker run --rm -it --user=root --privileged video_production_image --help

4.3. Test local installation#

ffmpeg --help
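To additionally confirm which FFmpeg build is on the PATH, print the version and configuration banner (a standard FFmpeg flag):

ffmpeg -version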

5. Running the image#

Go to the Running Intel® Tiber™ Broadcast Suite Pipelines instructions for more details on how to run the image.