How to install FFmpeg with Hardware Acceleration on AWS

Mirza Bilal
7 min read

Introduction

Video is not just a story; it is the storyteller. It weaves narratives, captures moments, and connects us in ways words alone cannot. The ever-increasing appetite for video content across marketing, streaming and OTT platforms, social media, and mobile devices has driven up the demand for bandwidth and server capacity. Cloud media processing has revolutionized the media industry, making video content accessible far beyond previous boundaries. This continuously growing demand for multimedia, together with the rising prominence of deep learning, has driven industry leaders like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform to offer GPU-enabled instances tailored for parallel processing.

FFmpeg is an indispensable workhorse of multimedia processing, consistently delivering excellence in audio-video processing and transcoding. If you are new to FFmpeg or need to set up FFmpeg for optimal performance on the CPU, refer to the first article of this series, How to install FFmpeg on Linux from Source, where we introduced FFmpeg and provided a step-by-step guide to getting the most out of it on the CPU.

Now we will take a step further and look into how FFmpeg can leverage the power of hardware acceleration, which can significantly reduce processing time and help deliver content to users faster than ever before. This how-to guide will show you how to set up FFmpeg with hardware acceleration for transcoding and other multimedia tasks.

Problem

Traditional CPU-based video processing on cloud servers can be time-consuming and inefficient. As videos grow in resolution and size, the processing demand grows dramatically. Even with multi-core CPU configurations, the computational overhead can lead to extended processing times, delayed workflows, and increased costs. Given that cloud infrastructure providers charge per instance-hour, the longer your server runs, the more expensive your bill becomes. Ignoring GPU-powered machines means underutilizing hardware that is specifically designed for parallel computational tasks. Thus, there is a pressing need to harness the GPU's power to optimize FFmpeg's performance and make the most out of it.

The Remedy

Know your Hardware

The first step is to identify the GPU available for your target architecture. For the scope of this guide, we will focus on the Nvidia Tesla T4G and Tesla T4, available on AWS Graviton2-based G5g and Intel-based G4dn instances respectively, but you can follow the same steps for any Nvidia GPU by getting the appropriate driver from the Nvidia driver download center.

Update and Install System Utilities

The following snippet updates existing packages and installs the build tools and other utilities we will need. It uses a few variables that are defined in the complete script at the end of this guide.

echo "Installing utilities..."
dnf -y update
dnf -y groupinstall "Development Tools"
dnf install -y openssl-devel cmake3 amazon-efs-utils htop iotop yasm nasm jq
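
The snippets that follow also rely on a handful of environment variables. Their authoritative values live in the complete script at the end of this guide; the sketch below is just one plausible set of definitions to make the intermediate steps readable (all paths and flag values here are assumptions for illustration, not the script's exact contents).

export USR_LOCAL_PREFIX=/usr/local               # assumed install prefix for FFmpeg, codecs, and CUDA
export CUDA_HOME="$USR_LOCAL_PREFIX/cuda-12.2"   # matches the --toolkitpath used in the CUDA step below
export SRC_DIR=/opt/src                          # assumed location where sources and archives are downloaded
export LOCAL_TMP=/opt/tmp                        # assumed scratch space for the Nvidia installers
export CPUS=$(nproc)                             # parallel build jobs for make
# Flags handed to FFmpeg's configure step for the Nvidia components (assumed values).
export NVIDIA_CFLAGS="-I$CUDA_HOME/include"
export NVIDIA_LDFLAGS="-L$CUDA_HOME/lib64"
export NVIDIA_FFMPEG_OPTS="--enable-cuda-nvcc --enable-cuvid --enable-nvenc --enable-nvdec --enable-libnpp"
mkdir -p "$SRC_DIR" "$LOCAL_TMP"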

Set up the Nvidia GPU, CUDA, and cuDNN

Since the scope of this article is to demonstrate a working GPU-powered FFmpeg on AWS, the following snippet checks whether we are on a Graviton2-powered G5g instance, which is based on the ARM/AArch64 architecture, or an Intel-based G4dn instance, and then installs the Nvidia driver, CUDA toolkit, and cuDNN archive appropriate for that architecture.

if [ "$(uname -m)" = "aarch64" ]; then
    echo "System is running on ARM / AArch64"
    DRIVE_URL="https://us.download.nvidia.com/tesla/535.104.05/NVIDIA-Linux-aarch64-535.104.05.run"
    CUDA_SDK_URL="https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux_sbsa.run"
    CUDNN_ARCHIVE_URL="https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-sbsa/cudnn-linux-sbsa-8.9.5.29_cuda12-archive.tar.xz"
else
    DRIVE_URL="https://us.download.nvidia.com/tesla/535.104.05/NVIDIA-Linux-x86_64-535.104.05.run"
    CUDA_SDK_URL="https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run"
    CUDNN_ARCHIVE_URL="https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.5.29_cuda12-archive.tar.xz"
fi
echo "Setting up GPU..."
DRIVER_NAME="NVIDIA-Linux-driver.run"
wget -O "$DRIVER_NAME" "$DRIVE_URL"
TMPDIR=$LOCAL_TMP sh "$DRIVER_NAME" --disable-nouveau --silent

CUDA_SDK="cuda-linux.run"
wget -O "$CUDA_SDK" "$CUDA_SDK_URL"
TMPDIR=$LOCAL_TMP sh "$CUDA_SDK" --silent --override --toolkit --samples --toolkitpath=$USR_LOCAL_PREFIX/cuda-12.2 --samplespath=$CUDA_HOME --no-opengl-libs

CUDNN_ARCHIVE="cudnn-linux.tar.xz"
EXTRACT_PATH="$SRC_DIR/cudnn-extracted"
mkdir -p "$EXTRACT_PATH"

wget -O "$CUDNN_ARCHIVE" "$CUDNN_ARCHIVE_URL"
tar -xJf "$CUDNN_ARCHIVE" -C "$EXTRACT_PATH"
CUDNN_INCLUDE=$(find "$EXTRACT_PATH" -type d -name "include" -print -quit)
CUDNN_LIB=$(find "$EXTRACT_PATH" -type d -name "lib" -print -quit)
cp -P "$CUDNN_INCLUDE"/* $CUDA_HOME/include/
cp -P "$CUDNN_LIB"/* $CUDA_HOME/lib64/
chmod a+r $CUDA_HOME/lib64/*
ldconfig

By now, you should have a working system with the Nvidia device driver, the CUDA toolkit, and cuDNN installed, which you can verify by running nvidia-smi in the terminal.
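
For example, a quick sanity check of the driver and the CUDA toolkit (assuming the variables defined earlier) looks like this:

nvidia-smi                        # should list the Tesla T4 / T4G and the installed driver version
"$CUDA_HOME"/bin/nvcc --version   # should report CUDA 12.2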

Install FFmpeg Prerequisites

Before diving into FFmpeg's installation, it's important to understand and set up its dependencies, so that FFmpeg can handle most common processing tasks across different formats, and so that you know how to expand or limit the scope of your FFmpeg build for your particular multimedia workload.

System Dependencies

FFmpeg is also a great tool for text rendering in videos. To enable this, we need to install the libraries used for text shaping and font handling, which power features such as burning in subtitles or stamping your brand's watermark onto a video. The following are the essential system libraries FFmpeg requires:

dnf install -y freetype-devel fribidi-devel harfbuzz-devel fontconfig-devel bzip2-devel
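
As a quick taste of what these libraries enable once FFmpeg is built later in this guide, burning subtitles into a video could look like the sketch below (input.mp4 and subs.srt are placeholder file names):

# Render subs.srt onto the video frames via libass (enabled in the FFmpeg build below).
ffmpeg -i input.mp4 -vf "subtitles=subs.srt" -c:a copy subtitled.mp4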

Installing Audio & Video Codecs

At its core, FFmpeg functions like a versatile framework, ready to be extended with various plugins and modules. One of the primary ways this extensibility shines is through its support for many audio and video codecs. These codecs are the building blocks that allow FFmpeg to decode and encode media in various formats.

Now we will install the different codec libraries. FFmpeg is deliberately designed so that anyone can write such a library, such as a video codec, and plug it into FFmpeg:

  1. ffnvcodec: Since the target instance is GPU-powered, the ffnvcodec headers enable FFmpeg to leverage NVIDIA GPU acceleration (NVENC/NVDEC), significantly speeding up video processing for AV1, H.264, and HEVC/H.265 (a minimal install sketch follows this list).

  2. libass: FFmpeg uses libass as its subtitle renderer, so it can handle subtitles in a variety of formats.

  3. libaom (AV1 Codec Library): FFmpeg uses libaom to decode and encode AV1 video in software, in case you want CPU-based AV1 transcoding.

  4. Other Codecs: These include libraries for various audio and video formats (e.g., libmp3lame for MP3 audio, opus for the Opus audio codec, and libogg for the Ogg container). In this build we will also enable software encoding of H.264 and HEVC/H.265 using the x264 and x265 libraries.
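
For ffnvcodec specifically, the headers come from Nvidia's nv-codec-headers repository and only need to be installed where pkg-config can find them. A minimal sketch, assuming the $SRC_DIR and $USR_LOCAL_PREFIX variables from earlier:

# Install the ffnvcodec (nv-codec-headers) headers and pkg-config file.
cd "$SRC_DIR"
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers
make install PREFIX="$USR_LOCAL_PREFIX"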

With all these prerequisites installed and correctly set up, FFmpeg can expose its full set of features, and you can get the performance you need out of this great utility.

Installing FFmpeg

Finally, we will compile GPU-accelerated FFmpeg from source, with support enabled for all the codecs and libraries we compiled or installed in the previous steps.

# Download, unpack, and configure the latest FFmpeg snapshot.
wget https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 &&
tar -jxf ffmpeg-snapshot.tar.bz2 &&
pushd ffmpeg &&
PKG_CONFIG_PATH="$USR_LOCAL_PREFIX/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig:/usr/lib/pkgconfig" \
    ./configure \
    --prefix="$USR_LOCAL_PREFIX" --disable-static --enable-shared \
    --extra-cflags="-I$USR_LOCAL_PREFIX/include $NVIDIA_CFLAGS" \
    --extra-ldflags="-L$USR_LOCAL_PREFIX/lib $NVIDIA_LDFLAGS" \
    --extra-libs='-lpthread -lm' --bindir="$USR_LOCAL_PREFIX/bin" \
    --enable-gpl --enable-libaom --enable-libass --enable-libfdk-aac \
    --enable-libfreetype --enable-libmp3lame --enable-libopus \
    --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 \
    --enable-nonfree --enable-openssl $NVIDIA_FFMPEG_OPTS &&
make -j $CPUS &&
make install
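
Because this build uses shared libraries (--disable-static --enable-shared), you may also need to tell the dynamic linker and your shell where the new binaries and libraries live. One way to do that, assuming the same $USR_LOCAL_PREFIX as above:

# Register the freshly installed shared libraries and put ffmpeg on the PATH.
echo "$USR_LOCAL_PREFIX/lib" > /etc/ld.so.conf.d/ffmpeg.conf
ldconfig
export PATH="$USR_LOCAL_PREFIX/bin:$PATH"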

Verifying GPU Acceleration Support in FFmpeg

Once you've set up FFmpeg with GPU acceleration capabilities, it's a good idea to ensure that it recognizes and can utilize the GPU for tasks like video encoding and decoding. The command below checks for the presence of NVIDIA's NVENC (NVIDIA Video Encoder) in the list of supported codecs within FFmpeg:

ffmpeg -hide_banner -codecs | grep nvenc

If you see the output codecs with the nvenc label (e.g., h264_nvenc, hevc_nvenc, av1_nvenc), it indicates that FFmpeg has been correctly configured to support GPU-accelerated encoding for those specific codecs using NVIDIA's hardware.
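
You can also list the hardware acceleration methods and the Nvidia-specific encoders your binary was built with:

ffmpeg -hide_banner -hwaccels                # the list should include cuda
ffmpeg -hide_banner -encoders | grep nvenc   # h264_nvenc, hevc_nvenc, ...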

The Complete Script

Building upon the segments we've discussed, here is the consolidated FFmpeg installation script, including the Nvidia GPU setup. The script has been thoroughly tested on Amazon Linux 2023 and Ubuntu 22.04 running on AWS G5g instances, but it is not limited to those: it is written to serve both ARM/AArch64 and x86_64 architectures, whether you're on Debian, Red Hat, CentOS, Fedora, openSUSE, or Alpine. Dive in and bring the power of multimedia to every corner of the Linux world.

Hands-On: Converting H.264 to H.265 with GPU Acceleration

In this section, we'll take a hands-on approach and convert an H.264-encoded video to H.265 using NVIDIA's GPU. This lets us test the installation and experience GPU acceleration in action. To perform this conversion, and scale the output to 1080p along the way, use the following command:

ffmpeg -y -hide_banner -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -vf "scale_cuda=1920:1080" -c:v hevc_nvenc output.mp4

In this command:

  • -hwaccel cuvid and -c:v h264_cuvid enable GPU-based decoding of the H.264 input and keep the decoded frames in GPU memory.

  • -vf "scale_cuda=1920:1080" scales the video using GPU acceleration.

  • -c:v hevc_nvenc instructs FFmpeg to encode the output video using the H.265 codec with NVIDIA's GPU acceleration.

After executing the command, you'll get output.mp4, an H.265-encoded 1080p video.
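
To get a feel for the speedup, you can time a CPU-only encode against the GPU path on the same file. A rough comparison sketch (input.mp4 is the same placeholder file; actual numbers depend on the instance type and the content):

# CPU-only H.265 encode using libx265
time ffmpeg -y -hide_banner -i input.mp4 -vf "scale=1920:1080" -c:v libx265 cpu_output.mp4
# GPU-accelerated H.265 encode using NVDEC + NVENC
time ffmpeg -y -hide_banner -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -vf "scale_cuda=1920:1080" -c:v hevc_nvenc gpu_output.mp4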

Conclusion

Integrating FFmpeg with GPU hardware acceleration on AWS, or on any other infrastructure provider or architecture, not only optimizes costs and reduces processing times but also future-proofs your workflow for higher-resolution media processing tasks down the line. By compiling from source we also avoid outdated libraries and generic packages that are not fully optimized for your architecture, as discussed in "How to install FFmpeg on Linux from Source" and, in some aspects, in "Why your AWS Deep Learning AMI is Holding you back and how to fix it" as well. The evolution of multimedia is ceaseless, but with tools like FFmpeg combined with AWS's GPU prowess and cost-effectiveness, you are well-equipped to handle whatever challenges come next. Whether you're a media company looking to scale or a hobbyist aiming for maximum efficiency, GPU-accelerated FFmpeg on AWS is a game-changer.


Written by

Mirza Bilal

Seasoned technology leader with 17+ years of experience, driving startups to industry leadership. Committed to technical excellence, innovation, and community engagement for the advancement of technology and education.