A Guide to Encoding and the Pain Points of Low Latency Streaming

From Video Encoding Basics to Optimizing Streaming Workflows

Video streaming over the internet is gaining importance in many industries, including broadcast, enterprise, and government. It has become popular for a number of reasons, particularly because live video is a great way to contribute content and engage with consumers, employees, and the community. For broadcast engineers, video streaming over the internet is a cost-effective and flexible alternative to satellite services.

For AV professionals, video streaming, if correctly implemented, can be an efficient and flexible means of communicating across an organization. Flexibility is not only important for content creation and keeping up with demand, but also for scalability and business continuity.

Video streaming begins with video encoding. For content creators, video encoding can be the most important part of a workflow, which is why it is essential to have a solid grasp of the basics before embarking on video streaming.

This guide will explore the principal concepts of video encoding and streaming, including compression, codecs, latency, and network transport considerations when streaming from encoders.

The best way to identify your encoding priorities is to first understand the use case and end goal for your live video stream – what is it needed for, for whom, and what will be the measure of its success? By establishing these priorities, one can review the four factors that make up a successful live video streaming experience:

• Quality

• Bandwidth

• Security and Reliability

• Latency

Blankom HDE-265 HDMI Encoder

AN INTRODUCTION TO VIDEO ENCODING

What is Video Encoding?

Video encoding is the process of compressing raw video for transport over IP networks such as office LANs and the internet. As IP networks have limited bandwidth, the encoder needs to be able to compress the content accordingly. There are two types of video encoding: file-based and live, and it’s important to make the distinction between them.

When working with video files, encoders are used to compress and reduce the size of video content so that it can take up less storage space and be easier to transfer from one part of a video production workflow to another. Since the video files are not live, latency is usually not a key concern.

Live video encoding is the process of compressing real-time video and audio content prior to streaming. Compression significantly reduces the bandwidth required, making it possible for real-time video to be transmitted across constrained networks while maintaining picture quality at levels suitable for viewing. However, depending on the type of encoder used, compressing live video can also add latency which, if too great, can negatively impact the overall quality of experience.
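To make the bandwidth savings concrete, here is a back-of-the-envelope calculation. The 1080p60 source format and the 8 Mbps stream bitrate are illustrative assumptions, not figures from any particular encoder:

```python
def raw_bitrate_bps(width, height, fps, bits_per_pixel):
    """Bitrate of uncompressed video, in bits per second."""
    return width * height * fps * bits_per_pixel

# 1080p60 with 4:2:2 10-bit sampling averages 20 bits per pixel
raw = raw_bitrate_bps(1920, 1080, 60, 20)

# An assumed 8 Mbps H.264 contribution stream
encoded = 8_000_000

ratio = raw / encoded
print(f"raw: {raw / 1e9:.2f} Gbps, compression ratio ~{ratio:.0f}:1")
# raw: 2.49 Gbps, compression ratio ~311:1
```

A roughly 300:1 reduction is why uncompressed video, which would saturate even a gigabit LAN, can travel over an ordinary internet connection once encoded.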

Decoding and Transcoding: A Brief Overview

Video decoding is essentially the opposite of encoding: it is the process of uncompressing encoded video. A decoder can output uncompressed video over SDI for further video processing or over HDMI for display directly on a screen.

Decoders can also extract embedded audio tracks for sound production. Embedded metadata can be passed on by the decoder to other production components for information on video formatting, time codes, subtitles, and closed captioning.

Synchronizing Feeds

Some decoders support multiple incoming streams and can resync them based on timecode prior to decoding to SDI. This is especially useful for live broadcasts with multiple camera angles that share an audio source.

For live video, it is imperative that video decoders add as little latency as possible in order to minimize the impact on production and provide a broadcast quality experience.
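To see why every stage matters, a glass-to-glass latency budget can be tallied as a simple sum. The per-stage figures below are hypothetical placeholders, not measurements of any real product; the point is that the decoder's contribution adds directly to the end-to-end total:

```python
# Illustrative glass-to-glass latency budget in milliseconds.
# All figures are assumptions for the sake of the example.
budget_ms = {
    "capture": 17,            # roughly one frame period at 60 fps
    "encode": 50,
    "network_transport": 80,
    "receive_buffer": 200,    # jitter/retransmission buffer
    "decode": 50,
    "display": 17,
}

total = sum(budget_ms.values())
print(f"end-to-end latency ~ {total} ms")
# end-to-end latency ~ 414 ms
```

Shaving milliseconds off any one stage, including decoding, lowers the total directly, which is why low-latency decoders matter for broadcast-quality production.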

Video transcoding is the process of converting an already encoded stream from one format to another, or from one size to another. Most transcoders use a two-step process of decoding and re-encoding. Video transcoding is commonly used for enabling OTT (over the top) internet streaming services with a high quality source or mezzanine video transcoded into a cascade of different bitrates and resolutions. These multiple video transcodes or profiles are needed for ABR (adaptive bitrate) streaming which adapts picture quality in real-time based on available bandwidth. This enables a single video source to be delivered to different viewing devices including connected televisions, computers, and smartphones.
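The rendition-selection logic at the heart of ABR can be sketched in a few lines. The bitrate ladder and the 20% safety headroom below are hypothetical; real players use more sophisticated throughput estimation and buffer-based heuristics:

```python
# Hypothetical ABR ladder: (vertical resolution, bitrate in kbps), highest first.
LADDER = [(1080, 6000), (720, 3000), (480, 1500), (360, 800), (240, 400)]

def pick_rendition(measured_kbps, ladder=LADDER, headroom=0.8):
    """Choose the highest-quality profile whose bitrate fits within
    a fraction (headroom) of the measured network throughput."""
    budget = measured_kbps * headroom
    for height, kbps in ladder:
        if kbps <= budget:
            return (height, kbps)
    # Nothing fits: fall back to the lowest-bitrate profile.
    return ladder[-1]

print(pick_rendition(4500))   # (720, 3000): 4500 * 0.8 leaves a 3600 kbps budget
```

Because the transcoder has already produced every profile in the ladder, the player can switch renditions mid-stream as measured throughput changes, trading picture quality for uninterrupted playback.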

Video Encoders

There are two types of video encoders: software-based and hardware-based.

Software encoders can be installed on standard off-the-shelf hardware or run as virtual machines (VMs) in data centers and cloud platforms. Although software can be a great option for encoding file-based video content, depending on the computer hardware it runs on, it does not always achieve the ultra-low latency of dedicated hardware encoders and is therefore not always suitable for live broadcast contribution applications.

Hardware encoders are turnkey devices with dedicated processing power for low latency encoding of video streams. Whereas software encoders have to share CPU and other resources, hardware encoders can use purpose-designed micro-processing chips and can therefore encode and stream live video with very little latency.

Hardware video encoders are used by a wide range of organizations for delivering pristine quality, low latency video for many different applications including:

• Broadcast – for backhaul, bi-directional interviews, return feeds, and remote production (REMI)

• Enterprise – for internet streaming of all-hands meetings, product training, and employee briefings, as well as IPTV and digital signage

• Defense – for mission-critical Intelligence, Surveillance and Reconnaissance (ISR) applications

(Source: Blankom.de / Downloads / White Papers)

