- January 21, 2021
- 2 minute read
Welcome back to the second Article of our Streaming MiniSeries. In the previous, opening Article, we introduced Streaming as a high-level concept in the Delivery panorama and focused on Streaming via RTMP, an Adobe proprietary Protocol.
Today, our technical journey into proprietary Streaming technologies continues by taking a closer look at HLS, also known as HTTP Live Streaming: Apple's own Protocol, to whose diktats the rest of the industry has had to comply. This one is peculiar, as it is one of the few proprietary technologies compatible with all iDevices, and its adoption since introduction has been pretty wide. Without further ado, let's have a deeper look into it.
To begin with, the name HLS, or HTTP Live Streaming, is a misnomer: HLS can stream both pre-recorded (On Demand) and Live video and audio. Setting aside this obviously confusing nomenclature (not that Apple ever put much effort into reasonable Marketing Product names), let us look at this Streaming Technology. HLS was developed by Apple to serve its range of Mobiles and deliver video and audio to iDevices (running Apple's proprietary iOS 3.0 or later, or the Safari 4.0 or later Browser).
It is interesting to note that Apple has always adopted Technologies which differ from the mainstream. When Microsoft Windows was the de facto Operating System, Apple came up with Mac OS for its Mac Computers. It was natural that Apple would continue its romance with its own creations, and so when the time came for Streaming, it adopted HLS as its primary vehicle. Given how wildly successful and widely adopted the iPhone and iPad are as Platforms, video and audio content developers have shown considerable interest in using the HLS standard to deliver Streaming content.
Though there have been attempts by Google, through its Android Devices, and by others to provide Streaming via HLS, the experience so far has been rather mixed and far from a "total success". It is not a daring statement to say that HLS is primarily a Streaming Technology meant for Apple devices.
We must say kudos to Apple for providing extensive Documentation and Tools to enable developers to adopt HLS. Like (almost) any other Streaming Technology, HLS accomplishes its task by breaking the video file down into smaller segments (also called chunks; the process is hence called "chunking"). Since it is an adaptive Streaming process, the Bitrates must accommodate and adapt to the channel Bandwidth and other parameters of the media Player. This means that there is an element (the "Manifest File", the one ending in ".m3u8", as opposed to the raw content chunks carried in ".ts" files) embedded in the Stream which communicates all available Bitrates from the Media Server to the Player. Therefore, an HLS stream essentially consists of two channels – content and signal.
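To make the Manifest less abstract, here is what a minimal media playlist (the ".m3u8" file) looks like; the segment filenames and durations below are purely illustrative, while the tag names are the ones defined by the HLS format:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
```

Each `#EXTINF` line declares the duration of the ".ts" chunk named on the following line; the `#EXT-X-ENDLIST` tag marks the playlist as complete, which is what distinguishes an On Demand playlist from a Live one.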
Conceptually, HTTP Live Streaming’s Delivery workflow consists of three parts: the Server, the Distribution Component and the Client Software.
The Server takes care of preparing the media for distribution by encoding the original video/audio file and formatting it for Delivery.
Since HLS is based on the HTTP protocol, its Distribution happens through an off-the-shelf Web Server, which is responsible for distributing the formatted video.
The third architectural Component of HLS is the Client Software, which performs the task of ascertaining the required media, downloading it and restoring the original content before streaming it to the End User. This software is part of the iOS 3.0+ Operating System on Apple devices and of Safari 4.0+ for Web Browsers.
Typically, in a Live Streaming context, the Server receives raw video and encodes it as H.264 video and AAC audio using a Media Encoder, outputting an MPEG-2 Transport Stream. A Stream Segmenter, in turn, breaks this stream into segments (or chunks, file extension ".ts" as mentioned above). It also creates an index file (or Manifest, file extension ".m3u8") which stores information on the segments' numbering and distribution across Bitrate renditions. The index file and the associated segments are then placed on a Web Server. The Client Software recreates a continuous media stream by reading the index file and reassembling the video/audio segments.
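The Client side of this workflow boils down to reading the Manifest and fetching the chunks it lists, in order. The following is a minimal sketch, not a production parser: it handles only the `#EXTINF` tag, and the playlist text embedded in it is illustrative.

```python
# Minimal sketch of what an HLS Client does with the Manifest: parse the
# ".m3u8" index to recover each segment's duration and URI, in playback order.
# Only the #EXTINF tag is handled here; a real client handles many more tags.

def parse_media_playlist(text):
    """Return a list of (duration_seconds, segment_uri) tuples."""
    segments = []
    pending_duration = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:10.0," -> the duration precedes the segment URI line
            pending_duration = float(line[len("#EXTINF:"):].rstrip(",").split(",")[0])
        elif line and not line.startswith("#"):
            segments.append((pending_duration, line))
            pending_duration = None
    return segments

# Illustrative playlist, mirroring the structure described above
playlist = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXT-X-ENDLIST"""

print(parse_media_playlist(playlist))
# -> [(10.0, 'segment0.ts'), (10.0, 'segment1.ts')]
```

A real player would then issue plain HTTP GET requests for each URI and feed the ".ts" data to its decoder, which is precisely why an off-the-shelf Web Server suffices for Distribution.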
You must recall that the strength of HLS lies in Adaptive Streaming. This is all the more important because HLS lives in a Mobile Devices' world, where available bandwidth may vary greatly from one moment to the next, say when switching from a cellular to a Wi-Fi connection, or from 3G to EDGE. Needless to say, the turn-of-the-Century approach of a single-Bitrate segment Stream with an associated index file would be totally inadequate for today's requirements (and would make End Users connecting via Satellite from the Sahara Desert very unhappy). Therefore, a bit of complexity needs to be introduced into the basic structure discussed above.
One of the ways to handle alternate stream renditions is to generate and store several sets of segment files and deliver only a single, compatible stream. This is exactly how HLS handles Adaptive Streaming. Chances are, you may now be wondering the following: "Doesn't having a number of different sets of segments put a strain on Server resources?" The answer is yes, of course. You have to store different renditions of the files, which means more Storage on your Server's part. Imagine having to keep thousands of video files for a video On Demand service: in such a case, storing several versions of the same original video can become a serious issue.
In HLS, the master index (or Manifest) file contains information about the alternate index files and their related segments. Several alternate index files with segments are available, but only one of them is streamed, depending on the current Bandwidth of the receiver. In an On Demand environment, the master index file is downloaded only once, at the beginning of the streaming session, when the first alternate index file along with its respective segments is streamed. Later, any of the alternates can be chosen by the receiver depending on the available bandwidth. In a Live Environment, instead, the Manifest file is updated every time a new Live chunk is produced and needs to be referenced, so that Clients can keep on enjoying the "on air" event.
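The selection step described above can be sketched in a few lines: parse the master playlist's `#EXT-X-STREAM-INF` entries (the tag HLS uses to declare variant streams and their `BANDWIDTH`), then pick the highest Bitrate that fits the currently measured Bandwidth. The playlist contents and the three-rendition ladder below are illustrative, not a recommendation.

```python
# Sketch of the rendition-selection step on the Client side. Tag names come
# from the HLS format; the variant URIs and bandwidth values are made up.
import re

MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
high/index.m3u8"""

def parse_master(text):
    """Return (bandwidth_bps, uri) pairs, one per variant stream."""
    variants = []
    pending_bw = None
    for line in text.splitlines():
        m = re.search(r"#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)", line)
        if m:
            pending_bw = int(m.group(1))
        elif line and not line.startswith("#"):
            variants.append((pending_bw, line))
            pending_bw = None
    return variants

def pick_variant(variants, measured_bps):
    """Highest-Bitrate variant that fits the pipe; lowest one as a fallback."""
    fitting = [v for v in variants if v[0] <= measured_bps]
    return max(fitting) if fitting else min(variants)

print(pick_variant(parse_master(MASTER), 2_000_000))
# -> (1400000, 'mid/index.m3u8')
```

Real players use smarter heuristics (buffer occupancy, throughput smoothing), but the core idea is exactly this: the master index advertises, the Client chooses.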
Each content chunk or segment accounts for a given playback time on the End User's part, as you may know by now. This Setting – like most other ones – is configurable and, as always, there are different views regarding what the "ideal" playback duration of any given chunk should be.
Apple's recommended duration for the segments (chunks) used in HLS is 10 seconds. Though Apple claims that this enables optimum caching for a CDN, doubts have been raised by some tech-heads (including us). Our opinion is that ten-second segments may be too long for efficient caching when a CDN is adopted for the Delivery of a Live Event.
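A back-of-envelope calculation illustrates why segment duration matters so much for Live Events. Assuming (our assumption, not a figure from Apple) that a player buffers roughly three segments before it starts rendering, the chunk duration translates directly into how far behind "live" the End User sits:

```python
# Rough latency sketch: with HLS, a player typically buffers a few whole
# segments before playback starts, so longer chunks mean higher live latency.
# The three-segment buffer is an assumption for illustration purposes.
def approx_latency(segment_seconds, buffered_segments=3):
    return segment_seconds * buffered_segments

print(approx_latency(10))  # 10 s chunks -> roughly 30 s behind the Live edge
print(approx_latency(4))   # shorter chunks cut latency, at some caching cost
```

This trade-off (latency versus CDN cache efficiency and request overhead) is exactly the tension behind the differing views on the "ideal" chunk duration.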
The subject of HLS is vast and requires far more discussion than could be presented here. However, we have covered the essentials of HLS, which was our intention; further Posts will cover HLS as a Streaming Technology much more in depth, especially as related to a CDN sitting in the middle for Delivery, with its own challenges. To wrap it up –