Video QoE is of vital importance to operators, and not prioritizing video quality can rob them of both revenue and viewers. [Updated]
First off, some history. On 28 April 2019, the third episode of the last season of Game of Thrones, ‘The Long Night’, aired. Featuring the longest battle sequence ever filmed, shot over 55 grueling nights of a Northern Irish winter, it was also some of the most expensive television ever produced. It should have been a triumph for broadcaster HBO, and in many senses it was, but the immediate debate surrounding the episode’s initial screening was less about the multiple plot twists, resolutions, and sheer underlying spectacle, and more about the fact that viewers really couldn’t make out what was going on.
It’s an instructive tale, which we examine in a bit more detail at the end of this post, and one that highlights just how important Quality of Experience (QoE) is in the modern broadcast industry. In an increasingly competitive environment, keeping viewers happy and engaged is of primary importance.
How is the industry doing at this? Not badly, but there is room for improvement. The latest figures show overall progress in both picture quality and start times, but drilling down into the statistics highlights issues: Smart TV QoE is improving year on year, while on mobile and desktop it is going backwards; geographically, Africa suffers from an almost 5% video start failure rate, while service quality in Asia declined across the board at the end of 2020, with viewers contending with worse buffering, start times, and start failures.
Video QoE: the problem of buffering
We’ve talked recently about the problems of latency in OTT, which is an increasing issue as services start to pivot towards offering more live sports and events. Standard linear broadcast latency varies between 3.5 and 12 seconds, with satellite capable of being marginally quicker than cable. OTT latency, however, currently hovers around the 25-40 second mark.
A lot of the issues surrounding latency are caused by buffering. The HTTP Live Streaming (HLS) format devised by Apple, for instance, specifies a three-segment starting buffer. Historically those segments are six seconds long, adding 18 seconds to a stream on their own. The way that encoding operates adds another segment, taking us up to 24 seconds, and all of this is before we get to any first mile, distribution, and CDN delays. Buffering doesn’t only happen at the start of playback either; when it occurs mid-stream it is technically rebuffering.
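The segment arithmetic above can be sketched in a few lines. The constants follow the figures in the text (a three-segment starting buffer, six-second segments, one extra segment held back by the encoder); everything else — function and parameter names — is illustrative.

```python
SEGMENT_DURATION_S = 6       # historical HLS default segment length
STARTUP_BUFFER_SEGMENTS = 3  # HLS players buffer three segments before starting
ENCODER_SEGMENTS = 1         # a segment must be fully encoded before it is published

def hls_startup_latency(segment_s=SEGMENT_DURATION_S,
                        buffer_segments=STARTUP_BUFFER_SEGMENTS,
                        encoder_segments=ENCODER_SEGMENTS):
    """Latency contributed by segmentation alone, in seconds."""
    return segment_s * (buffer_segments + encoder_segments)

print(hls_startup_latency())             # 24 — the figure quoted above
print(hls_startup_latency(segment_s=2))  # 8 — why shorter segments help
```

The second call shows the lever that low-latency approaches pull: shrink the segments (or, in LL-HLS, publish partial segments) and the fixed multi-segment buffer stops dominating the delay.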
There is interesting work underway here involving Apple's new LL-HLS (the LL stands, unsurprisingly, for 'low latency'), but there is also a distinction to be made. While buffering hurts any OTT solution provider in terms of QoE at any stage of the process, it is the rebuffering while a stream is being viewed that really kills the figures. A 2017 IBM survey in the US suggested that 63% of consumers experience buffering issues, with only 18% reporting problems with delayed starts (though delayed starts have become more of an issue as more live events and sports appear on OTT and fall out of sync with social media etc.).
Akamai’s landmark Understanding the Value of Consistency in OTT Video Delivery report put some figures on the cost of buffering. Here are some key takeaways, some of which were mined from online video data from a leading (though unnamed) US network, and some of which are based on interviews with top execs:
- There is a direct relationship between rebuffering and abandonment
- Each instance of rebuffering results in a 1% abandonment rate
- Improving video QoE reduces churn, by 90% in one SVOD provider’s case
- Audiences currently make QoE allowances for cheap or free services; this attitude is not expected to last
- All buffering issues understandably increase with higher bandwidth material, i.e. UHD
Furthermore, Akamai crunches some numbers that illustrate the scale of the problem for large operators with hundreds of millions of plays. In that case a 1% abandonment rate can lead to millions of incomplete plays, the loss of hundreds of thousands of additional viewing hours, and further millions of lost ad impressions. Assuming a CPM of $8, Akamai estimates that this network could have lost over $85,000 in revenue for every instance of buffering.
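As a back-of-envelope sketch of that calculation: the CPM figure comes from the text, but the play count and impressions-per-play below are assumptions chosen purely to land near the report's order of magnitude, since the post does not reproduce Akamai's own inputs.

```python
CPM_USD = 8.0  # ad revenue per 1,000 impressions, per the post

def lost_revenue(total_plays, abandonment_rate, impressions_lost_per_play):
    """Ad revenue lost when one rebuffering event drives abandonment."""
    abandoned_plays = total_plays * abandonment_rate
    impressions_lost = abandoned_plays * impressions_lost_per_play
    return impressions_lost / 1000 * CPM_USD

# Assumed: 300M plays, 1% abandonment per rebuffer, ~3.6 impressions lost per abandoned play
print(f"${lost_revenue(300e6, 0.01, 3.6):,.0f}")  # $86,400 — same ballpark as the $85,000 figure
```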
Solving the problem of Video QoE
As the Game of Thrones story illustrates, when it comes to QoE not everything is in an individual broadcaster or operator’s hands. There are also many questions relating to general infrastructure developments and investments which need to be addressed by other parties. Nevertheless, there are actions that can be taken to minimize the problem.
First, work with technology companies that understand the importance of QoE. For instance, at VO we have worked hard to ensure that our DRM solutions do not contribute to the problem of latency by implementing proactive license acquisition. This is particularly relevant as more sports and live events feature on OTT services, as one million customers pressing play at the same time can introduce a lot of lag into the system.
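The idea behind proactive license acquisition can be sketched generically: fetch and cache the DRM license while the viewer is still browsing, so that pressing play does not trigger a license-server round trip at the worst possible moment. To be clear, this is not VO's actual implementation or API; all names below are hypothetical.

```python
import time

class LicenseCache:
    """Hypothetical sketch: pre-fetch DRM licenses ahead of playback."""

    def __init__(self, fetch_fn, ttl_s=3600):
        self._fetch = fetch_fn   # callable: content_id -> license payload
        self._ttl = ttl_s        # how long a cached license stays valid
        self._cache = {}         # content_id -> (license, expiry timestamp)

    def prefetch(self, content_id):
        """Call when content is surfaced in the UI, before playback starts."""
        self._cache[content_id] = (self._fetch(content_id),
                                   time.time() + self._ttl)

    def get(self, content_id):
        """Call at play time; falls back to a blocking fetch on a miss."""
        entry = self._cache.get(content_id)
        if entry and entry[1] > time.time():
            return entry[0]
        self.prefetch(content_id)  # cache miss or expired: fetch synchronously
        return self._cache[content_id][0]

cache = LicenseCache(fetch_fn=lambda cid: f"license-for-{cid}")
cache.prefetch("match-123")    # done in the background during browsing
print(cache.get("match-123"))  # instant at kick-off: no server round trip
```

In the live-sports scenario the post describes, the benefit is twofold: the individual viewer sees a faster start, and the license server sees its load spread over the browsing period rather than spiking when a million viewers press play at once.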
Second, invest in delivery. While the content budget is the one the industry tends to highlight, customers are very willing to abandon not just playback but entire services in the face of poor QoE, so ensuring consistently high-quality delivery is a must. HBO Go’s 5Mbps was found to be lacking in comparison to Amazon’s 10Mbps and, with UHD HDR set to become an increasing differentiator in the market, such bandwidth differences are only going to be exacerbated.
Third, look to new technologies. The Common Media Application Format (CMAF), for example, set out to change the delivery of adaptive bitrate streams and lead to much lower OTT latency, a more reliable QoE, and reduced CDN costs. Likewise, the movement behind LL-HLS is gathering steam, and holds out the promise of sub-2-second latencies.
Overall though, it should be remembered that video QoE is not an optional extra. Viewers increasingly expect it and are more than happy to vote with their feet and their wallets if it is not achieved.
So, what went wrong in Westeros?
For a series that has made its name by being dark in tone, headlines such as “Game of Thrones: Was The Long Night too dark?” have a certain irony to them. Shot and graded to be deliberately dark and claustrophobic, by the time ‘The Long Night’ had been squeezed down to an estimated 5Mbps for transmission by HBO Now and HBO Go, the result was a picture full of banding and artefacts. It was genuinely difficult to see what was going on at times, and complaints from viewers were vociferous. Less “For the night is dark and full of terrors” (one of the signature quotes from the series) and more “For the night is dark and full of pixelated blurs.”
What was interesting was when viewers compared notes on the different viewing options. US audiences streaming HBO via Amazon Channels fared better at around 10Mbps, and it looks like Amazon Channels might have had an uptick in subscribers as a result. In the UK, Sky Atlantic HD occupied the middle ground and broadcast the show at an acceptably murky 7Mbps. It was watchable, but compare the disk space taken up on the STB for the Sky broadcast: the live TX takes up only 3.5GB, while the download version for the catch-up service occupies 8.3GB, giving the STB decoder much more data from which to build up a decent picture.
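Those file sizes can be turned back into average bitrates with a quick calculation. The episode runtime used here (roughly 82 minutes) is an assumption, and GB is taken as decimal gigabytes:

```python
def avg_bitrate_mbps(size_gb, duration_min):
    """Average bitrate (Mbps) implied by a file size and a runtime."""
    bits = size_gb * 8e9             # decimal GB -> bits
    return bits / (duration_min * 60) / 1e6

print(f"{avg_bitrate_mbps(3.5, 82):.1f} Mbps")  # live TX recording: ~5.7 Mbps
print(f"{avg_bitrate_mbps(8.3, 82):.1f} Mbps")  # catch-up download: ~13.5 Mbps
```

On these assumptions the catch-up file works out at more than double the average bitrate of the live transmission, which is consistent with the visible difference in picture quality.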
As an aside, when the episode is eventually made available on Blu-ray, it will be played out at approximately 40Mbps, which will enable the home audience to finally see what the production team saw on their monitors in the grading suite. But by not guaranteeing the video QoE of the end product, unfortunately for HBO the picture quality of ‘The Long Night’ overshadowed, if you will pardon the pun, all the other things the episode did very, very right.