How VO and Google Cloud are working together to enable the hybrid cloud for video service providers

Inside the collaboration that’s seen us integrate our applications with Google Cloud Anthos to improve both flexibility and hybrid cloud performance for TV and video service providers.

In April we announced that we were embarking on an exciting new collaboration with Google Cloud that we hope will play a significant part in helping to streamline TV and video service deployments and operations. VO solutions are now integrated with Google Cloud Anthos, a fully managed Kubernetes-based platform that allows users to manage their data and applications in hybrid cloud environments, including on-premises, the public cloud, the private cloud, and multi-cloud. 

This is hugely important for the industry. While the cloud dominates conversations at all levels of the business, the reality is that fully cloud native deployments are still comparatively few. Few people doubt that they are the eventual future, but for a wide variety of reasons progress on cloud native deployments is patchy and the majority of RFPs in the industry still have varying degrees of on-prem components to them.

The upshot is that support for hybrid cloud environments is critical for both success now and for a successful transition to fully cloud native ecosystems in the future.

As part of the launch event for the new Google Cloud region in France, we co-presented a brief look at the future evolution of TV solutions in the industry alongside Google Cloud. The two presenters were Alain Nochimowski, VO’s CTO, and Nicolas Pintaux, Customer Engineer at Google Cloud France, both of whom have been involved in establishing the new partnership. In the blog post below, we reunite them to answer questions about the collaboration, what it means, and where industry development is heading.

Offering something new

Q: What does the collaboration between VO and Google Cloud bring to the table? What does it enable broadcasters and operators to do that they couldn’t before?

Alain Nochimowski: When we say cloud we normally mean SaaS, that is, managed services in the cloud. But at VO we come at it from a different angle. We don’t see everyone migrating to the cloud, and certainly not to SaaS mode, overnight; many customers are asking for at least part of their TV solutions to remain deployed on-prem for a variety of good reasons. We see a hybrid market going forward, so the question then is how we deal with that and enable solutions and tools to manage security, scalability, and upgradability in the hybrid environment. This is what has led us directly to this collaboration with the Google Cloud Platform (GCP).

Nicolas Pintaux: There has definitely been an evolution in the way that people look at what the cloud can bring to the media industry. The cloud has provided a lot of interesting features such as elasticity, but there have also been some real constraints, such as broadcasting live content; for security reasons, for example, you cannot just put your own equipment and custom broadcast hardware into most data centers, and this has slowed broad adoption. The change of pricing model (CAPEX vs OPEX) has also required a period of adaptation for most actors. However, it is accepted today that some applications, such as Video On Demand, have been a perfect fit right from the start.

Nowadays, operators typically adopt hybrid approaches, depending on their business needs and requirements. Some may decide to keep some equipment on-prem while migrating other parts to the cloud for more scalability and agility. There is definitely a mixed set of architectures that needs to be considered.

Q: What challenges currently facing broadcasters and operators does the new partnership address?

AN: The old paradigm of the big TV platform as a model that provides everything, from linear to on demand and all the complex business models that support it, does not fit the skinny bundle approach, where you need modularity. In some instances, on-demand services offered through subscription and/or advertising business models, together with a mix of other features, may suffice. And at the same time you need to deploy things more and more quickly, so you need continuous innovation. But then what is the architecture? What is the deployment and operation model that fits? We should rather aim at a TV platform that looks like a composite in terms of deployment and operation models. And for sure, cloud native operations in general are more in line with those kinds of challenges.

NP: Broadly speaking, in all industries we are in the era of digital transformation, and when you look at digital transformation there are two main things you need to think about. First, how do you modernise your applications so they can better leverage the platforms you want to deploy onto? Second, once you have done that, how do you optimise your “Day 2” operations? The processes that were once efficient for deploying your solutions are now hitting a brick wall. You need to analyse this, understand where your velocity is being blocked, and work out where your operations can be transformed as well. That’s where the cloud native trend is helping out. Here we talk about rearchitecting applications, but also about how to better measure all the different pieces of the puzzle and find out where you can optimise your processes. It’s about continuous improvement. Cloud native is about mindset first, and then you apply the technology. This is where the collaboration between VO and GCP is interesting, because we’re learning from each other to adapt both product sets to match the TV industry’s needs.

The hybrid cloud

Q: What are its USPs? How does it differ from other cloud-based solutions from other vendors?

AN: In terms of its ambitions we are trying to design something that can be deployed or managed in the hybrid cloud. This is the main challenge that we face. Today some customers want part of their solution on prem, other parts on AWS, Azure or GCP, so we need to make sure that the control plane can manage all that. And this is where the Anthos stack can really help. 

NP: Anthos, which is the Google Cloud solution that VO is leveraging, is a software platform based on several OSS components to which Google contributes significantly: Kubernetes, Istio, and Knative, but also operations-focused tools such as kustomize and kpt. From the start, the platform has been designed for reversibility for our customers. The main advantage of Anthos compared with the individual OSS projects is that Google Cloud integrates and packages the different components into releases that are supported by Google Cloud. Customers such as VO therefore do not need to regression-test each individual component release, which is a significant advantage considering the lifecycle of those components.
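To give a flavour of the operations tooling mentioned above, here is a minimal kustomize overlay sketch. All resource and image names are hypothetical illustrations, not VO or Google Cloud artifacts; the point is that a release can be pinned declaratively per environment and applied to any managed cluster:

```yaml
# kustomization.yaml — illustrative sketch only; resource and image
# names are hypothetical, not taken from VO or Google Cloud.
resources:
  - deployment.yaml          # a plain Kubernetes Deployment manifest

images:
  - name: vo-tv-service      # hypothetical image name to rewrite
    newTag: "2.3.1"          # pin the release tag for this environment

commonLabels:
  environment: on-prem       # stamp every resource with the target env
```

Running `kubectl apply -k .` in the overlay directory renders and applies the result, so the same manifests can target on-prem and cloud clusters with only the overlay changing.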

With Anthos, Google Cloud also brings years of best practices managing containers in production. Anthos was built on the SRE (Site Reliability Engineering) precept, and integrates findings from Google’s DevOps Research & Assessment (DORA) team, which are both providing insights and methods to increase software delivery performance.

It is also important to highlight that Google uses Anthos for its own internal systems.

Q: How do we bring customers operating legacy systems or based in low-bandwidth locations with us? Is this where hybrid cloud environments are key?

NP: This is indeed where Hybrid Cloud plays an important part. With Anthos, Viaccess-Orca will be able to deploy Kubernetes clusters on-premise. The GCP console will provide information about the health and versions of those environments, and will enable operators to perform remote upgrade operations. This will work even in low-bandwidth conditions.

Q: One of the key takeaways from the joint presentation in Paris was that we need to rethink TV platform architecture design as a set of microservices. Why? What advantages does this bring?

AN: Instead of the big monolithic paradigm where there is one software architecture that does everything, you cut it into pieces. This makes it much more manageable in terms of the data, the upgrades, and all such optimisations. Combined with a DevOps approach, that brings you a lot of added value. But at the same time it should not be a dogma. You can also have legacy systems out there that work. I’m not for microservicing architectures for the sake of it. You can’t throw the baby out with the bathwater. You have to investigate where it makes sense, where it provides benefits, rather than just scrapping everything. You can improve, though: you can containerise what is already there and design new services around legacy systems if they are still performing well.

NP: I completely agree with Alain. We take the view that all technical decisions should be made with business input. If your current implementation serves your business and there is no better economic solution, then it should remain there. However, if someone comes up with a new system that is a lot less expensive, then this will tend to trigger a move. You can’t microservice everything, but you may analyse your systems and discover that you have two or three modules that you can and should change to microservices, and these will then have their own lifecycles. To that effect, SRE (Site Reliability Engineering) can be a very interesting practice to put in place in your organization. This will, for example, facilitate the collegial decision between the business and the engineering teams regarding which components to modernize first. This can bring better visibility on where to allocate your engineering budget to serve your business.

Green streaming

Q: How does a movement to the cloud help with the rollout of green streaming? Playing devil’s advocate here, is it not just passing the problem along so emissions occur in a data centre rather than on-prem?

NP: In a nutshell, the data centres owned and operated by hyperscalers are more energy efficient than the typical on-premise datacenter. Google for example uses AI to manage the cooling of their facilities, optimizing energy consumption based on real needs. By moving workloads to the cloud, customers can therefore benefit from these optimizations.

[Google states: “We have been matching our global electricity consumption with renewable energy since 2017 and are aiming higher: our goal is to run on carbon-free energy, 24/7, at all of our data centers by 2030. Plus, we’re sharing technology, methods, and funding to enable organizations around the world to transition to more carbon-free and sustainable systems.”]

The second advantage of moving to the cloud is to optimize the size of the infrastructure required to run the workloads by leveraging the elasticity capabilities of the cloud. In a typical datacenter, hardware is sized and procured to handle peak activities. The infrastructure is therefore oversized most of the time. In Google Cloud, there are optimization tools that can provide insights on the best sizing for a given workload (e.g. how much CPU/RAM/disk to use) for its nominal usage. We can couple this with "intelligent" autoscalers to adapt the sizing of the infrastructure automatically based on the required load. You therefore deploy a system that is "right-sized" whatever the pattern of consumption of your services.
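As a concrete sketch of the autoscaling idea described above, a standard Kubernetes HorizontalPodAutoscaler grows and shrinks a workload with demand instead of sizing it for peak. The workload name and thresholds below are hypothetical, chosen purely for illustration:

```yaml
# Illustrative sketch — "packager" is a hypothetical workload name.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: packager-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: packager
  minReplicas: 2             # floor for off-peak hours
  maxReplicas: 20            # ceiling for prime-time load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

With this in place, the cluster runs near its nominal size most of the day and only pays for extra capacity when viewing peaks actually occur.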

AN: Everything starts from measuring and one of the benefits of cloud native architectures is observability. With the cloud you can count, you can measure, and you can optimise and improve over time. 

Q: Another Parisian takeaway was we need to move from tailor made to ready made TV platforms. Why? What advantages does this confer to broadcaster and operators and what do we need to get there?

AN: The TV business is going to stay tailor made to some extent, but automation brings a lot of advantages and we need to bring more of it to this industry. A simple example: how do you do upgrades? We had the big Log4j crisis recently and it was a pretty interesting time for everyone. How do you track all the occurrences of this in your code and your deployments over time? How do you change that? Automation in the context of security alerts like this will help a lot, and that is only one example.
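As a toy illustration of the tracking problem described above, the sketch below (a hypothetical helper, not VO tooling) walks a directory tree and flags jar files that bundle the JndiLookup class targeted by the Log4Shell mitigations. A real pipeline would automate this kind of scan across every build artifact and container image:

```python
import zipfile
from pathlib import Path

# Class whose presence indicated a Log4Shell-vulnerable Log4j bundle;
# the published mitigations removed it from the jar.
MARKER = "JndiLookup.class"

def find_vulnerable_jars(root: str) -> list[str]:
    """Return paths of .jar files under `root` that contain MARKER."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                # A jar is just a zip; scan its entry names.
                if any(name.endswith(MARKER) for name in zf.namelist()):
                    hits.append(str(jar))
        except zipfile.BadZipFile:
            continue  # skip corrupt or non-zip files
    return sorted(hits)
```

The point is not the twenty lines themselves but that, once scripted, the check can run on every deployment automatically instead of relying on a manual audit during a crisis.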

NP: Actors like Google can bring some insights and processes in terms of delivering software in the most secure way, and how to react fast. You need to have processes in place to enable you to change things very rapidly while at the same time staying fully in control of what you are deploying. You want to avoid any loopholes or errors even when working at speed. This is where SRE can really help.

We are moving into an era where you will still build a platform from different modules, but this composition of modules will be automated. You need to ensure the provenance of all these modules is well known and controlled. Google has been strongly involved in the SLSA framework, which provides guidelines on how to achieve this level of control.

AN: Essentially you have to rethink your entire software production chain.

Serverless models and more

The industry is at an interesting juncture: an extended pivot point to the cloud that is likely to take several years to resolve. Both of our interviewees expect many of the issues discussed here and elsewhere to at least be on the way to being solved in five years’ time, with concepts ready and in place to start doing the heavy lifting of development.

They also note that evolution is constant. Currently there is a lot of traction developing around serverless development models: a cloud-native application development and execution environment that enables developers to build and run application code on an event basis, without having to manage the underlying infrastructure. All they do is concentrate on building the code, entrusting cloud service providers to manage the appropriate containers for it as well as all infrastructure, maintenance, and support. It’s fast, it’s agile, it’s cost effective (developers never pay for idle capacity), and it is likely the next stage in a journey that sees software development accelerating appreciably every year. It is also worth noting that serverless programming is compatible with on-premise systems (via Knative) and is supported on the Anthos platform.
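For the curious, a minimal Knative Service manifest shows how little declaration the serverless model requires. The service name, image, and environment variable below are hypothetical; Knative scales the container with incoming requests, down to zero when idle:

```yaml
# Illustrative sketch — name, image, and env var are hypothetical.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: vod-thumbnailer
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/vod-thumbnailer:1.0
          env:
            - name: OUTPUT_BUCKET    # illustrative configuration
              value: "thumbnails"
```

The developer ships only the container and this short manifest; routing, revisioning, and request-driven scaling are handled by the platform.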

To find out more about the subject and how our partnership with Google Cloud will help move your application development forward at ever-increasing speed, please make an appointment to see us at IBC on Stand 1:A51.

Atika Boulgaz

Atika Boulgaz is EVP Global Communication at Viaccess-Orca. She has 360° communication vision and experience thanks to several positions within the Orange Group, notably at Wanadoo and Orange France. After three years in the Gaming Unit (GOA) of the Orange Content Division she became expert in Press Relations, Event Management and Advertising. Atika joined Viaccess-Orca in 2010 and she is now managing the Marketing Communications and Internal Communication activities for the company. Atika graduated with a Masters degree in Communication and Advertising from INSEEC Paris.