Empowering developer velocity and efficiency with Kubernetes
Welcome to KubeCon North America! It seems like only yesterday that we were together in San Diego. Though we’re farther apart physically this year, the Kubernetes community continues to go strong. Here in Azure, we’re thrilled to have seen how both our open-source efforts and the Azure Kubernetes Service have enabled people and companies like Finxact, Mars Petcare, and Mercedes-Benz to scale and transform in response to the COVID-19 pandemic.
In today’s environment, customers are looking to Azure and Kubernetes for application platforms and patterns that make it faster to build new applications and easier to iterate on the applications they’ve already built. Kubernetes on Azure is a reliable and secure foundation for this cloud-native application development. At the same time, the pressures of the current environment make it critical to be as efficient as possible, and we are excited to see the ways that the Azure Kubernetes Service has empowered people to improve their operational and resource efficiency. Over the last few months, our Microsoft teams have built amazing technology that enables our customers to be more efficient, and I am excited to share some of that with you today.
Empowering people with Azure Kubernetes Service
When we think about modern application development, one of the most popular approaches is Functions as a Service (FaaS) and event-driven programming. While these approaches were first developed on cloud-hosted platforms, the last few years have seen growing work to bring Functions as a Service to Kubernetes. A big part of this innovation has been the Kubernetes Event-driven Autoscaling (KEDA) project. Though KEDA started as a joint project between Microsoft and Red Hat, it has rapidly grown into a true community project. We have been thrilled to see KEDA extended by community members to connect event-driven programming with Apache Airflow and Alibaba Cloud. Recently the KEDA community announced their 2.0 release, including improvements to the ScaledObject KEDA resource as well as new scalers that make it easy to integrate KEDA into many different workflows. KEDA 2.0 is generally available and ready for your production event-driven workloads.
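To give a flavor of how KEDA wires an event source to a workload, here is a minimal sketch of a KEDA 2.0 ScaledObject applied with kubectl. The Deployment name, queue name, replica counts, and connection environment variable are hypothetical placeholders, not a prescribed configuration; consult the KEDA documentation for the exact trigger metadata your scaler needs.

```bash
# Minimal sketch: scale a hypothetical "order-processor" Deployment on the
# depth of an Azure Storage queue. All names and values are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: keda.sh/v1alpha1          # KEDA 2.0 moved the API group to keda.sh
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor             # the Deployment to scale
  minReplicaCount: 0                  # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "5"              # target messages per replica
        connectionFromEnv: STORAGE_CONNECTION_STRING
EOF
```

With a manifest along these lines, KEDA watches the queue and adjusts the Deployment’s replica count, including scaling it down to zero when there is no work to do.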
Enabling innovation is a core tenet of the cloud-native team in Azure, but Kubernetes is also a backbone for mission-critical infrastructure. Therefore, we have been investing heavily in the fundamentals that highly latency-sensitive workloads require. This past September 15, Microsoft made the future of gaming available via its Xbox Game Pass streaming service. From the beginning, it has been awesome to partner with Team Xbox to enable their use of Azure Kubernetes Service to power Project xCloud. In the spirit of supporting such mission-critical workloads, we are bringing Kubernetes version 1.19 to general availability and adding hardened images that align to the Microsoft security baseline and conform to the Linux and Kubernetes CIS benchmarks. We are also continuing to support open innovation via the integration of containerd into the Azure Kubernetes Service (AKS). Many of these high-profile workloads, like gaming, are extremely latency-sensitive, so we have also added support for ephemeral OS disks in AKS, enabling faster cluster creation and lower-latency disk access. And these are just the improvements for this month. Honestly, every month brings new features that improve reliability, performance, and scale in AKS.
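As one illustration, ephemeral OS disks can be requested when adding a node pool to an existing cluster. The resource group, cluster, pool name, and VM size below are placeholders, and flags can vary by Azure CLI version, so treat this as a sketch rather than a definitive command.

```bash
# Sketch: add an AKS node pool whose nodes use ephemeral OS disks.
# Names are placeholders; the VM size must have enough cache to hold the OS disk.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ephemeralpool \
  --node-vm-size Standard_DS3_v2 \
  --node-osdisk-type Ephemeral
```

Because the OS disk lives on local VM storage rather than remote managed storage, node provisioning and day-to-day OS disk I/O are both faster.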
As we think about improvements to AKS, we also want to improve the operational efficiency of our customers. We are deeply committed to building flexibility into AKS so that our customers can tailor their experience to their needs. To that end, we have recently brought the upgrade max surge capability to general availability. The maxSurge setting enables faster upgrades by bringing up multiple buffer nodes at once to replace older nodes concurrently. Instead of replacing a single node at a time, users can now set their own max surge value per node pool to define how many concurrent replacements occur. Increasing the max surge for an upgrade significantly reduces upgrade time at the cost of increased disruption. By default, AKS upgrades remain slow and careful, replacing one node at a time to minimize disruption to applications running on the cluster. However, as our customers adopt more cloud-native, disruption-tolerant application patterns, they can accelerate their upgrades using max surge.
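For example, a node pool’s max surge can be tuned from the CLI before kicking off an upgrade. The 33% value, resource names, and Kubernetes version below are illustrative placeholders under the assumption that the `--max-surge` flag is available in your Azure CLI version; this is a sketch, not the definitive procedure.

```bash
# Sketch: allow up to a third of the node pool to surge at once during upgrades.
# All names, the 33% value, and the version are illustrative placeholders.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --max-surge 33%

# Subsequent node pool upgrades will then replace nodes in parallel.
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --kubernetes-version 1.19.3
```

A percentage scales naturally with pool size, while an absolute value (for example, `--max-surge 3`) pins the number of buffer nodes regardless of how large the pool grows.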
Connect with us at KubeCon
Congratulations to all the teams across Microsoft who have made these improvements possible. Every time I meet with a customer, whether at KubeCon or beyond, it is a real pleasure to see how this hard work has empowered every one of our customers to do more. Since we are virtual, I’m hosting an Ask Me Anything panel session on Wednesday, November 18 at 2:30 Pacific Time with several Microsoft engineering leads to answer any questions you may have. To everyone in the Kubernetes community, have a great KubeCon. I am looking forward to (fingers crossed) seeing you in person next year.