
Track: Software Defined Infrastructure: Kubernetes, Service Meshes, & Beyond

Location: Majestic Complex, 6th fl.

Day of week: Wednesday

Many companies are embracing Kubernetes for container orchestration, along with a service mesh technology such as Istio, to manage their applications. As the applications managed by Kubernetes and Istio continue to mature, users are stretching the limits of what is currently possible. In this track, we will hear from users and contributors who are helping to evolve the Kubernetes and service mesh ecosystems. We will hear about large-scale cluster management and pushing the limits of scalability. We will explore the trade-offs between efficiency and utilization on the one hand and performance on the other, and dive into what it means to run multi-tenant applications. Finally, as systems become more complex, operating them becomes harder as well. We will talk about effective debuggability and observability in the context of Kubernetes and service meshes.

Track Host: Dawn Chen

Software Engineer @Google

Senior Staff Software Engineer, Google Cloud. Dawn has worked on Kubernetes since day one, when the project was founded. She has been one of the tech leads in both Kubernetes and GKE, mainly focused on the node and infrastructure areas. Prior to Kubernetes, she was one of the tech leads for Google's internal container infrastructure, Borg, for about six years.

10:35am - 11:25am

Alibaba Container Platform Infrastructure - a Kubernetes Approach

As one of the biggest data companies in the world, Alibaba provides thousands of online and offline services to various customers to support their business. Most Alibaba applications are fully containerized and run on top of the Alibaba container platform, which manages a huge number of clustered physical machines. A typical container cluster consists of tens of thousands of nodes and manages more than a hundred thousand heterogeneous applications.

With the rapid increase in adoption and the active development in the community, Kubernetes has become the dominant cloud operating system for managing cloud native applications. To adopt this emerging technology, we decided to fully integrate upstream Kubernetes into the existing Alibaba container management system. In this talk, we will present how we extended and scaled Kubernetes to make this integration succeed.

Overall, we’d like to deliver the following key takeaways from this talk:

Architecturally, Kubernetes is a scalable container operating system. It can manage large-scale clusters (more than 10K nodes) with minimal modifications.

Kubernetes can support complicated application deployment and upgrade requirements by leveraging its strong extensibility. We built a set of new controllers to satisfy our application requirements.
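
Custom controllers like these all follow the same reconciliation pattern: compare the desired state declared in the API with the observed state, then act to close the gap. The sketch below is a minimal, hypothetical illustration of that core decision, far simpler than any real controller:

```python
# A minimal sketch of the controller pattern Kubernetes extensions rely on:
# observe desired vs. actual state and take one step toward convergence.
# The replica-count state model here is hypothetical and for illustration.

def reconcile(desired_replicas, actual_replicas):
    """Return the action a controller would take for one object."""
    if actual_replicas < desired_replicas:
        return ("scale_up", desired_replicas - actual_replicas)
    if actual_replicas > desired_replicas:
        return ("scale_down", actual_replicas - desired_replicas)
    return ("noop", 0)
```

A real controller runs this comparison in a loop, re-triggered by watch events, so the system converges even after failures.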

The scheduler plug-in mechanism makes it possible to overcome the default scheduling limitations by developing a new scheduler as a replacement. We will share the design of our in-house scheduler, which scales extremely well.
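
To illustrate the kind of logic a replacement scheduler implements, here is a toy filter-and-score sketch. The data model, field names, and weights are invented for illustration; this is not Alibaba's actual scheduler design:

```python
# Toy sketch of a Kubernetes-style scheduling decision: filter out nodes
# that cannot fit the pod, then score the rest and pick the best.
# All field names and the scoring policy are hypothetical.

def score_node(node, pod):
    """Score a node for a pod; higher is better. None if infeasible."""
    free_cpu = node["cpu_capacity"] - node["cpu_used"]
    free_mem = node["mem_capacity"] - node["mem_used"]
    if free_cpu < pod["cpu_request"] or free_mem < pod["mem_request"]:
        return None  # filter phase: node cannot fit the pod
    # Least-requested-style priority: prefer nodes with more headroom left.
    cpu_score = (free_cpu - pod["cpu_request"]) / node["cpu_capacity"]
    mem_score = (free_mem - pod["mem_request"]) / node["mem_capacity"]
    return (cpu_score + mem_score) / 2

def pick_node(nodes, pod):
    scored = [(score_node(n, pod), n["name"]) for n in nodes]
    feasible = [(s, name) for s, name in scored if s is not None]
    return max(feasible)[1] if feasible else None
```

Scaling such a scheduler to tens of thousands of nodes is largely about making the filter and score phases cheap and parallel, which is where in-house designs tend to diverge from the default.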

It is important to keep the Kubernetes APIs intact during the integration in order to preserve a standard for upper-layer PaaS clients. We will share some of our integration best practices for this purpose.

Fei Guo, Senior Staff Engineer in Alibaba Container Platform Group

11:50am - 12:40pm

How to Evolve Kubernetes Resource Management Model

Built on Linux container and cgroup technologies, Kubernetes provides an efficient framework for deploying different kinds of application workloads across multiple machines and compute platforms. Over the past five years, Kubernetes has evolved to support increasingly complex and diverse classes of applications, such as web services, databases, big data, and AI/ML workloads. As people adopt Kubernetes to run ever more diverse enterprise-class, cloud-native, and web-scale workloads, we are seeing more requirements for improving the Kubernetes resource management model: better isolation, utilization, and performance consistency, while still providing a flexible and extensible framework that lets people enable more hardware- and workload-specific optimizations on Kubernetes.

In this talk, we will first provide an overview of the current Kubernetes resource model and best practice guidance on managing compute resources and specifying application resource requirements on Kubernetes. We will then discuss some recent and ongoing work on extending the Kubernetes resource model to provide better resource isolation, support more diverse hardware, facilitate fast and flexible application and resource scaling, and promote more consistent application performance across different compute platforms.
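
One concrete piece of the current resource model is the QoS class Kubernetes assigns to each pod based on the requests and limits in its spec, which in turn drives eviction and isolation behavior. The sketch below mirrors the documented rules in simplified form; it is illustrative, not kubelet code:

```python
# Simplified sketch of how Kubernetes derives a pod's QoS class
# (Guaranteed / Burstable / BestEffort) from container requests and limits.
# Containers are modeled as dicts like {"requests": {...}, "limits": {...}}.

def qos_class(containers):
    # BestEffort: no container sets any requests or limits at all.
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    # Guaranteed: every container has cpu and memory limits, and its
    # requests (if set) equal its limits; omitted requests default to limits.
    guaranteed = all(
        c.get("limits", {}).get("cpu") is not None
        and c.get("limits", {}).get("memory") is not None
        and c.get("requests", c["limits"]) == c["limits"]
        for c in containers
    )
    return "Guaranteed" if guaranteed else "Burstable"
```

Guaranteed pods get the strongest isolation and are evicted last under node pressure, which is why resource-sensitive workloads usually set requests equal to limits.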

Jiaying Zhang, Software Engineer @Google Kubernetes team

1:40pm - 2:30pm

The Service Mesh: It's About Traffic

The "cloud native" ecosystem, largely centered around Kubernetes and microservice architectures, has taken the industry by storm. As teams rush into this brave new world, they quickly find that they need a new set of instrumentation and tooling primitives. This can be overwhelmingly complex, and without a disciplined, incremental approach, these migrations can be doomed to fail.

In 2016, we launched the Linkerd project, a service mesh hosted by the Cloud Native Computing Foundation, to give operators control over the traffic between their microservices. Since then, dozens of organizations have adopted Linkerd as a component of their cloud native architecture; and we've learned a ton about the pitfalls and pratfalls of adopting a service mesh.  

In this talk, we'll discuss:

  • How the service mesh feature set developed organically at early cloud native companies like Twitter
  • The lessons we've learned helping dozens of organizations get to production with Linkerd
  • How we've applied these lessons to tackle complexity with Linkerd

Oliver Gould, Co-Founder & CTO @BuoyantIO

2:55pm - 3:45pm

Introduction to SMI (the Service Mesh Interface)

Recently, a consortium of companies released a generic interface for service mesh technology. The goal of this abstraction layer is to provide an easy-to-consume API that can be implemented by many different service mesh implementations (e.g., Istio, Linkerd, Consul Connect). By providing an abstraction between users and implementations, users are free to adopt service mesh concepts without being bound to any particular implementation. Likewise, tooling and ecosystem products for service meshes can evolve without having to bet on any specific mesh technology. This talk will cover the SMI specification, its implementations, and a look at the future evolution of this approach.
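
As a concrete example, SMI defines a TrafficSplit resource for weighted routing between backend services, which any conforming mesh can implement. The sketch below builds one as a plain Python dict; the apiVersion string and the service names are assumptions for illustration, so check which spec version your mesh implements:

```python
# Sketch of an SMI TrafficSplit resource built as a plain dict: it sends
# 90% of traffic for a root service to one backend and 10% to a canary.
# The apiVersion and service names are illustrative assumptions.

def traffic_split(name, root_service, backends):
    """backends: list of (service_name, weight) pairs."""
    return {
        "apiVersion": "split.smi-spec.io/v1alpha2",  # assumed spec version
        "kind": "TrafficSplit",
        "metadata": {"name": name},
        "spec": {
            "service": root_service,  # the service clients address
            "backends": [
                {"service": svc, "weight": w} for svc, w in backends
            ],
        },
    }

canary = traffic_split(
    "checkout-canary", "checkout",
    [("checkout-v1", 90), ("checkout-v2", 10)],
)
```

Because the resource is mesh-agnostic, the same manifest can drive a canary rollout on any SMI-conforming implementation without changes.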

Brendan Burns, Co-Founder of the Kubernetes Open Source Project & Distinguished Engineer @Microsoft

4:10pm - 5:00pm

Securing a Multi-Tenant Kubernetes Cluster

Organizations are rapidly adopting containers and Kubernetes to more easily develop and manage the applications that drive business value. However, to get the biggest bang for your Kubernetes buck, you need strong multi-tenancy with security built in at all layers of the software stack. You need to match the speed and agility of Kubernetes with both proactive and reactive security measures. You need defense in depth. In this session, Red Hat’s Kirsten Newcomer will identify the most common layers in a typical container deployment, and discuss technologies and deployment patterns you can use to ensure strong multi-tenancy at each layer. These layers include the container host, the container platform, networking, storage, and API management. Kirsten will also touch on best practices for managing container content and registries, the build process, and the deployment process in a multi-tenant cluster.
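
As one small example of built-in security at the networking layer, many multi-tenant setups start each tenant namespace with a default-deny ingress NetworkPolicy, so pods only receive traffic that a later, more specific policy explicitly allows. A sketch of that manifest as a Python dict (the namespace name is a placeholder):

```python
# Sketch of a per-tenant guardrail: a default-deny ingress NetworkPolicy
# for a tenant namespace. With this in place, all inbound pod traffic is
# blocked until a more specific allow policy is added.

def default_deny_ingress(namespace):
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": namespace},
        "spec": {
            "podSelector": {},           # empty selector: applies to all pods
            "policyTypes": ["Ingress"],  # deny all ingress by default
        },
    }
```

Pairing a policy like this with namespace-scoped RBAC and resource quotas covers several of the layers the talk enumerates.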

Kirsten Newcomer, OpenShift Senior Principal Product Manager @RedHat

2019 Tracks

Wednesday, 26 June