Monday, February 7, 2022
Cloud-native applications today are increasingly complex and therefore increasingly hard to understand. It’s critical to connect decisions around resource allocation and architecture to business metrics such as end-user latency, but very difficult to do in practice. Ultimately, understanding how your systems behave and why is a data analytics problem. Like most data analytics problems, the trick is in collecting and wrangling the right data sources. In this talk, you will learn how Pixie, an open-source observability platform for Kubernetes, can be used to painlessly turn low-level telemetry data into high-level signals about system health. The talk will also show how these high-level signals can be used as input to infrastructure workloads such as CI/CD and load balancing in order to improve their performance.
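Turning low-level telemetry into a high-level health signal can be sketched as a PxL script, Pixie's Python-like query language. This is an illustrative sketch rather than material from the talk: the table and column names (`http_events`, `latency`, `ctx`) follow Pixie's documented schema, and the script runs inside a Pixie-enabled cluster, not as standalone Python:

```python
# PxL sketch: per-service request latency from auto-collected HTTP spans.
import px

# HTTP requests captured by Pixie's eBPF probes over the last 5 minutes.
df = px.DataFrame(table='http_events', start_time='-5m')
df.service = df.ctx['service']          # Kubernetes metadata resolved by Pixie
df.latency_ms = df.latency / 1.0e6      # nanoseconds -> milliseconds

# Aggregate into a high-level signal: latency distribution and volume per service.
df = df.groupby('service').agg(
    latency_ms=('latency_ms', px.quantiles),
    requests=('latency_ms', px.count),
)
px.display(df)
```

A signal like the per-service p99 from this output is exactly the kind of input a CI/CD gate or load balancer could consume.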
Tuesday, February 8, 2022
PRO TALK (CloudWorld): Debugging Kubernetes-based Microservices with Telepresence: Local-to-Remote "Hot Reload"
Many organizations adopt cloud native development practices with the goal of shipping features faster. The technologies and architectures may change when we move to the cloud, but the fact remains that we all still add the occasional bug to our code. The challenge here is that many of your existing local debugging tools and practices can't be used when everything is running in a container or deployed onto Kubernetes running in the cloud. This is where the open source Telepresence tool can help.
Join me to learn about:
- The challenges with scaling Kubernetes-based development, i.e. you can only run so many microservices locally before minikube melts your laptop
- An exploration of how Telepresence can "intercept" or reroute traffic from a specified service in a remote K8s cluster to your local dev machine
- The benefits of getting a "hot reload" fast feedback loop between applications being developed locally and apps running in the remote environment
- A tour of Telepresence, from the sidecar proxy deployed into the remote K8s cluster to the CLI
- An overview of using "preview URLs" and header-based routing for the sharing, collaboration, and isolation of changes you are making on your local copy of an intercepted service
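The intercept workflow above can be sketched with the Telepresence CLI. The service name and ports are placeholders, exact flags vary by Telepresence version, and these commands only work against a live Kubernetes cluster:

```
# Connect your laptop to the remote cluster's network.
telepresence connect

# See which workloads in the current namespace can be intercepted.
telepresence list

# Reroute the cluster's traffic for "my-service" to a process
# listening on localhost:8080, enabling a local hot-reload loop.
telepresence intercept my-service --port 8080:http

# Tear the intercept down when you are done.
telepresence leave my-service
```

While the intercept is active, code changes picked up by your local process are immediately exercised by real traffic from the remote environment.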
The Koii Protocol tracks attention on the open internet to equitably reward valuable content, and the network of Koii Nodes provides faster, cheaper, and more rewarding ways to build cross-compatible, chain-agnostic decentralized apps.
We will explore the benefits of a managed IoT Cloud platform, HiveMQ Cloud, which lightens the burden of deploying, connecting, messaging with, and monitoring your enterprise-grade IoT devices and messaging brokers.
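As a sketch of what connecting a device involves, assuming the paho-mqtt Python client and a HiveMQ Cloud cluster. The hostname, credentials, and topic are placeholders, and HiveMQ Cloud requires TLS on port 8883, so this only runs against a live broker:

```python
# Sketch: publish one reading to a managed HiveMQ Cloud broker.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-01")
client.tls_set()                                      # TLS is mandatory on HiveMQ Cloud
client.username_pw_set("my-user", "my-password")      # placeholder credentials
client.connect("xxxxxxxx.s1.eu.hivemq.cloud", 8883)   # placeholder cluster hostname
client.publish("factory/line1/temperature", payload="21.7", qos=1)
client.disconnect()
```

The platform handles the broker side of this exchange: provisioning, scaling, and monitoring, so the device code is all you maintain.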
Wednesday, February 9, 2022
Cloud deployments offer the potential for almost infinite resources and flexible scalability. But there are so many options! It can be overwhelming to know which services are best for your use case. Building distributed systems which take advantage of in-memory computing only adds to the complexity.
During this session we will introduce a new cloud service for the Apache Ignite in-memory computing platform and the best practices we followed in implementing this service. We will look at the advantages and disadvantages of containers vs. VMs, the value of standardized configurations, how to size system resources based on the workload, and how we configured security and networking.
Containers and Kubernetes allow for code portability across on-premises VMs, bare metal, and multiple cloud provider environments. Yet, despite this portability promise, developers may include configuration and application definitions that constrain or even eliminate application portability. In this session, Oleg Chunikhin, CTO of Kublr, describes best practices for “configuration as code” in a Kubernetes environment. He will demonstrate how a properly constructed containerized app can be deployed to both Amazon and Azure, and how Kubernetes objects, such as persistent volumes, ingress rules, and services, can be used to abstract away the underlying infrastructure.
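A minimal sketch of such provider-neutral manifests (the names and host are hypothetical): the PersistentVolumeClaim requests storage by size and access mode rather than by a cloud disk ID, and the Ingress describes routing that either cloud's controller can satisfy:

```yaml
# Hypothetical portable manifests: no cloud-specific identifiers, so the
# same files apply unchanged to an Amazon EKS or Azure AKS cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  # storageClassName omitted: the cluster's default class maps to
  # EBS on AWS or Azure Disk on Azure.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```

Hard-coding a volume ID or a cloud load balancer annotation into these objects is exactly the kind of choice that would break the portability promise.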
By now, most of us have experienced the benefits of automated drift detection and reconciliation; any application running in Kubernetes benefits from them. No matter what happens to our resources, Kubernetes will always try to converge the actual state into the desired state without human intervention.
Why don't we have those features when working with infrastructure? Why don't we embrace the Kubernetes API for everything, not only for infra? If we do, we'll be able to manage all our resources in the same way and reap the same benefits, no matter whether those resources are applications, infrastructure, services, or anything else.
In this talk, we'll explore the effects of having (and not having) automated drift detection and reconciliation applied to infrastructure, and look at Crossplane as one possible solution that enables us to leverage the Kubernetes control plane to manage everything, including infra.
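A minimal sketch of what this looks like in practice, assuming Crossplane and its AWS provider are installed in the cluster; the API group and field names vary by provider version, and the bucket name is a placeholder:

```yaml
# Hypothetical Crossplane managed resource: the desired state of a cloud
# bucket expressed as a Kubernetes object. Crossplane continuously
# reconciles the real S3 bucket against this spec, so manual changes in
# the AWS console are detected and reverted, just like drift on a
# Deployment's pods.
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket
spec:
  forProvider:
    acl: private
    locationConstraint: us-east-1
  providerConfigRef:
    name: default
```

Because this is an ordinary Kubernetes resource, the same GitOps pipelines, RBAC, and tooling used for applications now govern infrastructure too.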
Understanding what is happening with a solution that is built from multiple components can be challenging. While the solution space for monitoring and application log management is mature, there is a tendency for organizations to end up with multiple overlapping tools in this space to meet different team needs. These tools also tend to aggregate first and then act, rather than considering things at a more granular level.
Fluentd presents a means to simplify the monitoring landscape and address the hyper-distribution challenges that come with microservice solutions, allowing the different tools that need log data to each help in their own way.
In this session, we’ll explore the challenges of modern log management and how Fluentd can make hybrid and multi-cloud solutions easier to monitor.
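A hypothetical fluent.conf sketch of that simplification: collect container logs once, then fan the same stream out to the different tools that need it. Hosts, bucket names, and plugin parameters are placeholders, and the elasticsearch and s3 output plugins must be installed separately:

```
# Collect JSON container logs once at the source.
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.pos
  tag kube.*
  <parse>
    @type json
  </parse>
</source>

# Fan the single stream out so each team's tool gets the same data.
<match kube.**>
  @type copy
  <store>
    @type elasticsearch          # search and dashboards for the ops team
    host elasticsearch.logging.svc
    port 9200
  </store>
  <store>
    @type s3                     # cheap long-term archive for audit
    s3_bucket my-log-archive
    s3_region us-east-1
    path logs/
  </store>
</match>
```

One collection pipeline replaces the per-tool agents that otherwise accumulate across teams.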
PRO TALK (CloudWorld): How an AI-Driven Approach Reduces Cloud Cost and Makes Your Kubernetes Infrastructure Autonomous
Measuring and controlling costs in cloud environments is often complex. But it does not need to be. In this session, we will discuss how an AI-driven approach renders your cloud native applications on Kubernetes fully autonomous and rightsizes your cluster's cloud compute resources at sub-minute intervals. We will go over an experiment with the deployment of an application, and apply autonomous techniques that fiercely control and optimize the cluster.
We will discuss how to control and optimize the cost of your AWS EKS, Google GKE, and Azure AKS applications in minutes. You will learn about powerful, yet simple, strategies to rightsize your clusters: automatically scaling your nodes and pods up and down (including to zero), smart selection of VM shapes, and the automated use of spot instances.
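As context for what such an AI-driven approach automates away, the conventional building block is a hand-tuned HorizontalPodAutoscaler; the sketch below uses hypothetical names and thresholds. Note that a vanilla HPA cannot scale pods to zero, which is part of the gap these tools target:

```yaml
# Baseline: a manually tuned HPA that an autonomous rightsizing tool
# would replace with learned, continuously adjusted scaling decisions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 1        # vanilla HPA floor; scale-to-zero needs other tooling
  maxReplicas: 10       # ceiling guessed by an operator, not learned
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # static threshold chosen by hand
```

Every number in this manifest is a human guess revisited rarely; the session's premise is replacing those guesses with sub-minute, data-driven adjustments.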