CloudWorld Main Stage
Tuesday, February 8, 2022
Event-driven, real-time development in the cloud is a major part of many organizations’ digital transformation initiatives, and businesses realize that data is the currency of competitive advantage. Event-driven applications must consume, enrich, and deliver data securely, in real time, and efficiently at scale. Therefore, the size of data packets, the speed and frequency of data transmission and update, and the “intelligence” of data handling are critical to successfully running mission-critical corporate applications and making time-sensitive business decisions.
The core expertise of many companies lies in developing their business applications, not in developing streaming data technology. As organizations everywhere move to the cloud, the demand for dynamic enrichment, management, and security of real-time, in-flight data is critical. The fundamental challenge of developing event-driven, real-time applications and systems for the cloud is managing the complexity of the end-to-end journey from sources to recipients of highly “perishable” data: fast, reliably, securely, often in large volume, and sometimes to many recipients (hundreds of thousands of applications, systems, and devices concurrently). This talk will highlight how an Intelligent Event Data Platform enables organizations to accelerate innovation and deliver game-changing, real-time applications to market faster, while significantly reducing the cost of software development and operations.
The Cloud Native Computing Foundation (CNCF) brought you fan favorites such as Kubernetes and Prometheus. In this talk, Annie Talvasto will introduce you to the most interesting and coolest upcoming CNCF tools and projects.
This compact, demo-filled talk will give you ideas and inspiration so that you can 1) discover new technologies and tools to use in your future projects and 2) be the coolest kid on the block by staying up to date with the latest and greatest.
PRO TALK (CloudWorld): Debugging Kubernetes-based Microservices with Telepresence: Local-to-Remote "Hot Reload"
Many organizations adopt cloud native development practices with the goal of shipping features faster. The technologies and architectures may change when we move to the cloud, but the fact remains that we all still add the occasional bug to our code. The challenge here is that many of your existing local debugging tools and practices can't be used when everything is running in a container or deployed onto Kubernetes running in the cloud. This is where the open source Telepresence tool can help.
Join me to learn about:
- The challenges with scaling Kubernetes-based development i.e. you can only run so many microservices locally before minikube melts your laptop
- An exploration of how Telepresence can "intercept" or reroute traffic from a specified service in a remote K8s cluster to your local dev machine
- The benefits of getting a "hot reload" fast feedback loop between applications being developed locally and apps running in the remote environment
- A tour of Telepresence, from the sidecar proxy deployed into the remote K8s cluster to the CLI
- An overview of using "preview URLs" and header-based routing for the sharing, collaboration, and isolation of changes you are making on your local copy of an intercepted service
PRO TALK (CloudWorld): Modernizing Applications with Oracle Verrazzano Enterprise Container Platform
Oracle released Verrazzano as container management platform software in 2021. It is an open source platform that accelerates application development productivity and innovation across different business applications, regardless of whether you use microservices or traditional monolithic applications. The platform enables customers to modernize their existing application landscapes, and it provides a cloud-neutral approach to achieve the same observability and lifecycle benefits whether deploying on-premises or on cloud infrastructure, with the ability to manage multi-cloud environments.
PRO TALK (CloudWorld): The Most Dangerous Demo Ever (Or How to Perform Real Time Sentiment Analysis on Audience Messages)
It’s common knowledge that everything that can go wrong in a live demo will. Join us in challenging Murphy’s law on multiple occasions as we build, from scratch, an application that performs real-time sentiment analysis on audience messages. All you need to participate is your phone’s QR reader! Come and learn about streaming (vs. batch), deploying ML models in real time, and automating MLOps in a data science project.
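The real-time scoring step can be sketched with a toy lexicon-based classifier. The session’s actual demo deploys a trained ML model, so the word lists and function names below are purely illustrative:

```python
# Toy lexicon-based sentiment scorer for a stream of audience messages.
# A real deployment would call a trained ML model; these word lists are
# illustrative stand-ins.
POSITIVE = {"great", "love", "awesome", "good", "cool"}
NEGATIVE = {"bad", "broken", "hate", "slow", "boring"}

def score(message: str) -> int:
    """+1 for each positive word, -1 for each negative word."""
    words = message.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def classify(message: str) -> str:
    s = score(message)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

if __name__ == "__main__":
    # Each incoming message is scored as it arrives (streaming, not batch).
    for msg in ["This demo is awesome", "my laptop is slow and broken"]:
        print(msg, "->", classify(msg))
```

In the streaming setting, each message is classified independently as it arrives, which is what distinguishes this from a batch job over a stored corpus.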
For a growing set of cloud-native use cases the centralized public cloud is not able to provide a high level of service because workloads are not run in proximity to end users. This situation is exacerbated as new communication technologies like 5G are introduced with a higher demand for high throughput and low latency as well as increased in-country data ownership requirements.
In this presentation I will introduce a distributed cloud built upon existing infrastructure in hundreds of data centers across the globe. This new computing paradigm converts local data centers’ existing capacity into modern PaaS offerings. It accomplishes this by enabling data centers, or any legacy infrastructure, that have private cloud or virtual data center offerings to add modern, cloud native PaaS services, such as managed Kubernetes, containers, and object storage.
DevOps has changed the way software is built, delivered, and operated in production. Features are pushed out faster than ever before, applications are more resilient, and improvements in the development pipeline have given engineers the power to own the complete delivery of their application.
Behind the improvements that we have seen from the advent of the DevOps movement are DevOps teams, cultural shifts, and tooling that was built to serve the engineers themselves. While the world has shifted left and a best-in-class standard has been established for software engineering, application security has remained stagnant.
The Koii Protocol tracks attention on the open internet to equitably reward valuable content, and the network of Koii Nodes provides faster, cheaper, and more rewarding ways to build cross-compatible, chain-agnostic decentralized apps.
In this session, we will discuss how developers, DBAs, and architects deploy database proxies to better manage SQL connections in microservice architectures by avoiding unnecessary latency. We review various proxies (open source and proprietary) in the market and discuss key features that accelerate SQL scale without code changes. A live demo will be included.
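The core idea behind a database proxy, multiplexing many client sessions over a small set of backend connections, can be sketched in a few lines. The classes below are hypothetical stand-ins; a real proxy (e.g. ProxySQL or RDS Proxy) does this at the database wire-protocol level:

```python
# Sketch of connection multiplexing: many short-lived "microservice"
# requests share a small pool of backend connections instead of each
# opening its own. Illustrative classes only.
import queue

class BackendConnection:
    opened = 0  # count how many real backend connections were created
    def __init__(self):
        BackendConnection.opened += 1

class ConnectionPool:
    def __init__(self, size: int):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(BackendConnection())

    def execute(self, sql: str) -> str:
        conn = self._pool.get()        # borrow a backend connection
        try:
            return f"ran {sql!r}"      # stand-in for a real round trip
        finally:
            self._pool.put(conn)       # return it for reuse

pool = ConnectionPool(size=5)
for _ in range(1000):                  # 1000 client requests...
    pool.execute("SELECT 1")
print(BackendConnection.opened)        # ...served by only 5 backend connections
```

The payoff for the database is exactly this ratio: a thousand ephemeral microservice sessions never translate into a thousand backend connections.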
We will explore the benefits of a managed IoT Cloud platform, HiveMQ Cloud, which lightens the burden of deployment, connection, messaging and monitoring of your enterprise-grade IoT devices and messaging brokers.
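As a flavor of what an MQTT broker such as HiveMQ does when routing device messages, here is a minimal sketch of MQTT topic-filter matching (`+` matches exactly one level, `#` matches all remaining levels). A production client would rely on the broker and an MQTT client library rather than code like this:

```python
# Minimal sketch of MQTT topic-filter matching, per the wildcard rules
# in the MQTT specification. Illustrative only; brokers implement this
# (and much more) natively.
def topic_matches(filter_: str, topic: str) -> bool:
    """'+' matches one topic level; '#' matches all remaining levels."""
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                    # multi-level wildcard: done
            return True
        if i >= len(t_parts):           # filter is longer than topic
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("sensors/+/temp", "sensors/kitchen/temp"))
```

A subscription to `sensors/#` would therefore receive messages from every device publishing under `sensors/`, which is how one broker-side subscription fans in data from a whole fleet.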
Getting to grips with cloud-native is as vital to your application evolution as breathing is to the body. However, with this term encompassing so many technologies, products and architectural styles, how do you decide which will be best for your own application? Diving into the anatomy and evolution of the human body can give us great insights into the journey you’ll need to make for your own application evolution. Join this session to find out why and discover what is critical for a healthy cloud-native system.
Are you worried about granting too much access to resources on your Kubernetes cluster? With the extensible framework of Kubernetes, there is scarcely a day without a new tool popping up. In order to ensure the tools, users, and applications have appropriate security policies, a streamlined onboarding process is required.
The onboarding process not only streamlines how securely we can grant access but also enables self-service capabilities improving the user experience.
In this workshop, attendees will get a good understanding of common pitfalls and how to avoid them by leveraging a role-based access control approach, pod security policies, admission controllers, policy enforcement through OPA, and more.
Wednesday, February 9, 2022
As of 2017, 90 percent of public cloud workloads ran on Linux. Linux allows organizations to make the most of their cloud-based environments and power their digital transformation strategies. Many of today’s most cutting-edge cloud-based applications and technologies run on Linux, making it a critical area of modern technology to secure.
According to a recent Linux Threat Report, most threats arise from systems running end-of-life versions of Linux distributions, including 44 percent from CentOS versions 7.4 to 7.9. In addition, 200 different vulnerabilities were targeted in Linux environments in just six months. This means attacks on Linux are likely taking advantage of outdated software with unpatched vulnerabilities.
This session will reveal steps you can take to ensure security across Linux-powered workloads and your cloud presence, and how to respond effectively to possible threats.
Join Aaron as he walks through the data, speaks to the threat, and highlights the top three mitigation strategies for all enterprises.
Attendees will learn:
• How to utilize free Linux-native tools, including iptables, seccomp, PaX, etc., for configuration assessment, vulnerability patching, and activity monitoring.
• Simple steps you can take to secure containers effectively.
• Best practices in AppSec, including testing, scanning, and open source software composition analysis (SCA).
Access control in AWS is done via IAM policies. Policies and permissions in IAM can get really complex really fast, leaving a ton of room for mistakes and misconfigurations. To put this in perspective:
- There are six types of IAM policies
- Policies can have a combination of Deny and Allow statements
- Each statement includes Actions, Resources, Principal, and Conditions
- Each statement can also have negations (exceptions), such as NotResource, or StringNotEquals in Conditions
- And many other details and tricks
It is best practice to configure least-privileged policies. However, getting it right is often more challenging than it looks. As a result, most policies are written with wildcards (*) in Actions, or Resources, or both, with no meaningful Conditions. It is also very difficult to understand the net effective permissions of a policy that contains both Allow and Deny statements with seemingly contradicting conditions and exceptions. AWS provides an IAM policy simulator, but it only helps to a limited extent: you have to specify the service(s), action(s), and/or resource(s), and you get a yes/no answer back telling you whether a policy grants permission for that known combination. It cannot answer the broader question of “given a policy, what resource permissions does it grant?” in general.
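The evaluation logic described above can be sketched in miniature: an explicit Deny wins over any Allow, and the default is deny. This toy evaluator handles only Action/Resource wildcards and ignores principals, conditions, and negations, so it is an illustration of the ordering rules, not a substitute for real IAM semantics:

```python
# Simplified IAM-style evaluation: explicit Deny beats Allow; the
# default is deny. Real IAM adds principals, Conditions, NotAction,
# NotResource, and cross-policy interactions.
import fnmatch

def is_allowed(policy: list, action: str, resource: str) -> bool:
    def matches(stmt) -> bool:
        return (any(fnmatch.fnmatch(action, a) for a in stmt["Action"])
                and any(fnmatch.fnmatch(resource, r) for r in stmt["Resource"]))

    if any(s["Effect"] == "Deny" and matches(s) for s in policy):
        return False                    # explicit Deny always wins
    return any(s["Effect"] == "Allow" and matches(s) for s in policy)

# A wildcard Allow narrowed by an explicit Deny (illustrative ARNs):
policy = [
    {"Effect": "Allow", "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::app-bucket/*"]},
    {"Effect": "Deny", "Action": ["s3:DeleteObject"], "Resource": ["*"]},
]
print(is_allowed(policy, "s3:GetObject", "arn:aws:s3:::app-bucket/key"))
```

Even in this stripped-down form, answering “what does this policy grant overall?” requires enumerating action/resource combinations, which is exactly the limitation of simulator-style yes/no checks.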
During this session, we will guide the audience on the important role DevSecOps plays in effectively and efficiently driving and supporting cybersecurity compliance for enterprises. Specifically, we will explain how achieving a cybersecurity audit can help businesses focus their efforts on driving revenue and sales. We’re experts on the topic: our team at Strike Graph takes customers from zero to 100 by helping their teams (like DevSecOps) manage and automate important audits effectively and efficiently.
We will share tips and insights to help you maximize efficiency for compliance, such as:
What is DevSecOps really?
Why is security operations a revenue issue?
What is the lifecycle and distribution of security activities?
How to scope and operationalize security from a technology executive perspective.
What are security controls and how do I avoid “Security Theater”?
How to automate procedures and drive DevSecOps towards effective security.
How to take credit for your security practices that drive towards valuable certifications.
How to manage your auditor as opposed to being managed by your auditor.
Cloud deployments offer the potential for almost infinite resources and flexible scalability. But there are so many options! It can be overwhelming to know which services are best for your use case. Building distributed systems which take advantage of in-memory computing only adds to the complexity.
During this session, we will introduce a new cloud service for the Apache Ignite in-memory computing platform and the best practices we followed in implementing it. We will look at the advantages and disadvantages of containers vs. VMs, the value of standardized configurations, how to size system resources based on the workload, and how we configured security and networking.
The software we write does not always work as smoothly as we would like. In order to know if something went wrong, understand the root cause, and fix the problem, we need to monitor our system and get alerts whenever issues pop up. There are many useful tools and practices for Kubernetes-based applications. As we adopt serverless architectures, can we continue to use the same practices? Unfortunately, the answer is no.
In this session, we will discuss:
- The differences between monitoring Kubernetes-based and serverless-based applications
- Best practices for serverless monitoring
- Methods to efficiently troubleshoot serverless-based applications
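One common troubleshooting practice for serverless functions is emitting a structured log line with a correlation id from every invocation, so that traces can be stitched together in the platform’s log store. A minimal sketch follows; the decorator, field names, and handler are illustrative, not a specific vendor’s API:

```python
# Sketch: wrap a serverless-style handler so each invocation emits a
# structured JSON log line with a correlation id, status, and duration.
# All names here are illustrative.
import functools
import json
import time
import uuid

def traced(handler):
    @functools.wraps(handler)
    def wrapper(event, context=None):
        request_id = event.get("request_id") or str(uuid.uuid4())
        start = time.time()
        status = "error"
        try:
            result = handler(event, context)
            status = "ok"
            return result
        finally:
            # One machine-parseable line per invocation, searchable by id.
            print(json.dumps({
                "request_id": request_id,
                "handler": handler.__name__,
                "status": status,
                "duration_ms": round((time.time() - start) * 1000, 2),
            }))
    return wrapper

@traced
def resize_image(event, context=None):   # hypothetical handler
    return {"resized": event["key"]}
```

Because each log line carries the same `request_id` that upstream services pass along, a single request can be followed across many short-lived function invocations, which is the part that Kubernetes-style node or pod monitoring cannot give you.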
Containers and Kubernetes allow for code portability across on-premises VMs, bare metal, or multiple cloud provider environments. Yet, despite this portability promise, developers may include configuration and application definitions that constrain or even eliminate application portability. In this session, Oleg Chunikhin, CTO of Kublr, describes best practices for “configuration as code” in a Kubernetes environment. He will demonstrate how a properly constructed containerized app can be deployed to both Amazon and Azure, and how Kubernetes objects, such as persistent volumes, ingress rules, and services, can be used to abstract away the infrastructure.
Why pay a lot of money for public and private cloud providers, when you already have your own, free, server farm? I will show you how you can utilize your organizational resources to run serverless functions for free, at scale, using open source serverless platforms (or your own platform), in a few easy steps. This is the next step in cloud/serverless evolution - see my article here: https://firstname.lastname@example.org/you-are-your-own-cloud-7c1cf7256ce2
Coined in 1994, “Zero-trust” has only recently come into focus as a powerful tool to combat the recent explosion of cybersecurity attacks. In short, the concept advocates a default posture to deny access under the assumption that nothing in the IT infrastructure can be fully secured. But how does Zero Trust relate to DevSecOps and how can developers work within a Zero Trust framework while still maintaining agility and flexibility? In this session, Anant Misra will guide developers through best practices for upholding Zero Trust principles throughout the application development lifecycle.
Attendees will learn:
1. What Zero Trust DevSecOps means, why it is important, and how it can be used to proactively combat cyberattacks
2. How to set up Zero Trust DevSecOps in their organization
3. How to create a holistic Zero Trust DevSecOps strategy that doesn’t slow down development or release timelines
Understanding what is happening with a solution that is built from multiple components can be challenging. While the solution space for monitoring and application log management is mature, organizations tend to end up with multiple overlapping tools in this space to meet different team needs. These tools also tend to aggregate first and then act, rather than considering events in a more granular way.
FluentD presents us with a means to simplify the monitoring landscape and address the hyper-distribution challenges that come with microservice solutions, allowing the different tools that need log data to each help in their own way.
In this session, we’ll explore the challenges of modern log management and how FluentD can make hybrid and multi-cloud solutions easy to monitor.
PRO TALK (CloudWorld): How an AI Driven Approach Reduces Cloud Cost and Makes Your Kubernetes Infrastructure Autonomous
Measuring and controlling costs in cloud environments is often complex. But it does not need to be. In this session, we will discuss how an AI-driven approach renders your cloud native applications on Kubernetes fully autonomous and rightsizes your cluster’s cloud compute resources at sub-minute intervals. We will go over an experiment with the deployment of an application and apply autonomous techniques that fiercely control and optimize the cluster.
We will discuss how to control and optimize the cost of your AWS EKS, Google GKE, and Azure AKS applications in minutes. You will learn about powerful, yet simple, strategies to rightsize your clusters: automatically scaling your nodes and pods up, and down to zero; smart selection of VM shapes; and automated use of spot instances.
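As a rough illustration of the rightsizing decision such an optimizer makes continuously, here is a toy heuristic that recommends a CPU request from observed usage samples. The percentile-plus-headroom rule and the numbers are assumptions made for this sketch, not the product’s actual algorithm:

```python
# Toy rightsizing heuristic: recommend a CPU request (in millicores)
# from observed usage, using a high percentile plus headroom.
# Illustrative only; an autonomous optimizer does this continuously
# per workload.
def rightsize(samples_millicores: list, percentile: float = 0.95,
              headroom: float = 1.15) -> int:
    ordered = sorted(samples_millicores)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return round(ordered[idx] * headroom)

# Ten one-minute usage samples, including one spike:
usage = [120, 130, 110, 500, 125, 140, 135, 115, 128, 132]
print(rightsize(usage))
```

The interesting trade-off is visible even in this toy: a single spike in the window dominates the recommendation, which is why real optimizers combine short rightsizing intervals with scale-to-zero and spot-instance decisions rather than relying on one static percentile.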
One of the tough challenges in adopting containers and Kubernetes across all enterprise applications is the availability of shared data services native to Kubernetes. Developers are often forced to make a trade-off between the flexibility that Kubernetes offers and the rich enterprise data management that comes with traditional IT. This session presents novel architecture principles for delivering a Kubernetes-native data store that addresses the needs of cloud native modern applications. The audience will learn about NetApp's shared file service solution, which delivers enterprise-grade data management to Kubernetes applications.