Tuesday, September 14, 2021
Kubernetes is great, but complex — and the learning curve is steep. Some might even call it frustrating. After all, application developers should be making great applications, not struggling with the complexities of Kubernetes. And operations teams shouldn't be biting their nails over developers having direct Kubernetes cluster write access.
Lagoon solves this (and more). It reduces the initial Kubernetes knowledge required of developers by providing easy-to-use application templates that involve just a couple of lines of code. It does this with fully automated deployment on every branch commit or pull request. This removes the need for developers to have write access to the Kubernetes cluster and makes your operations teams happier.
Lagoon is fully configurable and flexible, and supports your developers in learning (and loving!) Kubernetes.
Modern environments such as Kubernetes and serverless have made it easy to manage and scale microservices, but observability into these environments is still a challenge for DevOps teams. In this session, we will describe how to use user request flows to build intuition about your architecture and build resilient applications. We will also dive into correlating metrics, events, and logs using distributed tracing, and creating alerts for anomalies detected in your environments.
Observability is critical for any application. Polyglot microservices-based applications, hosted on ephemeral environments such as containers and serverless technologies, make it increasingly important to have the right tools, frameworks, and processes to understand application behavior, performance, and health. Done correctly, Observability helps reduce Mean Time To Resolution (MTTR) when troubleshooting complex problems, and can help improve customer satisfaction. Whatever your role - developer, cloud operator, or business person - you need to be able to visualize, inspect, and comprehend telemetry data. AWS offers a variety of services and options to help you gain comprehensive Observability of your applications, whatever the environment. In this session, we will show how you can implement Observability for your .NET applications with logs, metrics, and traces, unlocking your ability to build better systems and increase operational efficiency. You will learn AWS best practices for implementing Observability with services including CloudWatch, X-Ray, Amazon Managed Service for Prometheus (AMP), Amazon Managed Service for Grafana (AMG), and AWS Distro for OpenTelemetry.
Why are enterprise organizations making a move from on-premise solutions to completely cloud-native? What does that mean for improving, scaling, and securing their CI/CD pipelines? And what exactly is continuous packaging, anyway?
Join Cloudsmith’s Dan McKinney in this session as he answers all of these questions, helping attendees understand the true difference between cloud-hosted and cloud-native, how to get started with migrating to a cloud-native solution, and the true benefits of being entirely within the cloud.
Kubernetes has become a popular platform among application developers for building cloud-native applications. They value the flexibility to deploy anywhere, automate tasks, and expedite production.
At the same time, PostgreSQL has increasingly become the database of choice among application developers.
For anyone who has deployed, or is looking to deploy, Cloud Native PostgreSQL, the big questions are how to get connected and then how to leverage its built-in features.
Join us during this session as we talk about what's next after you have deployed a PostgreSQL cluster using EDB's Kubernetes Operator. Topics on the agenda include an overview of Operator patterns with stateful workloads, what makes up Cloud Native PostgreSQL, tools to benchmark Cloud Native PostgreSQL, what makes PostgreSQL a fit for Kubernetes, PostgreSQL's flexible data types, document databases vs. relational databases, imperative vs. declarative stateful infrastructure, databases in your CI/CD pipeline, and deploying your application anywhere.
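To give a taste of the declarative approach the session covers, a cluster deployed with EDB's operator is described as a single custom resource; the sketch below follows the Cloud Native PostgreSQL CRD as commonly documented, but treat the exact field names and API group as assumptions to verify against the operator's own docs:

```yaml
# Illustrative only: check your operator version for the exact schema.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: example-cluster
spec:
  instances: 3          # one primary plus two replicas
  storage:
    size: 1Gi           # persistent volume per instance
```

Applying a manifest like this is what "declarative stateful infrastructure" means in practice: the operator reconciles the cluster toward the declared state instead of you running imperative setup commands.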
Persistent storage is one of the most difficult challenges to solve for Kubernetes workloads, especially when integrating with continuous deployment solutions. This session will provide the audience with an overview of how to address persistent storage for stateful workloads the Kubernetes way, and how to operationalize it with a common CD practice like GitOps.
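The "Kubernetes way" referenced above generally means declaring storage as a PersistentVolumeClaim and keeping that manifest in Git so a GitOps controller can reconcile it; a minimal sketch (the storage class name is an assumption and varies by cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard   # assumption: substitute your cluster's class
  resources:
    requests:
      storage: 10Gi
```

Because the claim lives in Git alongside the workload that mounts it, the same CD pipeline that rolls out application changes also manages the storage declaration.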
The data revolution is upon us, and, well, has been for several years. It comes as no surprise that as application technology has evolved to keep up with the ever increasing expectations of users, the data platforms and solutions have had to as well. A decade or so ago we thought all our problems had been solved with a new player in the game, NoSQL. But, spoiler alert, they weren't.
In this session we're going to dive into a brief history of data. We'll examine its humble beginnings, where we stand today, and how modern relational databases will shape the cloud landscape going forward. Throughout the journey you'll gain an understanding of how SQL and relational databases have adapted to pave the road for a truly bright future.
Your company’s “digital transformation” will be driven by new application designs and methods, new technology stacks, and new processes. To master it, and deliver next-generation services through it, massively complex sets of signals and data need to be leveraged, processed, and acted on. Developers need integrated data and insights through that noise, while being able to leverage their tools of choice. All of this must be managed, even in spite of massive rates of change and innovation. The challenge is determining who or what is going to do that work, where the work gets done, and how the business benefits from it. This session focuses on methods to overcome the complexity of digital transformation in the cloud and drive operational maturity despite constant change across applications, digital services, and products.
Open GitOps is crystallizing as a standard, so how do you actually do it? Codefresh open source engineers recently launched Argo CD Autopilot, an opinionated way to manage applications across environments using GitOps at scale. We’ll bootstrap two clusters, deploy our apps, and promote a change from staging to prod. Easy peasy.
Application performance metrics are a top priority for developers and engineering teams, as they have to ensure their applications are running properly at all times, handling high fluctuations in demand and scale, all while keeping in mind the rising and changing cloud costs that come with the territory.
In this session, Ezequiel will go over the internals of profiling in production and explain how this practice provides teams with deeper visibility into their workloads at scale, enabling them to optimize performance. He'll then go over a real-life use-case of how profiling our own workloads, managing millions of events per second, improved our CPU utilization, and reduced it from 80% to 15%.
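The session presumably relies on production-grade continuous profilers; as a toy stand-in that shows what "visibility into where CPU time goes" looks like, here is a sketch using Python's built-in cProfile (function names here are invented for the example):

```python
import cProfile
import io
import pstats

def busy(n):
    """A deliberately CPU-heavy function to profile."""
    return sum(i * i for i in range(n))

# Collect a profile of one call, the same idea a continuous
# profiler applies to live workloads by sampling.
profiler = cProfile.Profile()
profiler.enable()
busy(100_000)
profiler.disable()

# Summarize where time went, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Reading a report like this is how teams find the hot functions whose optimization drives CPU-utilization drops of the kind described in the talk.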
OPEN TALK: Unlock Cassandra Data for Application Developers Using GraphQL and REST APIs with Stargate.IO
Cassandra is an incredibly powerful, scalable and distributed open source database system. Companies with extremely high traffic use it to provide their users with consistent uptime, blazing speed, and a solid framework. However, many developers find Cassandra to be challenging because the configuration can be complex and learning a new query language (CQL) is something they just don't have time to do. Stargate is an OSS multi-model API Data Layer for cloud native databases which sits on top of Cassandra and provides HTTP interfaces to your data - it provides a REST API, a GraphQL API, and a document-oriented Schemaless API. You can install it on top of your own Cassandra instance and participate in the community. During this presentation we will demonstrate and share the purpose, capabilities and internals of Stargate. We also give a working sample as a docker-ready configuration file.
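To make the multi-API idea concrete, here is a small Python sketch that builds the kinds of requests Stargate exposes; the `/v2/keyspaces/...` path shape follows Stargate's REST API documentation as I understand it, and the keyspace, table, and field names are invented, so verify against your own deployment:

```python
def rest_row_url(base: str, keyspace: str, table: str, primary_key: str) -> str:
    """Build a Stargate REST v2 URL for fetching a row by primary key."""
    return f"{base}/v2/keyspaces/{keyspace}/{table}/{primary_key}"

def graphql_query(table: str, fields: list[str]) -> str:
    """Build a query string for Stargate's generated GraphQL schema,
    which wraps row results in a 'values' list."""
    field_list = " ".join(fields)
    return f"query {{ {table} {{ values {{ {field_list} }} }} }}"

# Hypothetical keyspace/table, purely for illustration.
url = rest_row_url("http://localhost:8082", "library", "books", "moby-dick")
query = graphql_query("books", ["title", "author"])
```

The point of the sketch: the same Cassandra table becomes reachable over plain HTTP, so application developers never have to touch CQL directly.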
You’ve heard of Serverless but you really aren’t sure what it is about. Isn’t serverless just another word for cloud computing? Isn’t it just “Other People’s Computers”? Or is it the most efficient way to develop applications, letting the developer focus on their own priorities instead of anything to do with the administration of a server? Cloud providers would have you believe it means letting them take care of the platform side. But the idea of Serverless extends beyond the platform to encompass everything from microservices to databases, from development to operation, from storage capacity to the network. This talk is geared towards those curious about this new Serverless technology and what opportunities arise by embracing the latest movement.
When you think of authorization control and policy enforcement, do you put together a scavenger hunt of resources needed to figure out what should have access, then what actually does have access? Is there one team or one person in your organization holding all the policy information needed to secure your cloud-native application in an excel spreadsheet or a wiki somewhere? Then is this information hard-coded into each layer between your microservices?
OPA (Open Policy Agent) is a graduated CNCF (Cloud Native Computing Foundation) project that exists to simplify and accelerate application development by decoupling policy decisions from enforcement. It is already battle-tested and proven by organizations such as Netflix, Goldman Sachs, Pinterest, and Atlassian, which use OPA for distributed policy enforcement across a range of languages, execution environments, and protocols.
During this talk we will cover some common authorization use cases, showing how to utilize OPA's decoupled nature to write simple policies that can be easily enforced by your system.
Common Use Cases:
* Restrict API access during blackout periods
* Grant SSH and sudo access to on-call engineers
* Require test certification for workloads deployed to production environments
You should attend this talk if you have an interest in learning how to enforce complex policies at scale with OPA, and without introducing significant latency or impacting availability.
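In production these rules would be written in OPA's Rego language and queried over its decision API; as a language-neutral illustration of the decoupling idea, here is the first use case (blackout windows) expressed as a standalone decision function kept separate from the enforcement point. The window times and function names are invented for the example:

```python
from datetime import datetime

# Policy data lives apart from application code, just as OPA keeps
# Rego policies separate from the services that query them.
BLACKOUT_WINDOWS = [
    # (start_hour, end_hour) in UTC during which API writes are frozen
    (22, 23),
]

def allow_api_write(now: datetime) -> bool:
    """Decision function: deny writes that fall inside a blackout window."""
    return not any(start <= now.hour <= end for start, end in BLACKOUT_WINDOWS)

def handle_write(now: datetime) -> str:
    """Enforcement point: the service only asks for a decision."""
    return "accepted" if allow_api_write(now) else "rejected: blackout period"
```

Because the service only consumes a yes/no decision, the policy itself can change (new windows, new conditions) without redeploying any microservice, which is exactly the property OPA provides at scale.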
As companies transition to hybrid cloud, they are faced with complex decisions about choosing a strategic cloud partner who can support their growth at an affordable cost. Now more than ever, buyers are highly educated about the technology they need to scale their business. That’s why many value a partner who will make decisions that are right for their customers; a partner who’s invested in supporting their growth.
We will discuss how Vultr, the largest privately-owned global cloud provider outside of the Big 3 clouds, supporting over 1.3 million customers, believes developers and businesses should feel the freedom of the cloud, and be empowered to do what they do best: develop and build a company.
You know Datalogics for our Adobe-powered PDF SDKs and Command-line applications, but we’ve brought that same dynamic document technology to the development space you need it most: the Cloud. In this session, we’ll share how our tried-and-true solutions are supporting web application and service development like never before. What can Datalogics do for your Cloud development projects, ideas, and goals? Join us and find out!
Join us as Pau Labarta Bajo, a Data Scientist and ML Engineer with over eight years of experience, shows us how to break multi-million dollar computer vision models using adversarial examples.
Computer vision models based on neural networks have become so good in the last 10 years that they nowadays serve as the “eyes” behind many mission-critical systems, like self-driving cars, automatic video surveillance, or face recognition systems in airports. What you probably do not know is that there are easy methods to fool them, forcing them to produce wrong predictions. These methods are theoretically simple and computationally feasible, and they open the door to potentially critical security issues.
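One of the simplest such methods is the Fast Gradient Sign Method (FGSM): nudge each input feature by a small epsilon in the direction that increases the model's loss. Real attacks target deep networks; the tiny linear model below (with made-up numbers) is just to show the mechanics:

```python
def sign(x: float) -> int:
    return (x > 0) - (x < 0)

def score(x: list[float], weights: list[float]) -> float:
    """A toy linear model: the dot product of input and weights."""
    return sum(xi * wi for xi, wi in zip(x, weights))

def fgsm_perturb(x: list[float], weights: list[float], epsilon: float) -> list[float]:
    """For a linear score w.x, the gradient w.r.t. x is just w, so
    stepping each feature by epsilon * sign(w_i) pushes the score up
    as hard as an epsilon-bounded perturbation can."""
    return [xi + epsilon * sign(wi) for xi, wi in zip(x, weights)]

x = [0.5, -0.2, 0.1]       # hypothetical input
w = [1.0, -2.0, 0.5]       # hypothetical weights
x_adv = fgsm_perturb(x, w, epsilon=0.1)
```

Each feature moves by at most 0.1, a perturbation that can be imperceptible for images, yet the score shifts systematically; with deep networks the gradient comes from backpropagation instead of being the weight vector itself.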
This session will include information about how popular open source has become and how it is driving innovation for enterprises in today's market. Open source allows enterprises to get value to market faster, and it has ensured the survival of many businesses. But open source software (OSS) has recently been an attack vector and focus for cybercrime syndicates. How can you protect yourself? What are you up against? We will also cover how the Struts2 vulnerability, in a common Java OSS component, led to the attack and breach of several financial institutions. In this workshop, we will set up the Struts2 application and walk through not only how to exploit it, but also how to protect yourself from this attack.
As cloud threats continue to rise, understanding an adversary's tactics, techniques and procedures (TTPs) is critical to strengthening cloud security. How can you pull together a unified and simple approach to speed up detection and response for your SOC team?
In this session, we will:
-Dive into a comprehensive view of the MITRE ATT&CK for Cloud Matrix
-Explore real attack scenarios and best practices to detect them
-Advise on how to establish a unified threat detection strategy for cloud and containers
-Share how open source tools like Falco provide IDS capabilities for containers
The CNCF project OpenTelemetry is increasingly becoming the standard for getting reliable and consistent application and machine data to your monitoring and observability tools. Many organizations are realizing the power of decoupling their metric, log, trace, and span data collection from their monitoring stack, giving them more freedom and capability to improve the observability of their applications, and allowing them to be more consistent and confident in supporting those applications. In this session you will learn:
1. What is OpenTelemetry?
2. What is the architecture of the OpenTelemetry (OTel) Collector?
3. How do you build a strategy around OpenTelemetry?
4. How do you get started with OTel?
Standardizing on OpenTelemetry makes your application more observable, and helps your organization implement better observability and monitoring practices.
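To ground the Collector architecture point, a minimal OTel Collector pipeline wires receivers through processors to exporters; the component choices below are illustrative, so consult the Collector documentation for the components your stack actually needs:

```yaml
receivers:
  otlp:                 # accept OTLP data over gRPC and HTTP
    protocols:
      grpc:
      http:

processors:
  batch:                # batch telemetry before export

exporters:
  logging:              # illustrative sink; swap for your backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```

Swapping the exporter is all it takes to point the same instrumented applications at a different monitoring backend, which is the decoupling benefit described above.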
In today’s fast-paced business and technology environments, an organization should never find itself boxed in by limited options for adapting to changing requirements or improving its workload strategy.
The Five Pillars of the AWS Well-Architected Framework—Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization—provide a way to consistently measure operations and architectures, identify areas for improvement, and respond to evolving requirements or external issues. The goal of the framework is to help architects learn the process of making informed, value-add decisions that reflect the organization’s priorities.
In this Q&A session, Excellarate’s Mike Watson hosts Hamdy Eed, an AWS Senior Solution Architect, for a lively discussion about putting the pillars into practice. They’ll explore how to navigate tradeoffs, a crucial function of the framework in guiding organizations through the process of shifting focus and priority among the pillars as needed. And Mike will ask Hamdy to talk about the latest tools and innovations available in the market to augment the implementation of each pillar.
Walk away with a better understanding of how the AWS Well-Architected Framework will help you learn how to:
~Design and implement scalable architectures that align with AWS best practices.
~Effectively utilize computing resources to maintain efficiency when system requirements change or technologies evolve.
~Expand options with a structure that weighs priorities and adds business context when evaluating the trade-offs of each decision.
Wednesday, September 15, 2021
In 2021, your entire tech stack is likely in the Cloud - so why aren’t your software packages?
Whether you’re currently on-premise, have your own in-house solution or have a bit of a hybrid set up, join us in this session to explore why the future is cloud-native, what the benefits of this are over cloud-hosted, and how to easily set up a secure, cloud-native software pipeline in 60 seconds.
Many modern video games are constantly evolving post-release. New maps, game modes, and game balancing adjustments are rolled out, often on a weekly basis. This continuous iteration to improve player engagement and satisfaction requires data-driven decision making based on events and telemetry captured during gameplay, and from community forums and discussions.
In this session you will learn how OpenShift Streams for Apache Kafka and Kafka Streams can be used to analyze real-time events and telemetry reported by a game server, using a practical example that encourages audience participation. Specifically you’ll learn how to:
Provision Kafka clusters on OpenShift Streams for Apache Kafka.
Develop a Java application that uses Kafka Streams and Quarkus to process event data.
Deploy the application locally or on OpenShift, and connect it to your OpenShift Streams for Apache Kafka cluster.
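The session builds this with Kafka Streams and Quarkus in Java; as a language-agnostic sketch of the kind of computation involved, here is a tumbling-window aggregation of game telemetry in plain Python (the event shape and field names are invented for illustration):

```python
from collections import defaultdict

def kills_per_map(events, window_seconds=60):
    """Count 'player_kill' events in tumbling windows keyed by map name,
    mimicking the windowed count a Kafka Streams topology would maintain."""
    windows = defaultdict(int)
    for event in events:
        if event["type"] != "player_kill":
            continue
        # Align each event to the start of its window.
        window_start = event["ts"] - (event["ts"] % window_seconds)
        windows[(event["map"], window_start)] += 1
    return dict(windows)

# Hypothetical telemetry stream from a game server.
events = [
    {"type": "player_kill", "map": "dustbowl", "ts": 5},
    {"type": "player_kill", "map": "dustbowl", "ts": 42},
    {"type": "chat_message", "map": "dustbowl", "ts": 50},
    {"type": "player_kill", "map": "dustbowl", "ts": 75},
]
counts = kills_per_map(events)
```

In the real pipeline the events arrive continuously on a Kafka topic and the aggregation state is managed by Kafka Streams, but the per-window counting logic is the same shape.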
OPEN TALK: Synthetic Monitoring and Single Page Apps: How to Increase Control, Visibility, and Performance
For web developers or SREs leveraging Single Page Applications, client-side rendering can create challenges of control, visibility, and understanding user experience. Modern synthetic monitoring promises deeper understanding and visibility into user experience in pre-production, and after deployment. Join Developer and Technology Advocates Tetiana Kelly and Scott Mason, as they discuss how they leveraged synthetic monitoring to identify performance improvement opportunities for Splunk’s global SPA, The Quest for Observability. From measuring user experiences across geographies, to compression and image optimization opportunities, this talk provides best practices and lessons learned to help engineers deploy better SPAs.
In the course of your day as an SRE, DevOps engineer, or SysAdmin, your knowledge and expertise are in high demand. You can’t do every task that every person in your org needs from you without the help of comprehensive automation.
Automation can be tricky. Some systems aren’t built with automation in mind, but assume that a human being will be there to keep an eye on things and fix errors on the fly, and we can’t be everywhere when there’s too much to do.
Plus, you want to provide access to automation for the right folks and keep a record of when the tools were used.
In this workshop, we’ll cover some things to keep in mind when you’re building out your automation library, characteristics of good automation, and give you a look at PagerDuty Rundeck, a platform that will help you share your expertise with other folks in your organization.
Build automation that works for you and gives you your time back!
Most organizations considering open source and open core cloud technologies understand they need to rigorously evaluate the software’s licensing terms and gauge the long-term health of its community and ecosystem. What still happens less frequently – but is just as crucial to these risk assessments – is developing a thorough understanding of the business models governing the commercial organizations attached to each solution being considered. You must discern the underlying motivations of the vendors or technology providers you depend on to deliver or support open source data-layer software (as well as those vendors with strong influence over its development and maintenance). By acutely understanding these incentives, you can identify if, where, and how they may map to possible risks to your enterprise’s adoption and ongoing open source implementation. Don’t limit the assessment to licenses and community health -- although both are still very key variables.
This session will discuss specifics on what you need to look for and consider when vetting open source technologies in the cloud as offered by:
-- Businesses using OSS as the foundation of their own intellectual property
-- Businesses that maintain total control over the OSS they offer
-- Major cloud providers
Today's Kubernetes based applications need data services which can meet them where they are: in the cloud. But which ones? It is quickly becoming apparent that a presence in multiple, public clouds is necessary to maintain strategic data agility. However, keeping data highly-available and synchronized across multiple providers can be challenging. In this presentation we'll discuss Apache Cassandra's best-in-class approach to solve this problem, and how it can be leveraged to support multiple distributed use cases.
Distributed systems, microservices, containers/schedulers, continuous delivery … we’ve been through one paradigm shift after another when it comes to architecture, but when it comes to observability we’re still using crufty old logging and metrics and dashboards that haven't seen real innovation since the LAMP stack era. And guess what? These tools completely fall apart past a certain level of complexity. Let’s dig into some of the deep technical reasons why this is happening and talk about some newer approaches to debugging complex systems when every single request into a system must be identifiable and aggregatable (e.g. Honeycomb, distributed tracing). Why are events better than metrics? What is cardinality and why does it matter? And what is the difference between monitoring and observability, anyhow? Come find out.
Transformer-based models have been dominant in the NLP landscape due to their state of the art performance on a wide variety of benchmarks and tasks. However, deploying such large models at scale can be quite difficult and costly. Learn about the techniques that we've utilized at Stream to overcome these challenges and moderate real-time chat messages efficiently on relatively inexpensive hardware. While this talk will focus on BERT and its offshoots, many of these techniques can also be applied to other models.
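The talk's specific optimizations aren't detailed here, but one widely used technique for serving large transformers cheaply is post-training quantization: mapping float weights onto 8-bit integers via a scale and zero point. A toy pure-Python sketch of the arithmetic (real deployments use framework tooling such as PyTorch quantization or ONNX Runtime):

```python
def quantize(values, num_bits=8):
    """Affine quantization: map floats onto the int range [0, 2^bits - 1]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # guard a constant tensor
    zero_point = lo
    q = [round((v - zero_point) / scale) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [qi * scale + zero_point for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]  # hypothetical weight values
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
```

Storing one byte per weight instead of four shrinks memory and bandwidth roughly 4x at the cost of a small, bounded rounding error, which is why quantization is a go-to lever for inexpensive inference hardware.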
OPEN TALK: Cybersecurity at a Global Scale: Addressing Next Generation Supply Chain Issues in Open Source Ecosystems
The landscape of cybersecurity is rapidly changing. Traditional, or “legacy,” attacks used to target open source code downstream, running in production, but the next generation of attacks is manufactured upstream: typo-squatting campaigns, malicious code injection directly at the source, and tool tampering in the development stream. All of these pose risks to everyone from the biggest corporations to the smallest hobbyist projects, as we all rely on the same open source ecosystems to do our work. The reality of the modern development landscape is that in a world of continuous integration and delivery, we have to start thinking about continuous security in open source. This talk will describe a security taxonomy that offers the ability to detect, report, and resolve vulnerabilities and malware attacks before they make their way into our applications, and to provide actionable recommendations when new vulnerabilities in distributions are surfaced in open source repositories.