Containers & Kubernetes Lifecycle
Tuesday, September 14, 2021
Developers have always expected databases to work out of the box, but historically the opposite has been true.
With the rise of Kubernetes StatefulSets and CRDs, we started thinking about running databases on Kubernetes. But why should I do that in the first place? How hard is it? What are the challenges? Is it production-ready yet? All of these questions will be answered during a live demo in which we deploy a database, deploy an operator, fail nodes, and scale up and down with nearly no manual intervention.
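The demo flow described above can be sketched with generic kubectl commands. This is only an illustration: the manifest file names, the custom resource kind `dbcluster`, and the cluster name are placeholders, not the session's actual operator or files.

```shell
# Hypothetical sketch of the demo flow; all names are placeholders.

# 1. Deploy the operator (its CRDs plus the controller).
kubectl apply -f operator.yaml

# 2. Deploy a database cluster as a custom resource.
kubectl apply -f db-cluster.yaml

# 3. Simulate a node failure by deleting a pod; the operator heals it.
kubectl delete pod db-cluster-1

# 4. Scale up or down declaratively by patching the custom resource.
kubectl patch dbcluster db-cluster --type merge -p '{"spec":{"instances":3}}'
```

The point of the operator pattern is that steps 3 and 4 need no manual database administration: the controller reconciles the declared state.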
Kubernetes is great, but complex — and the learning curve is steep. Some might even call it frustrating. After all, application developers should be making great applications, not struggling with the complexities of Kubernetes. And operations teams shouldn't be biting their nails over developers having direct Kubernetes cluster write access.
Lagoon solves this (and more). It reduces the Kubernetes knowledge initially required of developers by providing easy-to-use application templates that involve just a couple of lines of code. It does this with fully automated deployment on every branch commit or pull request. This removes the need for developers to have write access to the Kubernetes cluster and makes your operations teams happier.
Lagoon is fully configurable and flexible, and supports your developers in learning (and loving!) Kubernetes.
When you move to Kubernetes and want to enable a GitOps/DevOps/AppOps workflow, your inner development loop becomes more complicated. It gains several extra steps: building a container image from the application and its dependencies, running a quick sanity test of that image, pushing it to an external registry, and pulling it into the remote Kubernetes cluster. You might also need to externalize some configuration using Kubernetes features such as ConfigMaps and Secrets. And of course, you need to figure out how to write the YAML files for the Kubernetes manifests and resources. In the end, the inner loop takes two to three times longer than traditional inner loop development without Kubernetes.
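The extra inner-loop steps listed above can be sketched as a short sequence of commands. The image name, registry, and manifest file are illustrative placeholders, not part of the talk itself.

```shell
# Sketch of the extra inner-loop steps; names are placeholders.

# Build a container image from the application and its dependencies.
docker build -t registry.example.com/myapp:dev .

# Quick sanity test: run the image locally before pushing.
docker run --rm -p 8080:8080 registry.example.com/myapp:dev

# Push to the external registry so the remote cluster can pull it.
docker push registry.example.com/myapp:dev

# Externalize configuration, then deploy to the remote cluster.
kubectl create configmap myapp-config --from-literal=GREETING=hello
kubectl apply -f deployment.yaml
```

Every iteration repeats most of these steps, which is exactly why this loop is slower than a purely local edit-compile-run cycle.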
This talk, with a demo, showcases how developers can keep the same experience and accelerate their inner loop development from the local environment to the remote container environment, Kubernetes, using Quarkus. IT leaders will also learn how they can help their development teams make inner loop development quicker.
Kubernetes has become a popular platform among application developers for building cloud-native applications. They value the flexibility to deploy anywhere, automate tasks, and expedite production.
At the same time, PostgreSQL has increasingly become the database of choice among application developers.
For anyone who has deployed or is looking to deploy Cloud Native PostgreSQL, the big questions are: how do I get connected, and how do I leverage the built-in features?
Join us during this session as we talk about what's next after you have deployed a PostgreSQL cluster using EDB's Kubernetes Operator. Topics on the agenda include: an overview of Operator patterns for stateful workloads, what makes up Cloud Native PostgreSQL, tools to benchmark Cloud Native PostgreSQL, what makes PostgreSQL a fit for Kubernetes, PostgreSQL's flexible data types, document databases vs. relational databases, imperative vs. declarative stateful infrastructure, databases in your CI/CD pipeline, and deploying your application anywhere.
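On the "how do I get connected" question, one minimal pattern is to port-forward the cluster's read-write service and connect with psql. The service name `my-cluster-rw` and the credentials secret `my-cluster-app` below are assumptions based on common operator naming conventions, not the session's exact resources.

```shell
# Assumed names: my-cluster-rw (read-write service), my-cluster-app
# (application credentials secret). Adjust to your cluster.

# Forward the primary's port to localhost.
kubectl port-forward svc/my-cluster-rw 5432:5432 &

# Pull the password from the secret and open a psql session.
PGPASSWORD=$(kubectl get secret my-cluster-app \
  -o jsonpath='{.data.password}' | base64 -d) \
  psql -h 127.0.0.1 -U app -d app -c 'SELECT version();'
```

Inside the cluster, applications would instead connect directly to the service DNS name, with no port-forwarding needed.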
Did you know that Oracle supports running your Oracle Database as a Docker container?
In this session, you will see how easy it is by following a step-by-step tutorial. You will also see the different deployment choices available to you, so you can choose the model that works best for your use case.
The session will include live demonstrations.
After the rush to take advantage of cloud native application development and tools like Kubernetes, DevOps teams now have a lot more to think about. In many cases, DevOps adopted early continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, and is now attempting to apply them in cloud native scenarios where they are no longer the fit they once were. Cloud native pulls the developer down to infrastructure-related operations, and current CD tools cannot bring back the application-level context that developers had before moving to a microservices architecture, adding more complexity to the development workflow and to the observability of applications post-deployment. DevOps teams also face new challenges in application policy management, especially in closely regulated industries, as they adapt their processes to establish trust and security in cloud native environments. At the same time, DevOps needs to reevaluate its approaches to automation and its strategies for eliminating human error, as cloud and Kubernetes deployments have ushered in a return of very manual and tedious efforts.
This session digs into the details of three cloud native 2.0 strategies that DevOps teams ought to consider sooner rather than later to stay on top of a fast-changing ecosystem: 1) how to build CI/CD pipelines with greater interoperability and composability, 2) how and why to harness application policy management, and 3) how to balance automation and audits.
Wednesday, September 15, 2021
Today's Kubernetes-based applications need data services that can meet them where they are: in the cloud. But which ones? It is quickly becoming apparent that a presence in multiple public clouds is necessary to maintain strategic data agility. However, keeping data highly available and synchronized across multiple providers can be challenging. In this presentation we'll discuss Apache Cassandra's best-in-class approach to solving this problem, and how it can be leveraged to support multiple distributed use cases.
You can find talks demonstrating how individual security tools work in isolation, but what about a closer-to-life scenario showing how to introduce security throughout development, deployment, and runtime?
This is the demo that will finally fill that gap! Attendees will take away practical knowledge and get a head start on introducing security everywhere in their SDLC.
We'll see a hands-on demonstration of how to use a variety of tools under the CNCF umbrella to dramatically enhance the security of any environment:
- in-toto will help us ensure the integrity of our software from development to deployment
- Kyverno will allow us to define policies in our environment to guarantee compliance
- Notary will let us sign our Docker images
- And finally, Falco will notify us if any threats are detected at runtime in our Kubernetes cluster
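As a taste of the policy side of this stack, here is a minimal Kyverno policy sketch. It assumes Kyverno is already installed in the cluster; the policy name and the `team` label are invented for illustration and are not from the talk itself.

```shell
# Minimal Kyverno sketch (assumes Kyverno is installed): reject any
# Pod that does not carry a "team" label.
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: enforce
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "All Pods must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"
EOF
```

With `validationFailureAction: enforce`, non-compliant Pods are rejected at admission time rather than merely reported, which is how policy guarantees compliance instead of just auditing it.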
This talk covers all the important steps it takes to run a database on Kubernetes in production. We will answer the questions: Can you do it without operators? Can you work with Kubernetes primitives only to run a production-grade database, and then a DBaaS?
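To make the "primitives only" question concrete, here is a sketch of what the starting point looks like: a bare StatefulSet for PostgreSQL with a persistent volume per replica. The names, image tag, and storage size are illustrative, and a production setup needs far more (backups, failover, connection pooling) — which is exactly the gap the talk examines.

```shell
# Primitives only: a minimal PostgreSQL StatefulSet, no operator.
# Illustrative names and sizes; not production-ready as-is.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg
spec:
  serviceName: pg
  replicas: 1
  selector:
    matchLabels:
      app: pg
  template:
    metadata:
      labels:
        app: pg
    spec:
      containers:
        - name: postgres
          image: postgres:13
          env:
            - name: POSTGRES_PASSWORD
              value: example   # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
EOF
```

The StatefulSet gives you stable identity and per-replica storage, but everything above that layer — replication, promotion, backup scheduling — is left to you, which is precisely what operators automate.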
Roman Stanek, founder and CEO of GoodData, has founded three SaaS companies over the past 22 years. His first two companies, NetBeans and Systinet, both ended in successful exits, including a sale to Sun Microsystems and one of the most successful acquisitions in the web services/SOA space. GoodData is currently experiencing rapid growth, including a 33% expansion across the entire customer base in Q4 2020, a 9x increase in the number of self-service accounts in 2020, and the signing of its largest expansion deal yet, a $14 million contract — all critical metrics as GoodData continues to surge and provide customers with high-quality data analytics and insights.
Until now, there has been little market pressure for BI to adapt to modern DevOps tooling and best practices like CI/CD, DataOps, GitOps, and others. Popular BI tools often offer a "real-time-BI-optimized" architecture that removes the analytical storage layer to reduce ETL latencies. Unfortunately, in most cases, the analytical capabilities are severely limited in this real-time-optimized mode. Roman and the GoodData team just released GoodData Cloud Native after two years of engineering work — the first solution to deliver enterprise-grade analytics as a microservices-based stack. Roman can speak to how to identify not just today's market need but tomorrow's — and how to turn those insights into the next phase of your roadmap. For GoodData, that meant putting analytics on equal footing with core business operations like app dev, and committing to a headless BI structure that delivers scalable, real-time data to everyone who needs it.