Microservices Architecture & Deployment
Tuesday, September 14, 2021
Modern environments such as Kubernetes and serverless have made it easy to manage and scale microservices, but observability into these environments is still a challenge for DevOps teams. In this session, we will describe how to use user request flows to build intuition about your architecture and build resilient applications. We will also dive into correlating metrics, events, & logs using distributed tracing, and creating alerts for anomalies detected in your environments.
Too often we encounter the idea that software architecture is an esoteric concept that only the chosen ones, and only at the right time, are allowed to discuss. Well, how about a little change of perspective? With software development and users' needs evolving so fast, we can't afford the luxury of rewriting systems from scratch just because teams fail to understand what they are building. Today's software developers are tomorrow's architects. We must challenge them to step away from the IDE and understand how the architecture evolves, in order to create a common and stable ground in terms of quality, performance, reliability, and scalability. At the same time, software architects need to step away from the abstractions and stay up to date with the reality of project development. This session revolves around finding the right ways of intertwining up-front architecture, API design, and coding while maintaining a continuous focus on architecture evolution.
Kubernetes has become a popular platform among application developers for building cloud-native applications. They value the flexibility to deploy anywhere, automate tasks, and expedite production.
At the same time, PostgreSQL has increasingly become the database of choice among application developers.
For anyone who has deployed or is looking to deploy Cloud Native PostgreSQL, the big questions are how to get connected and then how to leverage the built-in features.
Join us during this session as we talk about what's next after you have deployed a PostgreSQL cluster using EDB's Kubernetes Operator. Topics on the agenda include an overview of Operator patterns with stateful workloads, what makes up Cloud Native PostgreSQL, tools to benchmark Cloud Native PostgreSQL, what makes PostgreSQL a fit for Kubernetes, PostgreSQL's flexible data types, document databases vs. relational databases, imperative vs. declarative stateful infrastructure, databases in your CI/CD pipeline, and deploying your application anywhere.
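Declarative stateful infrastructure, one of the agenda topics above, is easiest to see in the operator's own API: rather than scripting provisioning and replication imperatively, you declare a desired cluster and let the operator reconcile it. The following is only a rough sketch of a minimal Cluster manifest; the exact apiVersion and schema depend on the operator version, so treat it as illustrative and consult EDB's documentation:

```yaml
# Illustrative only — verify apiVersion and fields against your operator's docs.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: example-cluster
spec:
  instances: 3        # one primary plus two replicas, managed by the operator
  storage:
    size: 1Gi
```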
Wix operates at a huge scale of traffic: more than 500 billion HTTP requests and more than 1.5 billion Kafka business events per day.
This talk goes through 4 caching patterns that are used by Wix's 1,500 microservices in order to provide the best experience for Wix users, along with saving costs and increasing availability.
A cache reduces latency by avoiding the need for a costly query to a DB, an HTTP request to another Wix service, or a call to a 3rd-party service. It also reduces the scale needed to serve these costly requests.
It improves reliability as well, by making sure some data can be returned even if the aforementioned DB or 3rd-party service is currently unavailable.
The patterns include:
* Configuration Data Cache - persisted locally or to S3
* HTTP Reverse Proxy Caching - using Varnish Cache
* Kafka topic based 0-latency Cache - utilizing compacted topics
* (Dynamo)DB+CDC based Cache and more - for unlimited capacity with continuously updating LRU cache on top
Each pattern is optimal for different use cases, but all of them reduce costs and improve performance and resilience.
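The latency and reliability properties described above can be sketched with a minimal read-through cache that falls back to stale data when the origin is unavailable. All names here are illustrative, not Wix's actual implementation:

```python
import time

class ReadThroughCache:
    """Minimal read-through cache with stale-on-error fallback (sketch only;
    a production cache would add eviction, metrics, and invalidation)."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader          # the costly call: DB query, HTTP request, ...
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, fetched_at)

    def get(self, key):
        entry = self.store.get(key)
        now = time.time()
        if entry and now - entry[1] < self.ttl:
            return entry[0]           # fresh hit: costly call avoided
        try:
            value = self.loader(key)  # miss or stale: refresh from origin
        except Exception:
            if entry:
                return entry[0]       # origin down: serve stale data instead
            raise                     # nothing cached, nothing to fall back to
        self.store[key] = (value, now)
        return value
```

Once warmed, the cache keeps answering even while the loader is failing, which is exactly the availability property the talk highlights.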
Application performance metrics are a top priority for developers and engineering teams, who have to ensure their applications are running properly at all times and handling high fluctuations in demand and scale, all while keeping in mind the rising and changing cloud costs that come with the territory.
In this session, Ezequiel will go over the internals of profiling in production and explain how this practice provides teams with deeper visibility into their workloads at scale, enabling them to optimize performance. He'll then go over a real-life use case of how profiling our own workloads, which handle millions of events per second, reduced our CPU utilization from 80% to 15%.
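As a small illustration of profiling a hot code path (using Python's deterministic cProfile here; production continuous profilers typically rely on low-overhead sampling instead, and the function names below are hypothetical):

```python
import cProfile
import io
import pstats

def busy_sum(n):
    # Deliberately unoptimized hot path so it shows up in the profile.
    total = 0
    for i in range(n):
        total += i * i
    return total

def profile_call(fn, *args):
    """Run fn under cProfile and return (result, text report of the top
    functions by cumulative time)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(fn, *args)
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()
```

Reading the report tells you where CPU time actually goes, which is the kind of visibility that makes optimizations like the 80% → 15% reduction possible to find.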
You’ve heard of Serverless but you really aren’t sure what it is about. Isn’t serverless just another word for cloud computing? Isn’t it just “Other People’s Computers”? Or is it the most efficient way to develop applications, letting the developer focus on their own priorities instead of anything to do with the administration of a server? Cloud providers would have you believe it means letting them take care of the platform side. But the idea of Serverless extends beyond the platform to encompass everything from microservices to databases, from development to operation, from storage capacity to the network. This talk is geared towards those curious about this new Serverless technology and what opportunities arise by embracing the latest movement.
In the demo:
* We will see how to develop an Angular SPA, host it on Azure Static Web Apps, and use the integrated API (Azure Functions) to develop our translator and authentication modules.
* We will see how to use workflows built with Azure Logic Apps to trigger an invite email once a user logs in to our application.
* Finally, we will also cover serverless Cosmos DB, which is used as our persistence layer.
The new LAMP (Linux, Apache, MySQL, PHP) is a collection of modern, developer-friendly APIs. The first generation of enterprise APIs was designed to expose slow-moving legacy apps. Modern APIs must move at the pace and scale of microservices. This offers a huge opportunity to modernize internal systems to be API-first and developer friendly. In this session the speaker will consider the relevance of internal vs. external APIs for refactoring legacy apps. Attendees will learn to build a catalog of internal APIs to use as building blocks when developing new apps, and discover how to navigate the noisy market of API offerings to find the best-fit solution.
You know Datalogics for our Adobe-powered PDF SDKs and command-line applications, but we've brought that same dynamic document technology to the development space where you need it most: the Cloud. In this session, we'll share how our tried-and-true solutions are supporting web application and service development like never before. What can Datalogics do for your Cloud development projects, ideas, and goals? Join us and find out!
We went from a single monolith to a set of microservices that are small, lightweight, and easy to implement. Microservices enable reusability and make it easier to change and scale apps on demand, but they also introduce new problems. How do microservices interact with each other toward a common goal? How do you figure out what went wrong when a business process composed of several microservices fails? Should there be a central orchestrator controlling all interactions between services, or should each service work independently, in a loosely coupled way, and only interact through shared events? In this talk, we’ll explore the Choreography vs Orchestration question and see demos of some of the tools that can help.
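A minimal sketch of the two styles under discussion, using a hypothetical order flow and an in-memory event bus (real systems would use a message broker for choreography or a workflow engine for orchestration; all names here are illustrative):

```python
from collections import defaultdict

class EventBus:
    """Tiny in-memory pub/sub bus, for illustration only."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, event, handler):
        self.handlers[event].append(handler)
    def publish(self, event, payload):
        for handler in self.handlers[event]:
            handler(payload)

# Choreography: each service reacts to events; no central controller,
# so the overall flow emerges from the subscriptions.
def run_choreographed_order(bus, log):
    bus.subscribe("order_placed", lambda o: (log.append("payment"),
                                             bus.publish("payment_done", o)))
    bus.subscribe("payment_done", lambda o: (log.append("shipping"),
                                             bus.publish("order_shipped", o)))
    bus.publish("order_placed", {"id": 1})

# Orchestration: a single coordinator invokes each service in turn,
# so the flow is explicit in one place and easier to reason about on failure.
def run_orchestrated_order(charge, ship, log):
    order = {"id": 1}
    charge(order)
    log.append("payment")
    ship(order)
    log.append("shipping")
```

Both variants produce the same business outcome; the difference is where the knowledge of the flow lives, which is the heart of the Choreography vs Orchestration question.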
Azure Functions are the serverless offering from Microsoft on Azure, enabling the fulfillment of many use cases without the need to worry about servers. By responding to events within the Azure platform, Functions can address a wide variety of use cases and situations. Perhaps their most important role is as the "glue" for event-driven architectures, mainly through supported bindings.
In this talk, we will walk through the most commonly used bindings and illustrate ways larger systems can be constructed by gluing Azure service offerings together using Functions.
Both Box and Split, like many other companies, are working to split their monoliths into microservices. We didn't want to just end up with a distributed monolith (i.e. lots of services that still had a very high level of interdependency), so this required some specific thinking. Additionally, we wanted to make sure we didn't have the overhead of hundreds of services while also not ending up with several mini-monoliths. In order to think about how to design our new services, we approached the problem using domain-driven design and layered architecture.
DDD is an approach to developing software for complex needs by deeply connecting the implementation to an evolving model of the core business concepts. It's an approach that was first coined in 2004 but is still very applicable today. It emphasizes problem-solving, cross-functional collaboration, and simplicity.
Meanwhile, layered architecture, while a fairly common approach, does not have the same common language or formalized concepts that domain driven design has. We used layered architecture as a way to think about how we separate our front end services, our core logic, and our infrastructure services.
Together, these two approaches helped us think through how and where to divide our services. In this talk, I will go into much more depth about what each of these two approaches are, as well as how we applied each to our problem space at Split.
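As a toy illustration of the layered split described above, here is a domain model, core logic that depends only on an abstract port, and a swappable infrastructure adapter. The feature-flag domain and every name below are hypothetical, not Split's actual code:

```python
from dataclasses import dataclass
from typing import Protocol

# Domain layer: core business concepts, with no infrastructure dependencies.
@dataclass
class FeatureFlag:
    name: str
    enabled: bool

class FlagRepository(Protocol):
    """Port the core logic depends on; the infrastructure layer implements it."""
    def get(self, name: str) -> FeatureFlag: ...

# Core logic layer: speaks only in domain terms, testable without any backend.
def is_enabled(repo: FlagRepository, name: str) -> bool:
    return repo.get(name).enabled

# Infrastructure layer: a concrete adapter (in-memory here; a real one
# might wrap a database or a remote service behind the same interface).
class InMemoryFlagRepository:
    def __init__(self, flags):
        self.flags = {f.name: f for f in flags}
    def get(self, name: str) -> FeatureFlag:
        return self.flags[name]
```

Because the core logic only sees the `FlagRepository` port, the infrastructure implementation can change without touching the domain, which is the separation both DDD and layered architecture are driving at.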
Wednesday, September 15, 2021
Many modern video games are constantly evolving post-release. New maps, game modes, and game balancing adjustments are rolled out, often on a weekly basis. This continuous iteration to improve player engagement and satisfaction requires data-driven decision making based on events and telemetry captured during gameplay, and from community forums and discussions.
In this session you will learn how OpenShift Streams for Apache Kafka and Kafka Streams can be used to analyze real-time events and telemetry reported by a game server, using a practical example that encourages audience participation. Specifically you’ll learn how to:
* Provision Kafka clusters on OpenShift Streams for Apache Kafka.
* Develop a Java application that uses Kafka Streams and Quarkus to process event data.
* Deploy the application locally or on OpenShift, and connect it to your OpenShift Streams for Apache Kafka cluster.
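The session's application is written in Java with Kafka Streams; purely to illustrate the kind of per-key aggregation such a topology performs on game telemetry, here is a plain-Python sketch with a hypothetical event shape (the real stream would be consumed from Kafka, not a list):

```python
from collections import defaultdict

def aggregate_kills_by_player(events):
    """Count "kill" telemetry events per player — analogous to a
    groupByKey().count() aggregation in a Kafka Streams topology.

    `events` uses an assumed shape: dicts with "type" and "player" keys.
    """
    counts = defaultdict(int)
    for event in events:
        if event["type"] == "kill":
            counts[event["player"]] += 1
    return dict(counts)
```

In the real pipeline this running count would be maintained continuously as events arrive, feeding the data-driven balancing decisions described above.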
Distributed systems, microservices, containers/schedulers, continuous delivery … we’ve been through one paradigm shift after another when it comes to architecture, but when it comes to observability we’re still using crufty old logging and metrics and dashboards that haven't been innovative since the LAMP stack era. And guess what? These tools completely fall apart past a certain level of complexity. Let’s dig into some of the deep technical reasons why this is happening and talk about some newer approaches to debugging complex systems when every single request into a system must be identifiable and aggregatable (e.g. honeycomb, distributed tracing). Why are events better than metrics? What is cardinality and why does it matter? And what is the difference between monitoring and observability, anyhow? Come find out.
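A tiny sketch of the events-vs-metrics distinction raised above: a metric pre-aggregates along dimensions chosen in advance, while a wide event keeps the full request context, so high-cardinality fields (user IDs, request IDs) can still be grouped on after the fact. All names here are illustrative:

```python
from collections import Counter

# Pre-aggregated metric: one counter per (endpoint, status) pair, chosen
# up front. Questions about dimensions you didn't pre-declare — "which
# user saw the errors?" — can no longer be answered from this data.
metric = Counter()

def record_metric(endpoint, status):
    metric[(endpoint, status)] += 1

# Wide event: keep the whole request context as one record per request.
events = []

def record_event(**fields):
    events.append(fields)

def group_events(field):
    """Ad-hoc aggregation over any field, including high-cardinality ones."""
    return Counter(e[field] for e in events if field in e)
```

With events, `group_events("user_id")` is a question you can ask after the incident starts; with the metric alone, that dimension was discarded at write time.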
As engineers, once we start having more than one (micro)service or product in our architecture, we think about sharing code, functionality and having seamless user experiences between systems; that’s the start of a platform! But we have so many decisions to make:
* What features are part of the platform, and when?
* Do I go lightweight with drop-in libraries that are quick to adopt or heavyweight like frameworks for a better developer (and user) experience?
* How do I make my platform extensible and maintainable?
* How can I address the classic hockey-stick adoption pattern on my services?
* How does Conway's Law apply to the platform?
In this talk we describe a number of patterns (and anti-patterns) for designing a platform that we’ve seen and implemented, both across the industry and as part of the platform powering Atlassian Cloud.