DeveloperWeek PRO Stage B
Wednesday, February 17, 2021
If you have been watching the development of the cloud native technology stack ecosystem, you’re probably getting the gist of why people are migrating to it.
Cloud native technologies promise an unparalleled productivity and reliability jump for application development and operations. But with a multitude of options for cloud native newcomers, it can be challenging to know where to start.
Join this session to see how you can set up a cloud native development and operations CI/CD environment and pipelines with Kubernetes.
We will provide template GitHub projects and walk you through:
1. How to establish Kubernetes as your infrastructure for a portable, truly cloud native environment with optimal productivity and cost.
2. Using Kublr and an infrastructure-and-build-environment-as-code approach for fast, reliable, and inexpensive setup of a production-ready DevOps environment, bringing together a combination of technologies: Kubernetes; AWS Mixed Instance Policies, Spot Instances, and availability zones; AWS EFS; Nexus; and Jenkins.
3. Using Jenkins and the Jenkins Kubernetes integration for a pipeline-as-code approach.
4. Best practices based on open source tools such as Nexus and Jenkins.
5. How to tackle build process dilemmas and difficulties, including managing dependencies, hermetic builds, and build scripts.
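The pipeline-as-code idea in points 2 and 3 can be sketched in miniature: the pipeline definition lives in the repository alongside the code and is executed stage by stage, failing fast. This is an illustrative Python analogue with made-up stage names, not an actual Jenkinsfile (which would be written in Groovy):

```python
# Minimal sketch of "pipeline as code": the pipeline definition is data
# checked into the repo; a runner executes stages in order and fails fast,
# the way a CI server like Jenkins does. Stage names are hypothetical.

def run_pipeline(stages):
    """Run named stages in order; stop at the first failure."""
    results = []
    for name, step in stages:
        try:
            step()
            results.append((name, "SUCCESS"))
        except Exception as exc:
            results.append((name, f"FAILURE: {exc}"))
            break  # fail fast: later stages never run
    return results

stages = [
    ("checkout", lambda: None),   # e.g. clone the repo
    ("build",    lambda: None),   # e.g. compile and package
    ("test",     lambda: None),   # e.g. run the test suite
    ("publish",  lambda: None),   # e.g. push the artifact to Nexus
]

print(run_pipeline(stages))
```

In a real Jenkins setup the same principle applies: the stage list is versioned with the application, so pipeline changes are reviewed and rolled back like any other code change.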
We will see how to manage a complex microservices mesh using Istio and Anthos Service Mesh to get application observability, security, and intelligent request routing with no changes to application code.
We will show how to gain visibility into application golden signals, monitor Service Level Objectives and Error Budgets, centrally manage encryption, authentication, and authorization, and define request routing policies.
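As a concrete illustration of SLOs and error budgets, the arithmetic is simple: a 99.9% availability objective over 30 days leaves 0.1% of that window, about 43 minutes, as the budget you may "spend" on failures. A minimal sketch (the numbers are made up for the example):

```python
# Sketch: computing an error budget from an availability SLO.

def error_budget(slo: float, total: float) -> float:
    """Allowance of 'bad' units (minutes, requests, ...) for a given SLO."""
    return (1.0 - slo) * total

def budget_remaining(slo: float, total: float, bad: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget(slo, total)
    return (budget - bad) / budget

minutes_in_30_days = 30 * 24 * 60  # 43200
print(round(error_budget(0.999, minutes_in_30_days), 1))   # ~43.2 minutes
print(budget_remaining(0.999, minutes_in_30_days, bad=21.6))  # half spent
```

Dashboards that track `budget_remaining` over time are what let a team decide, objectively, whether to ship features or stop and invest in reliability.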
Do you remember that microservice that you wrote and successfully deployed in production a few weeks ago? I am afraid to tell you that it is underperforming and occasionally generating faults that are compromising the rest of the system. Could you please join our war room in an hour to help us fix this as quickly as possible?
For most developers this reads like the plot of a horror movie. There is nothing scarier for developers than fixing problems when they have no idea where to start, because their microservice is just a piece of a larger puzzle composed of many other microservices, databases, and distributed systems. Though observability is the right answer for scenarios like this, the reality is that it may not yet be in place to make a difference.
The internet has no shortage of resources that explain in detail what this buzzword called observability is about and why you need it so badly. However, very few explain how to implement it quickly enough to be useful in troubleshooting scenarios like the one above. This talk will show, in 40 minutes, how to implement observability from scratch and how to leverage logs, metrics, and traces to identify problems in microservices.
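A minimal sketch of the usual first step, emitting structured logs that share a trace ID, shows how a single request can be followed across services during an incident (the field names are illustrative, not a particular logging library's schema):

```python
import json
import time
import uuid

def log_event(service: str, message: str, trace_id: str, **fields) -> str:
    """Emit one structured (JSON) log line; the shared trace_id is what
    lets logs from different microservices be joined during troubleshooting."""
    record = {"ts": time.time(), "service": service,
              "trace_id": trace_id, "message": message, **fields}
    line = json.dumps(record)
    print(line)
    return line

# The same trace_id travels with the request through every microservice.
trace = uuid.uuid4().hex
log_event("checkout", "order received", trace, order_id=42)
log_event("payments", "charge ok", trace, order_id=42, latency_ms=87)
```

Once every service emits lines like these, "where do I start?" becomes a query for one trace ID instead of a hunt through unrelated log files.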
We have been hearing a lot about the benefits of using the reactive approach to solve concurrency problems in distributed systems. While reactive programming refers to the implementation techniques used at the coding level, at the systems deployment and runtime level we can leverage a robust yet flexible and lightweight framework such as Vert.x to deliver. In this session, we will first learn what the missions of a reactive system are, which, among many things, include handling multiple concurrent data stream flows, controlling back pressure, and managing errors in an elegant manner. The very loosely coupled nature of a reactive system also lends itself well to building microservices that communicate within its messaging infrastructure. We will also discuss the special polyglot nature of Vert.x, its event loop, and its use of the Verticle model. Live coding will accompany this session to illustrate how to program a simple use case using multiple JVM languages such as Java and Kotlin; we will then build and dockerize it to be deployed as a serverless container to a Knative cluster in the cloud in a delightful manner.
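Back pressure, one of the reactive concerns named above, can be pictured language-agnostically: a bounded buffer forces a fast producer to wait for a slow consumer instead of letting memory grow without limit. This is an illustrative Python analogue, not Vert.x code:

```python
import threading
from queue import Queue

# Illustrative back-pressure sketch (not Vert.x code): a bounded queue
# blocks the producer whenever the consumer falls behind, so throughput
# adapts to the slower side instead of exhausting memory.

buffer = Queue(maxsize=4)   # small capacity: the back-pressure boundary
consumed = []

def consumer():
    while True:
        item = buffer.get()
        if item is None:    # sentinel: producer is done
            break
        consumed.append(item)
        buffer.task_done()

t = threading.Thread(target=consumer)
t.start()
for i in range(100):
    buffer.put(i)           # blocks while 4 items are already queued
buffer.put(None)
t.join()
print(len(consumed))        # all 100 items, with bounded memory throughout
```

Reactive frameworks such as Vert.x provide the same guarantee non-blockingly, via demand signalling on streams, rather than by blocking a thread as this sketch does.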
Managing the end-to-end lifecycle of an API program requires detailed knowledge and management of several assets: from the API specs published in an API portal, to the API policies and artifacts (a lot of them) in an API gateway. And we cannot forget the backend solution that supports the API business logic, or the several environments we have to manage (from sandbox to production, for example).
This talk is about how to manage the many changes to all the artifacts associated with an API lifecycle in an automated way, using DevOps principles and tools, to ensure an efficient value stream across the full end-to-end API lifecycle.
Thursday, February 18, 2021
Containerization gave applications portability from local dev to production, but in our pursuit of service-oriented design that portability has been lost. This talk will discuss how we can build upon containerization to make complex, cloud-native applications easier for developers to create and contribute to.
GitOps uses Git as the “single source of truth” for declarative infrastructure and enables developers to manage infrastructure with the same Git-based workflows they use to manage a codebase. Having all configuration files version-controlled by Git has many advantages, but best practices for securely managing secrets with GitOps remain contested. Join us in this presentation about GitOps and secret management. Attendees will learn about the pros and cons of various approaches and why the Jenkins X project has chosen to standardize on Kubernetes External Secrets for secret management.
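The core idea behind Kubernetes External Secrets is that Git stores only references to secrets, never their values; a controller resolves those references against an external store at deploy time. A minimal sketch of that resolution step (the backend, key names, and manifest shape are made up for illustration):

```python
# Sketch of the External Secrets idea: the manifest committed to Git holds
# only a reference (a key into a secret backend); a controller materializes
# the real value at deploy time, so nothing sensitive is ever in Git.

FAKE_BACKEND = {"prod/db-password": "s3cr3t"}   # stands in for e.g. Vault/SSM

def resolve_secret(manifest: dict, backend: dict) -> dict:
    """Turn a reference-only manifest into a materialized secret."""
    key = manifest["remote_ref"]["key"]
    return {"name": manifest["name"], "value": backend[key]}

checked_into_git = {"name": "db-credentials",
                    "remote_ref": {"key": "prod/db-password"}}  # no value here
materialized = resolve_secret(checked_into_git, FAKE_BACKEND)
print(materialized["name"])
```

This keeps the GitOps property intact, since the reference manifest is still fully version-controlled and declarative, while the secret value lives only in the backend and the cluster.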
Kubernetes has transformed the way in which we manage cloud environments and build cloud-native applications, but for many developers, a higher degree of transparency within Kubernetes is needed. This session will explore how data-centric security in Kubernetes can provide technical controls for data in use by an application. Today’s approach of network-level security through the use of service mesh relies on blind trust that data is used for its intended purpose.
Instead, in this session, we’ll explore the next frontier of Kubernetes security: one in which technical controls persist throughout the pipeline to protect data in-motion and in-use. We’ll discuss low-friction methods for enabling control at the data level, including how to enable non-humans to access data in specific ways and places and how to create a strong form of identity.
In this session, Lewis Marshall from Appvia will discuss best practices for teams managing the critical 'Day-2' phase of Kubernetes deployments, and the key areas they must have visibility and coverage of: production topology, updating, monitoring, and scaling.
Day-2 is the time between the initial deployment of a cluster for development and when Kubernetes clusters are hosting a production business service. It sits between designing the deployment (Day-1) and ongoing maintenance (Day-3).
Moving from Day-1 to Day-2 isn't as simple as it might seem. It's a critical time period where you bring technology out of the development staging phase and into production. Without a solid plan to overcome the traps along the way, you won't be able to realise the potential benefits of Kubernetes; you'll struggle to scale your environments and put the entire infrastructure in danger.
Key points Lewis will cover include what best practice looks like and his own experience of deploying Kubernetes for major organisations (including in the UK's Home Office).
We discuss how the business service requirements are affected when running on Kubernetes:
Production Topology - how isolation of workloads, clusters, and cloud resources (e.g. networking) affects all other Day-2 concerns.
Upgrading - the choices a team makes around upgrading are essential to making sure there's no downtime for applications hosted within your cluster.
Monitoring - the business drivers around actual service availability and support, and how Kubernetes helps and hinders observability.
Should application developers invest the time to learn Kubernetes? Do they even need to be aware of Kubernetes within their infrastructure? It’s become an increasingly popular question that DevOps, platform engineering, and dev teams are asking.
While Kubernetes delivers robust capabilities – far more than most developers need – developers don’t really care about Kubernetes itself. What they care about is delivering their product to users. The arguments in favor of developers learning Kubernetes often revolve around the fact that it’s an incredible tool and well-liked by DevOps. For most developers, these arguments are like being told how fulfilling it is to make your own pizza from scratch, when you have a lot of work to do and would much rather simply order one. Developers appreciate Kubernetes only to the degree that it helps them deliver faster. They want to eat and not have to cook.
Things can go wrong in the kitchen as well: small changes to Kubernetes have outsized ripple effects. Even experienced developers may find that operators are reluctant to grant them cluster access. The complexity of Kubernetes makes it easy for developers to mess up in unpredictable ways. Because of this, many organizations make years of investments attempting to build a layer between their applications and Kubernetes, in order to abstract Kubernetes away from developers and allow them to simply push code.
Kubernetes needs to transform into a user-friendly application management framework, in the same way Docker turned complex tools such as cgroups into user-friendly products. This session’s audience will learn strategies and tactics for transforming Kubernetes into that user-friendly solution, enabling developers to focus on application code and DevOps and platform engineers to keep control of their clusters and infrastructure.
Takeaways from this presentation will include:
1. How to stop prioritizing Kubernetes, but instead focus more on the applications and the teams that develop and control them.
2. How to stop worrying about ConfigMaps, ingress rules, PVs, PVCs, and other complications in your day-to-day activities.
3. How to enable DevOps and platform engineering teams to move Kubernetes across clusters or even providers without impacting how applications are deployed, operated, and controlled.
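One common shape for the abstraction layer described above is a small, app-centric spec that the platform team expands into full Kubernetes objects, keeping ConfigMaps, ingress rules, and PVCs out of developers' sight. A hypothetical sketch (the spec format and defaults are invented for illustration):

```python
# Hypothetical sketch of an app-level abstraction: developers describe only
# the application; a platform layer expands it into Kubernetes manifests.

def expand(app: dict) -> dict:
    """Expand a minimal app spec into a Deployment-shaped manifest."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app["name"]},
        "spec": {
            "replicas": app.get("replicas", 1),   # platform-chosen default
            "template": {"spec": {"containers": [
                {"name": app["name"], "image": app["image"]}
            ]}},
        },
    }

# What a developer writes:
app_spec = {"name": "orders", "image": "registry.example.com/orders:1.4"}
# What the platform deploys:
manifest = expand(app_spec)
print(manifest["kind"], manifest["spec"]["replicas"])
```

Because only the `expand` step knows about Kubernetes, the platform team can change clusters or providers behind it without developers ever editing their app specs.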
Kubernetes is much more than a runtime platform for Docker containers. Through its API not only can you create custom clients, but you can also extend Kubernetes. Those custom Controllers are called Operators and work with application-specific custom resource definitions.
Not only can you write those Kubernetes operators in Go, but you can also do this in Java. Within this talk, you will be guided through the setup and your first explorations of the Kubernetes API within a plain Java program. We explore the concepts of resource listeners, programmatic creation of deployments and services, and how this can be used for your custom requirements.
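The heart of any operator, whatever the language, is a reconcile loop that drives actual cluster state toward the desired state declared in a custom resource. The sketch below shows that loop in language-agnostic form (an illustrative Python analogue, not the Java Kubernetes client API):

```python
# Illustrative reconcile loop (the operator pattern in miniature, not the
# Java Kubernetes client API): compare desired state from a custom resource
# with actual state, and emit the actions that close the gap.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"web": {"replicas": 3}, "cache": {"replicas": 1}}
actual  = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}
print(reconcile(desired, actual))
```

A real operator runs this comparison every time a resource listener reports a change, then performs the resulting create/update/delete calls through the Kubernetes API.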
As we get deeper into Kubernetes yaml files, we see a lot of duplication. Can we move to a higher level that eliminates this duplication? Let's look at Helm, a tool both for templating k8s yaml files and for installing complex infrastructure dependencies as one package. With Helm 3, we now have deeper integration and more security when working with Kubernetes. Join us on this path to a simpler, more repeatable, and more discoverable yaml experience.
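The deduplication idea behind Helm can be shown in miniature: one parameterized manifest, with each environment supplying only the values that differ. Python's `string.Template` stands in here for Helm's Go template language, purely for illustration:

```python
from string import Template

# Miniature of the Helm idea (string.Template standing in for Go templates):
# a single parameterized manifest replaces near-identical copies per
# environment; each environment supplies only its differing values.

manifest = Template("""\
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
""")

environments = {
    "staging":    {"name": "web", "replicas": 1},
    "production": {"name": "web", "replicas": 5},
}

rendered = {env: manifest.substitute(values)
            for env, values in environments.items()}
print(rendered["production"])
```

Helm adds on top of this templating the packaging, versioning, and release management that make complex dependency trees installable as one unit.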
Service Fabric is the foundational technology introduced by Microsoft Azure to power its large-scale Azure services.
In this session, you’ll get an overview of containers such as Docker, followed by an overview of Service Fabric and an explanation of how it differs from Kubernetes as a way to orchestrate microservices. You’ll learn how to develop a microservices application and how to deploy those services to Service Fabric clusters and the new serverless Service Fabric Mesh service. We’ll dive into the platform and programming model advantages, including stateful services and actors for low-latency data processing, and more.
You will learn:
Overview of containers
Overview of Service Fabric
Difference between Kubernetes and Service Fabric
Setting up an environment to start developing an application using microservices with Service Fabric
Kubernetes brings new ideas of how to organize the caching layer for your applications. You can still use the old-but-good client-server topology, but now there is much more than that. This session will start with the known distributed caching topologies: embedded, client-server, and cloud. Then, I'll present Kubernetes-only caching strategies, including:
- Sidecar Caching
- Reverse Proxy Caching with Nginx
- Reverse Proxy Sidecar Caching with Hazelcast
- Envoy-level caching with Service Mesh
In this session you'll see:
- A walk-through of all caching topologies you can use in Kubernetes
- Pros and Cons of each solution
- The future of caching in container-based environments
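The sidecar topology in particular is easy to picture: the application talks to a cache co-located in its pod, which falls through to the origin on a miss. A minimal in-process stand-in for that behaviour (in the real topology the cache is a separate container reached over localhost):

```python
# Minimal stand-in for sidecar caching: the app asks the local cache first;
# on a miss the cache fetches from the origin (e.g. a backend service) and
# remembers the result for subsequent requests.

origin_calls = 0

def fetch_from_origin(key: str) -> str:
    global origin_calls
    origin_calls += 1          # count how often we actually hit the origin
    return f"value-for-{key}"

class SidecarCache:
    def __init__(self):
        self._store = {}

    def get(self, key: str) -> str:
        if key not in self._store:          # miss: go to origin
            self._store[key] = fetch_from_origin(key)
        return self._store[key]             # hit: served locally

cache = SidecarCache()
cache.get("user:7")
cache.get("user:7")            # second call is a local cache hit
print(origin_calls)            # origin was touched only once
```

The other topologies in the list vary mainly in where this get-or-fetch logic lives: in the application (embedded), in a reverse proxy in front of it, or in the service mesh's Envoy layer.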
So You Think You Know the Behavior of Your Containers? Would You Stake Your Job on It?
You’ve developed a fabulous application in a container/Kubernetes Continuous Integration (CI) pipeline. The application works like it should, and the static scans look secure, but is it actually operating securely? Are any third-party components you’ve integrated doing something they shouldn’t be doing? How do you know?
To be confident about the behavior of your app, active inspection of running binaries within a container, using live telemetry, is key. Pre-production observability enables this by filling the gaps that static code analysis (SAST) and dynamic external inspection (DAST) don’t cover.
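The kind of live telemetry described here can be approximated in miniature by wrapping calls so every real invocation is observed at runtime rather than inferred from static analysis. A toy stand-in (not DeepFactor's actual instrumentation):

```python
import time
from functools import wraps

# Toy stand-in for runtime telemetry (not DeepFactor's instrumentation):
# wrap a function so each real invocation records its name, outcome, and
# duration -- behavior observed as it happens, not predicted from source.

TELEMETRY = []

def observed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            TELEMETRY.append({"fn": fn.__name__, "status": status,
                              "duration_s": time.perf_counter() - start})
    return wrapper

@observed
def handle_request(path: str) -> int:
    return 200 if path.startswith("/") else 404

handle_request("/health")
print(TELEMETRY[-1]["fn"], TELEMETRY[-1]["status"])
```

Real pre-production observability tools do this at the syscall and library-call level inside the container, which is what lets them catch a third-party component misbehaving even when your own code is clean.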
During this technical session, you’ll see pre-production observability in action and the benefits the solution delivers to developers and their teams. Mike Larkin, CTO at DeepFactor, and John Day, Customer Success Engineer at DeepFactor, will discuss a straightforward method to extract this metric data from any container with minimal overhead. This information can then be processed to surface issues that may be affecting the behavior of your container without your knowledge, be it security, performance, or operational intent. You’ll leave this session armed with the knowledge to immediately leverage pre-production observability to consistently deploy apps with confidence.
Friday, February 19, 2021
PRO SESSION: The Potential Pitfalls of Deploying Real-Time APIs in an Event-Driven Microservices Architecture
Today, software must be efficient, adaptable, and easy to use. Microservices deliver simple solutions to complex problems and are in vogue within the developer community. So is event-driven architecture: a system of loosely coupled microservices that exchange information with each other through the production and consumption of events.
This type of architecture is particularly well suited to event streams, and through in-stream processing businesses are able to make fast decisions, literally in milliseconds. Event stream processing enables applications to respond to changing business conditions as they happen and make decisions based on all available current and historical data.
By implementing event-driven architectures, it is possible to build a resilient microservice-based architecture that is truly decoupled, giving increased agility and flexibility to the development lifecycle. Many developers now consider event-driven architecture as best practice for microservices implementations.
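In-stream processing, deciding on each event as it arrives rather than waiting for a batch job, can be sketched with a rolling aggregate (the event shape, window, and threshold are made up for the example):

```python
# Sketch of in-stream decision making: every event updates a running
# aggregate and can trigger a decision immediately -- milliseconds after
# arrival -- instead of waiting for a periodic batch computation.

def detect_spikes(events, window=3, threshold=100):
    """Yield an alert whenever the rolling sum of the last `window`
    values exceeds `threshold`; a decision made per event, in stream."""
    recent = []
    for value in events:
        recent.append(value)
        if len(recent) > window:
            recent.pop(0)
        if sum(recent) > threshold:
            yield ("spike", value, sum(recent))

stream = [10, 20, 30, 90, 5, 2]
print(list(detect_spikes(stream)))
```

Stream processors such as Kafka Streams or Flink apply the same per-event pattern, with the added machinery for distribution, fault tolerance, and replayable history.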
This trend in development approach coincides with an increasing desire from business for real-time data: producers and consumers demand faster experiences, which has led to the emergence of real-time APIs. But problems exist.
Real-time data management and delivery introduce a unique set of development challenges that must be understood and effectively addressed in order to reliably deliver real-time data and easily scale.
When you expose event streams to millions of consumers, the assumptions made for traditional APIs no longer hold true. Developing real-time APIs means rethinking latency, scalability, security, and publication from the ground up. This talk will discuss the hard lessons I have learned while helping companies successfully deploy real-time APIs in an event-driven microservices environment. I will touch upon:
• The realities of the Internet, and how to address the challenges at the transport, protocol, and application layers
• What caching looks like in a push-oriented world, and how it drives significant efficiencies
• How to prevent your data model from impacting security and latency at scale over the Internet
• Why basic Pub/Sub is not sufficient for today's event-driven applications.
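One concrete example of why basic pub/sub falls short: a subscriber that connects late has missed everything already published and starts with no state. Retaining the last value per topic is one of the patterns that fills this gap, sketched below (topic names and values are illustrative):

```python
# Sketch: basic pub/sub loses state for late joiners. Keeping the last
# published value per topic lets a newly connected subscriber start from
# known-good state instead of waiting for the next event.

class LastValueBroker:
    def __init__(self):
        self._subs = {}   # topic -> list of subscriber callbacks
        self._last = {}   # topic -> most recently published message

    def publish(self, topic, message):
        self._last[topic] = message
        for cb in self._subs.get(topic, []):
            cb(message)

    def subscribe(self, topic, cb):
        self._subs.setdefault(topic, []).append(cb)
        if topic in self._last:          # replay current state to late joiner
            cb(self._last[topic])

broker = LastValueBroker()
broker.publish("price/ACME", 101)        # published before anyone listens
seen = []
broker.subscribe("price/ACME", seen.append)
broker.publish("price/ACME", 102)
print(seen)                              # late joiner saw 101, then 102
```

At internet scale the same idea appears as last-value caching in streaming brokers and as compacted topics in Kafka, and it is one of the efficiencies the caching bullet above refers to.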
No server configuration? No problem! With serverless & JAMStack becoming more and more popular, it’s like static sites never went out of fashion. Though, unlike the 90s, we don’t have to sacrifice style for performance. Let’s recreate a Japanese style photo booth with React & WebAssembly, and get some insight into how our users are interacting with our site so we know how to make improvements on future versions.
API integrations, and the reasons a business would want to do them, are plentiful. Many are driven by the desire for higher user (employee and customer) productivity, greater employee engagement, or reduced busywork and task switching, in order to enable quicker, smoother cross-application workflows. As APIs are built, multiple different types of vendors become involved, raising some critical business considerations. Having built dozens of these integrations, we've come to understand the critical business considerations to keep in mind in terms of the types of business restrictions and terms of service to put around APIs.
Questions this session will help answer:
• When you think about building APIs from multiple, different vendors how will that impact your business model?
• How do you set up a win-win alliance between your business and other vendors involved so that everybody's business model wins and your customer receives value?
• What considerations should you keep in mind when building packaging and pricing models?
• What is it going to cost to drive a transaction when multiple vendors, with different pricing models are involved?
During the past year I’ve implemented, or witnessed implementations of, several key patterns of event-driven messaging design on top of Kafka. These have helped create a robust distributed microservices system at Wix that can easily handle increasing traffic and storage needs across many different use cases.
In this talk I will share these patterns with you, including:
* Consume and Project (data decoupling)
* End-to-end events (Kafka + WebSockets)
* In-memory KV stores (consume and query with zero latency)
* Event transactions (exactly-once delivery)
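Two of these patterns, Consume and Project together with exactly-once handling, can be sketched jointly: events are folded into a read-optimized projection, and duplicates are skipped by event ID so redelivery is harmless. The topic and field names below are illustrative, not Wix's actual schema:

```python
# Sketch: consume events and fold them into a read-optimized projection
# (Consume and Project), skipping duplicates by event id so at-least-once
# redelivery behaves as if delivery were exactly once.

def project(events):
    projection, seen = {}, set()
    for event in events:
        if event["id"] in seen:          # duplicate delivery: ignore
            continue
        seen.add(event["id"])
        projection[event["user"]] = event["status"]
    return projection

events = [
    {"id": 1, "user": "ada", "status": "registered"},
    {"id": 2, "user": "ada", "status": "active"},
    {"id": 2, "user": "ada", "status": "active"},   # redelivered duplicate
    {"id": 3, "user": "lin", "status": "registered"},
]
print(project(events))
```

The projection is what query-serving services read, decoupling them from the write-side event log; the dedup set is the in-memory analogue of the idempotency state a real consumer would persist.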
As software becomes more pervasive, APIs are fast becoming the building blocks of business enterprises. And with that, the role of the API developer is gaining more responsibility for driving growth at companies ranging from small to large. In this session, we will discuss how to think about building tools for the API developer that lead to quicker time to market value for both the business and the consumers.
This talk will focus on how Rockset made changes to RocksDB, a popular open-source storage engine for a persistent key-value store, to run in a serverless way. We'll share how we introduced RocksDB-Cloud to auto-scale storage and remote compactions in RocksDB to independently scale ingest compute and query compute. These architectural decisions enable Rockset to meet the low data and query latency requirements of real-time analytics. Finally, we'll conclude by showing you how Rockset's serverless technology enables real-time analytics without you needing to provision capacity, maintain servers or manage clusters.
The DBaaS style of deployment has taken the world by storm. I want to share my insights into the advantages, the gotchas, and what we should be looking for in the future from DBaaS providers to continue providing an excellent developer self-service experience that unlocks agility in the modern application development world.
In this session, hear from Jordan Schuetz, Developer Advocate at MuleSoft, on how to build, secure, and deploy your first API using MuleSoft's Anypoint Platform. This talk will cover enterprise-level topics on API development, security, and best practices when it comes to deploying your first mule application. Also, learn how to create integrations with Salesforce, databases, Twilio, and more!
Parking and mobility payments is an industry undergoing a renaissance, accelerated by the sudden need to reduce shared surfaces such as parking meters and physical currency in the wake of COVID-19. Citizens now need to make parking payments through digital sources they already have on hand like mapping services, connected vehicles, or mobile applications. Cities and third-party partners are racing to leverage operating systems via open-APIs to offer better parking payment options.
But how does a business integrate its API at scale across thousands of clients who all have unique needs and requirements? Luke Segars breaks down his technical learnings and offers best-practices for developers tackling similar challenges across any industry.