Tuesday, September 14, 2021
Developers have always expected databases to work out-of-the-box, but historically the opposite has been true.
With the rise of Kubernetes StatefulSets and CRDs, we started thinking about running databases on it. But why should I do that in the first place? How hard is it? What are the challenges? Is it production-ready yet? All of these questions will be answered during a live demo where we will deploy a database, deploy an operator, fail nodes, and scale up and down with nearly no manual intervention.
When you move to Kubernetes and want to enable a GitOps/DevOps/AppOps workflow, your inner development loop becomes more complicated. For example, your inner loop gains a few more steps, such as building a container image from the application and its dependencies, running a quick sanity test against that image, pushing it to an external registry, and pulling the image into the remote Kubernetes cluster. You might also need to externalize some configuration using Kubernetes features such as ConfigMap and Secret. Of course, you also need to figure out how to write the YAML files for your Kubernetes manifests and resources. In the end, it can take two to three times longer than traditional inner loop development without Kubernetes.
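As a concrete illustration of the configuration-externalization step mentioned above, a minimal ConfigMap might look like the following sketch (the resource name and keys are hypothetical):

```yaml
# Hypothetical ConfigMap externalizing app settings out of the container image
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-app-config
data:
  GREETING_MESSAGE: "Hello from Kubernetes"
  LOG_LEVEL: "INFO"
```

The application then consumes these values as environment variables or a mounted file, so the image itself stays environment-agnostic.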
This demo-driven talk showcases how developers can keep the same experience while accelerating their inner loop development from a local environment to a remote container environment, Kubernetes, using Quarkus. IT leaders will also learn how they can help their development teams make inner loop development quicker.
Why are enterprise organizations moving from on-premises solutions to completely cloud-native ones? What does that mean for improving, scaling, and securing their CI/CD pipelines? And what exactly is continuous packaging, anyway?
Join Cloudsmith’s Dan McKinney in this session as he answers all of these questions, helping attendees understand the true difference between cloud-hosted and cloud-native, how to get started with migrating to a cloud-native solution, and the true benefits of being entirely within the cloud.
Did you know that Oracle supports running your Oracle Database as a Docker Container?
In this session, you will get to see how easy it is following a step-by-step tutorial. You will also get to see the different deployment choices that are available to you so you can choose the model that works best for your use case.
The session will include live demonstrations.
Your company’s “digital transformation” will be driven by new application designs and methods, new technology stacks, and new processes. To master it, and to deliver next-generation services through it, massively complex sets of signals and data need to be leveraged, processed, and acted on. Developers need integrated data and insights through that noise, while being able to leverage their tools of choice. All of this must be managed, despite massive rates of change and innovation. The challenge is determining who or what is going to do that work, where the work gets done, and how the business benefits from it. This session focuses on methods to overcome the complexity of digital transformation in the cloud and drive operational maturity despite constant change across applications, digital services, and products.
You’ve heard of Serverless but you really aren’t sure what it is about. Isn’t serverless just another word for cloud computing? Isn’t it just “Other People’s Computers”? Or is it the most efficient way to develop applications, letting the developer focus on their own priorities instead of anything to do with the administration of a server? Cloud providers would have you believe it means letting them take care of the platform side. But the idea of Serverless extends beyond the platform to encompass everything from microservices to databases, from development to operation, from storage capacity to the network. This talk is geared towards those curious about this new Serverless technology and what opportunities arise by embracing the latest movement.
After the rush to take advantage of cloud native application development and tools like Kubernetes, DevOps teams now have a lot more to think about. In many cases, DevOps teams adopted early continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins and are now attempting to apply them in cloud native scenarios where they are no longer the fit they once were. Cloud native pulls developers down into infrastructure-related operations, and current CD tools cannot bring back the application-level context developers had before moving to a microservices architecture, adding more complexity both to the development workflow and to post-deployment application observability. DevOps teams also face new challenges in application policy management, especially in closely regulated industries, as they adapt their processes to establish trust and security in cloud native environments. At the same time, DevOps needs to reevaluate approaches to automation and strategies for eliminating human error, as cloud and Kubernetes deployments have ushered in a return of very manual and tedious effort.
This session digs into details around three cloud native 2.0 strategies that DevOps teams ought to consider sooner rather than later to stay on top of a fast-changing ecosystem: 1) how to build CI/CD pipelines with greater interoperability and composability, 2) how and why to harness application policy management, and 3) how to balance automation and audits.
The new LAMP (Linux, Apache, MySQL, PHP) is a collection of modern, developer-friendly APIs. The first generation of enterprise APIs was designed to expose slow-moving legacy apps. Modern APIs must move at the pace and scale of microservices. This offers a huge opportunity to modernize internal systems to be API-first and developer-friendly. In this session the speaker will consider the relevance of internal vs. external APIs for refactoring legacy apps. Attendees will learn to build a catalog of internal APIs to use as building blocks when developing new apps and discover how to navigate the noisy market of API offerings to find the best-fit solution.
Join us as Pau Labarta Bajo, a data scientist and ML engineer with over eight years of experience, shows us how to break multi-million-dollar computer vision models using adversarial examples.
Computer vision models based on neural networks have become so good in the last 10 years that they now serve as the “eyes” behind many mission-critical systems, such as self-driving cars, automatic video surveillance, and face recognition systems in airports. What you probably do not know is that there are easy methods to fool them, forcing them to produce wrong predictions. These methods are theoretically simple and computationally feasible, and they open the door to potentially critical security issues.
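One well-known method of this kind is the Fast Gradient Sign Method (FGSM), which perturbs the input a small, bounded amount in the direction that most increases the model's loss. The sketch below is a toy illustration on a linear classifier rather than a real neural network; the weights, inputs, and epsilon are all illustrative:

```python
import numpy as np

# Toy linear "classifier" standing in for a vision model: score > 0 -> class 1.
# The weight vector is illustrative; real attacks use the network's gradient.
w = np.array([0.5, -1.0, 2.0, 0.3])

def predict(x):
    return int(w @ x > 0)

def fgsm(x, eps):
    # FGSM: take a step of at most eps per input dimension in the direction
    # that pushes the score toward the opposite class.
    grad = w if predict(x) == 1 else -w  # d(score)/dx is just w for a linear model
    return x - eps * np.sign(grad)

x = np.array([1.0, 0.2, 0.8, 0.1])  # original input, classified as class 1
x_adv = fgsm(x, eps=0.7)            # small perturbation flips the prediction
```

The perturbation is bounded by eps in every dimension, which is why adversarial images can look unchanged to a human while flipping the model's output.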
The CNCF project OpenTelemetry is increasingly becoming the standard for getting reliable and consistent application and machine data to your monitoring and observability tools. Many organizations are realizing the power of decoupling the collection of their metric, log, trace, and span data from their monitoring stack, giving them more freedom and capability to improve the observability of their applications, and allowing them to be more consistent and confident in supporting those applications. In this session, learn about:
1.) What is OpenTelemetry
2.) What is the architecture of the OpenTelemetry (OTel) Collector
3.) How do you build a strategy around OpenTelemetry
4.) How do you get started with OTel
Standardizing on OpenTelemetry makes your application more observable, and helps your organization implement better observability and monitoring practices.
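As a concrete starting point, a minimal OpenTelemetry Collector configuration wires a receiver, a processor, and an exporter into a pipeline. This is a sketch; the choice of a logging exporter is illustrative, and production setups would export to a real backend:

```yaml
# Minimal OTel Collector config: receive OTLP over gRPC, batch, log to stdout
receivers:
  otlp:
    protocols:
      grpc:
processors:
  batch:
exporters:
  logging:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```

Because applications only talk OTLP to the Collector, swapping the exporter later changes the backend without touching any instrumented code.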
For nearly thirteen years, Amazon Web Services has offered the ability for .NET developers to host their workloads in the cloud, and over that time has extended that support to many of AWS’ services. In this session, we will explore the broad range of support AWS has to offer .NET developers. From supporting your favorite development environment, to the most cost-effective and high performance hosting environment, to the operational tools you use for deployment and management, this session explores how you can leverage your skills in your AWS environment.
Wednesday, September 15, 2021
There are many, many resources for DevOps engineers: learning paths, guides and tutorials for using tools such as Terraform, Packer and Ansible to save time in provisioning and configuring reliable, predictable systems. This session looks at the other side of the equation: creating the plugins, modules and providers that abstract away upstream APIs for use by DevOps tools.
Director of Developer Evangelism Pat Patterson will explain how Citrix implemented DevOps tooling for its App Delivery & Security products, and how the company is working with its community to create tooling for its Virtual Apps & Desktops Service. Pat will explain the different approaches to creating tooling, trade-offs between them, and the lessons that Citrix has learned along the way. This session will NOT be death-by-PowerPoint! Come prepared for semi-colons, curly braces and monospaced text!
Understanding what is happening in a solution built from multiple components can be challenging. While the solution space for monitoring and application log management is mature, organizations tend to end up with multiple overlapping tools in this space to meet different team needs. These tools also tend to aggregate first and then act, rather than considering things in a more granular way.
Fluentd presents us with a means to simplify the monitoring landscape and address the challenges of the hyper-distribution that comes with microservice solutions, allowing the different tools that need log data to help in their different ways.
In this session, we’ll explore the challenges of modern log management and how it can make hybrid and multi-cloud solutions easier to monitor.
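To make this concrete, a minimal Fluentd configuration can tail an application log and route events onward. The file paths and tag below are hypothetical, and the stdout output is only a stand-in for a real destination:

```
# Hypothetical Fluentd config: tail a JSON app log and print events to stdout
<source>
  @type tail
  path /var/log/app/app.log
  pos_file /var/log/fluentd/app.log.pos
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

<match app.**>
  @type stdout
</match>
```

Replacing the match block with a different output plugin is how the same collected stream can feed multiple tools without re-instrumenting the application.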
Today’s pace of change is relentless. Customers expect organizations to respond to their needs immediately, with services that are tailored to them. New competitors appear out of nowhere and reshape markets overnight. Global events cause demand to surge in one area and evaporate in another, creating pressure on every aspect of business, requiring the ability to adapt and perform in real-time.
In this environment, success requires more than size and scale. It requires using applications and data to deliver rich, personalized experiences; to get the right data to the right person at the right time—no matter where it’s stored. And to do it all with greater efficiency, security, and speed. In an age when businesses are trying to disrupt the world, and the world is disrupting business, organizations have to move faster, smarter, and with greater operational efficiency…or risk being left behind.
Cloud is reshaping the way EVERYTHING is done in today’s world. We see it in our personal lives, where we expect a graceful operation between our own devices and what’s happening “out there” in the cloud. Why should an enterprise organization be any different? They need that graceful operation to ensure the speed and efficiency they’ll need to keep up. For a business, cloud is key – and the only way to move fast is to use the cloud, operate like a cloud, or both.
And when it’s both, organizations need the best of both worlds. They don’t have time to figure out how to do it one way in their own data center, and then a completely different way in the cloud. In short, they need the same way to acquire, consume and operate no matter where they are.
The demand for cloud has never been higher.
- A report from Canalys on Q3 spending shows a significant jump in worldwide cloud spending, up 33%.
- IDC expects that by the end of 2021, 80% of enterprises will put a mechanism in place to shift to cloud-centric infrastructure and applications twice as fast as before the pandemic.
- For NetApp, Q1 FY21 earnings showed our cloud business grew 192% YoY.
To instantly adapt to a rapidly changing landscape, our customers need to have access to the right data, at the right time, in the right place—at the right pace.
- We see organizations looking at solutions from cloud providers for two important reasons:
- First is to lower their I.T. cost: they're facing economic challenges and want to move to cloud as a mechanism to get a more efficient and agile I.T. infrastructure.
- Second is the shift to digital: people want to get new innovations to change their business model.
- The future of innovation lies in our ability to harness the power of the data that is available to us, and to act on that data to transform. Companies that do this effectively will thrive.
- Whether it’s a retailer looking at e-commerce, a financial institution looking for new ways to use data to identify business opportunities, or a manufacturer using sensor technology and I.T. to change their manufacturing shop floor, all of these eventually boil down to unlocking new business models using the power of data.
- Every customer is in a different place on their journey to cloud, with a different set of imperatives and challenges. But across the board, a few things are clear, as we’re hearing directly from our customers every day: data is at the heart of everything our customers do.
The rate of innovation in the cloud software industry is accelerating at an unprecedented pace. There are many benefits to all of this exciting innovation, like pushing the bar on things that used to require specialized hardware that can now be done exclusively in software. It truly is an exciting time to be a software engineer working in cloud.
However, there is a critical factor to consider with all of this change and innovation, and that is getting technology to a production-ready state in a rapidly changing innovation landscape. It takes time to make a product stable, scalable and secure. By the time that happens, it seems that the industry has moved on to greener pastures, and the production-ready technology appears old and stale. Could we be innovating ourselves out of production environments?
This talk will share practical steps on how to maintain production-ready quality when chasing after the next ‘shiny new thing’ in cloud innovation.
I've seen so many developers fail when trying to design new apps for the cloud, or when needing to move an existing app there. In this (short) session we'll talk about some (mostly non-technical) topics to consider in order to think like a Cloud Architect.
As engineers, once we start having more than one (micro)service or product in our architecture, we think about sharing code, functionality and having seamless user experiences between systems; that’s the start of a platform! But we have so many decisions to make:
* What features are part of the platform, and when?
* Do I go lightweight with drop-in libraries that are quick to adopt or heavyweight like frameworks for a better developer (and user) experience?
* How do I make my platform extensible and maintainable?
* How can I address the classic hockey-stick adoption pattern on my services?
* How does Conway's Law apply to the platform?
In this talk we describe a number of patterns (and anti-patterns) for designing a platform that we’ve seen and implemented, both in industry and as part of the platform powering Atlassian Cloud.
Roman Stanek, founder and CEO of GoodData, has founded three SaaS companies over the past 22 years. His first two companies, NetBeans and Systinet, both ended in successful exits, including a sale to Sun Microsystems and one of the most successful acquisitions in the web services/SOA space. GoodData is currently experiencing rapid growth, including a 33% expansion across the entire customer base in Q4 2020, a 9x increase in the number of self-service accounts in 2020, and the signing of our largest expansion deal yet, a $14 million contract –– all critical metrics as GoodData continues to surge and provide customers with high-quality data analytics and insights.
Until now, there’s been little market pressure for BI to adapt to modern devops tooling and best practices like CI/CD, DataOps, GitOps and others. Popular BI tools often offer a “real time BI optimized” architecture that removes the analytical storage layer to reduce ETL latencies. Unfortunately, in most cases, the analytical capabilities are severely limited in the “real-time-optimized” mode. Roman and the GoodData team just released GoodData Cloud Native after two years of engineering work — the first solution to deliver enterprise-grade analytics as a microservices-based stack. Roman can speak to how to identify not just today’s market need but tomorrow’s — and how to turn those insights into the next phase of your roadmap. For GoodData, that looked like putting analytics on equal footing with core business operations like app dev, and committing to a headless BI structure that delivers scalable, real-time data to everyone who needs it.