KEYNOTES & FEATURED
Tuesday, September 14, 2021
Developers have always expected databases to work out of the box, but historically the opposite has been true.
With the rise of Kubernetes StatefulSets and CRDs, we started thinking about running databases on Kubernetes. But why would you do that in the first place? How hard is it? What are the challenges? Is it production-ready yet? All of these questions will be answered during a live demo in which we deploy a database, deploy an operator, fail nodes, and scale up and down with nearly no manual intervention.
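The demo flow described above typically rests on a Kubernetes StatefulSet, which gives each database replica a stable identity and its own persistent storage. A minimal sketch, where the name `my-db`, the `postgres:13` image, and the storage size are illustrative placeholders, not details from the session:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db                 # hypothetical database cluster name
spec:
  serviceName: my-db
  replicas: 3                 # scaling up or down changes this field
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
        - name: db
          image: postgres:13  # example image; any database engine fits here
  volumeClaimTemplates:       # stable, per-replica persistent storage
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

In practice an operator manages a resource like this on your behalf, reconciling replica count, failover, and storage when nodes fail or the cluster is resized.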
Why are enterprise organizations making a move from on-premise solutions to completely cloud-native? What does that mean for improving, scaling, and securing their CI/CD pipelines? And what exactly is continuous packaging, anyway?
Join Cloudsmith’s Dan McKinney in this session as he answers all of these questions, helping attendees understand the true difference between cloud-hosted and cloud-native, how to get started with migrating to a cloud-native solution, and the true benefits of being entirely within the cloud.
Your company’s “digital transformation” will be driven by new application designs and methods, new technology stacks, and new processes. To master it, and to deliver next-generation services through it, massively complex sets of signals and data must be leveraged, processed, and acted on. Developers need integrated data and insights that cut through that noise, while still being able to use their tools of choice. All of this must be managed despite massive rates of change and innovation. The challenge is determining who or what is going to do that work, where the work gets done, and how the business benefits from it. This session focuses on methods to overcome the complexity of digital transformation in the cloud and drive operational maturity despite constant change across applications, digital services, and products.
The new LAMP (Linux, Apache, MySQL, PHP) is a collection of modern, developer-friendly APIs. The first generation of enterprise APIs was designed to expose slow-moving legacy apps; modern APIs must move at the pace and scale of microservices. This offers a huge opportunity to modernize internal systems to be API-first and developer-friendly. In this session, the speaker will consider the relevance of internal vs. external APIs for refactoring legacy apps. Attendees will learn to build a catalog of internal APIs to use as building blocks when developing new apps, and discover how to navigate the noisy market of API offerings to find the best-fit solution.
The CNCF project OpenTelemetry is increasingly becoming the standard for getting reliable and consistent application and machine data to your monitoring and observability tools. Many organizations are realizing the power of decoupling their metric, log, trace, and span data collection from their monitoring stack, giving them more freedom and capability to improve the observability of their applications, and allowing them to be more consistent and confident in supporting those applications. In this session, learn about:
1. What OpenTelemetry is
2. The architecture of the OpenTelemetry Collector (OTel Collector)
3. How to build a strategy around OpenTelemetry
4. How to get started with OTel
Standardizing on OpenTelemetry makes your applications more observable and helps your organization implement better observability and monitoring practices.
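To make the Collector architecture in point 2 concrete: an OpenTelemetry Collector configuration wires receivers, processors, and exporters into pipelines. A minimal sketch, with component choices that are illustrative rather than prescriptive:

```yaml
receivers:
  otlp:              # accept OTLP telemetry over gRPC and HTTP
    protocols:
      grpc:
      http:

processors:
  batch:             # batch telemetry before export to reduce overhead

exporters:
  logging:           # write telemetry to the Collector's own log (handy for demos)

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```

Swapping the `logging` exporter for a vendor or backend exporter is the decoupling the session describes: the application emits OTLP once, and the Collector decides where it goes.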
For nearly thirteen years, Amazon Web Services has offered .NET developers the ability to host their workloads in the cloud, and over that time has extended that support across many AWS services. In this session, we will explore the broad range of support AWS offers .NET developers. From your favorite development environment, to the most cost-effective and high-performance hosting environments, to the operational tools you use for deployment and management, this session explores how you can leverage your existing skills in your AWS environment.
Wednesday, September 15, 2021
There are many, many resources for DevOps engineers: learning paths, guides and tutorials for using tools such as Terraform, Packer and Ansible to save time in provisioning and configuring reliable, predictable systems. This session looks at the other side of the equation: creating the plugins, modules and providers that abstract away upstream APIs for use by DevOps tools.
Director of Developer Evangelism Pat Patterson will explain how Citrix implemented DevOps tooling for its App Delivery & Security products, and how the company is working with its community to create tooling for its Virtual Apps & Desktops Service. Pat will explain the different approaches to creating tooling, trade-offs between them, and the lessons that Citrix has learned along the way. This session will NOT be death-by-PowerPoint! Come prepared for semi-colons, curly braces and monospaced text!
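As a flavor of the kind of tooling the session discusses, a team consuming a published Terraform provider declares it in a `required_providers` block. A sketch, where the `citrix/citrixadc` source address should be verified against the Terraform Registry and the endpoint value is a hypothetical placeholder:

```hcl
terraform {
  required_providers {
    citrixadc = {
      source  = "citrix/citrixadc"   # provider source address; confirm in the Registry
      version = ">= 1.0"             # illustrative version constraint
    }
  }
}

provider "citrixadc" {
  endpoint = "https://adc.example.com"  # hypothetical ADC management endpoint
}
```

Building the provider behind such a block, i.e. the Go plugin that maps these declarations onto upstream APIs, is the "other side of the equation" the session covers.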
Today’s pace of change is relentless. Customers expect organizations to respond to their needs immediately, with services that are tailored to them. New competitors appear out of nowhere and reshape markets overnight. Global events cause demand to surge in one area and evaporate in another, creating pressure on every aspect of business, requiring the ability to adapt and perform in real-time.
In this environment, success requires more than size and scale. It requires using applications and data to deliver rich, personalized experiences; to get the right data to the right person at the right time—no matter where it’s stored. And to do it all with greater efficiency, security, and speed. In an age when businesses are trying to disrupt the world, and the world is disrupting business, organizations have to move faster, smarter, and with greater operational efficiency…or risk being left behind.
Cloud is reshaping the way EVERYTHING is done in today’s world. We see it in our personal lives, where we expect a graceful operation between our own devices and what’s happening “out there” in the cloud. Why should an enterprise organization be any different? They need that graceful operation to ensure the speed and efficiency they’ll need to keep up. For a business, cloud is key – and the only way to move fast is to use the cloud, operate like a cloud, or both.
And when it’s both, organizations need the best of both worlds. They don’t have time to figure out how to do it one way in their own data center, and then a completely different way in the cloud. In short, they need the same way to acquire, consume and operate no matter where they are.
The demand for cloud has never been higher.
- A report from Canalys on Q3 spending shows a significant jump in worldwide cloud spending, up 33%.
- IDC expects that by the end of 2021, 80% of enterprises will put a mechanism in place to shift to cloud-centric infrastructure and applications twice as fast as before the pandemic.
- For NetApp, our Q1 FY21 earnings showed our cloud business growing 192% YoY.
To instantly adapt to a rapidly changing landscape, our customers need to have access to the right data, at the right time, in the right place—at the right pace.
- We see organizations looking at solutions from cloud providers for two important reasons:
  - First, to lower I.T. costs: facing economic challenges, they want to move to the cloud to gain a more efficient and agile I.T. infrastructure.
  - Second, the shift to digital: people want new innovations that change their business model.
- The future of innovation lies in our ability to harness the power of the data that is available to us, and to act on that data to transform. Companies that do this effectively will thrive.
- Whether it’s a retailer looking at e-commerce, a financial institution looking for new ways to use data to identify business opportunities, or a manufacturer using sensor technology and I.T. to change the manufacturing shop floor, all of these eventually boil down to unlocking new business models using the power of data.
- Every customer is in a different place on their journey to cloud, with a different set of imperatives and challenges. But across the board, a few things are clear, as we’re hearing directly from our customers every day: data is at the heart of everything our customers do.
The rate of innovation in the cloud software industry is accelerating at an unprecedented pace. This exciting innovation brings many benefits, like raising the bar on things that used to require specialized hardware but can now be done entirely in software. It truly is an exciting time to be a software engineer working in cloud.
However, there is a critical factor to consider amid all of this change and innovation: getting technology to a production-ready state in a rapidly changing landscape. It takes time to make a product stable, scalable, and secure, and by the time that happens, the industry often seems to have moved on to greener pastures, leaving the production-ready technology looking old and stale. Could we be innovating ourselves out of production environments?
This talk will share practical steps on how to maintain production-ready quality when chasing after the next ‘shiny new thing’ in cloud innovation.
Roman Stanek, founder and CEO of GoodData, has founded three SaaS companies over the past 22 years. His first two companies, NetBeans and Systinet, both ended in successful exits: a sale to Sun Microsystems and one of the most successful acquisitions in the web services/SOA space. GoodData is currently experiencing rapid growth, including 33% expansion across its entire customer base in Q4 2020, a 9x increase in the number of self-service accounts in 2020, and the signing of its largest expansion deal yet, a $14 million contract, all critical metrics as GoodData continues to surge and provide customers with high-quality data analytics and insights.
Until now, there has been little market pressure for BI to adapt to modern DevOps tooling and best practices like CI/CD, DataOps, and GitOps. Popular BI tools often offer a “real-time-BI-optimized” architecture that removes the analytical storage layer to reduce ETL latencies; unfortunately, in most cases, analytical capabilities are severely limited in this mode. After two years of engineering work, Roman and the GoodData team just released GoodData Cloud Native, the first solution to deliver enterprise-grade analytics as a microservices-based stack. Roman can speak to how to identify not just today’s market need but tomorrow’s, and how to turn those insights into the next phase of your roadmap. For GoodData, that meant putting analytics on equal footing with core business operations like app dev, and committing to a headless BI architecture that delivers scalable, real-time data to everyone who needs it.