Tuesday, September 14, 2021
Too often we encounter the idea that software architecture is an esoteric concept which only the chosen ones, and only at the right time, are allowed to discuss. Well, how about a little change of perspective? With software development and users' needs evolving so fast, we can't afford the luxury of rewriting systems from scratch just because teams fail to understand what they are building. Today's software developers are tomorrow's architects. We must challenge them to step away from the IDE and understand how the architecture evolves in order to create a common and stable ground in terms of quality, performance, reliability, and scalability. At the same time, software architects need to step away from the abstractions and stay connected to the reality of project development. This session revolves around finding the right ways of intertwining up-front architecture, API design & coding while maintaining a continuous focus on architecture evolution.
The journey to No-Ops (No Operations) always begins with two key objectives:
• Extreme automation
• No “dedicated” infrastructure teams, ever!
In this era of tech evolution, even though we are already in the middle of the Industry 4.0 revolution, there is no unified, singular framework for adopting No-Ops. Everyone has a different take on what No-Ops means to them. While for some the idea of evolving their systems toward minimal operations is exciting, for others it's more a way to refine team management and channel their efforts toward development. Whatever it may be, losing operations specialists entirely is still a distant dream. Maybe we are so dependent on Managed-Ops that unplugging it in any major way is a nightmare to even think of.
CD tooling continues to grow, with a plethora of extensions made available to the DevOps ecosystem. Yet even though we have achieved and reaped quantifiable benefits from this implementation, scaling it across organizational divisions is becoming a visible challenge.
Even after introducing more evolutionary controls, such as templated provisioning and orchestration, hardened integrations and connectors, and extended, deep monitoring systems, uncertainty and unreliability remain common problems across transformation scoring charts.
Revolutionary practices like Chaos Engineering, Auto-Enabled SRE, and AIOps are creating aspirational backlogs for BUs that are still struggling to manage their existing implementations.
With Operations (Ops) amalgamated into development, and with transformational approaches such as microservices and containerization, applications are indeed becoming more complex to manage as well.
Current Ops management is already beyond the scope of manual effort, and it will only get worse in the years to come as application complexity grows.
It's time to introduce a friend that DevOps and CD automation have needed for a long time. Welcome, No-Ops!
A few key highlights of the talk:
1. In the DevOps Ecosystem
a. With Speed & Agility comes Responsibility
b. The Human limitations aspect of evolution
2. DevOps + AIOps – How will the match be?
3. AIOps – A Few Key Enablers
a. Market Analysis – what the future holds
b. How to integrate with your current tools
c. The Entire Framework
d. How DevOps is to be extended, properly, with AIOps
e. How it enables SRE Teams
f. Interesting Use Cases
4. The Road Ahead
Persistent storage is one of the most difficult challenges to solve for Kubernetes workloads, especially when integrating with continuous deployment solutions. This session will give the audience an overview of how to address persistent storage for stateful workloads the Kubernetes way, and how to operationalize it with a common CD practice like GitOps.
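As a rough illustration of the approach the abstract describes (a sketch under assumed names, not material from the session itself): "the Kubernetes way" for stateful workloads typically means a StatefulSet whose volumeClaimTemplates give each replica its own PersistentVolumeClaim, and "operationalizing with GitOps" means committing that manifest to a Git repository watched by an agent such as Argo CD or Flux. The image, paths, and StorageClass below are hypothetical examples.

```yaml
# Hypothetical manifest, e.g. apps/db/statefulset.yaml in a Git repo
# reconciled by a GitOps agent (Argo CD, Flux, or similar).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:13            # example image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                 # one PVC is created per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard      # assumes this StorageClass exists
        resources:
          requests:
            storage: 10Gi
```

Because the PVCs are declared in Git rather than created imperatively, storage changes go through the same pull-request-and-reconcile loop as the rest of the deployment.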
The data revolution is upon us, and, well, has been for several years. It comes as no surprise that as application technology has evolved to keep up with the ever-increasing expectations of users, data platforms and solutions have had to as well. A decade or so ago we thought all our problems had been solved by a new player in the game, NoSQL. But, spoiler alert, they weren't.
In this session we're going to dive into a brief history of data. We'll examine its humble beginnings, where we stand today, and how modern relational databases will shape the cloud landscape going forward. Throughout the journey you'll gain an understanding of how SQL and relational databases have adapted to pave the road for a truly bright future.
Your company’s “digital transformation” will be driven by new application designs and methods, new technology stacks, and new processes. To master it, and to deliver next-generation services through it, massively complex sets of signals and data need to be leveraged, processed, and acted on. Developers need integrated data and insights that cut through that noise, while still being able to use their tools of choice. All of this must be managed despite massive rates of change and innovation. The challenge is determining who or what is going to do that work, where the work gets done, and how the business benefits from it. This session focuses on methods to overcome the complexity of digital transformation in the cloud and drive operational maturity despite constant change across applications, digital services, and products.
As companies transition to hybrid cloud, they are faced with complex decisions about choosing a strategic cloud partner who can support their growth at an affordable cost. Now more than ever, buyers are highly educated about the technology they need to scale their business. That’s why many value a partner who will make decisions that are right for their customers; a partner who’s invested in supporting their growth.
We will discuss how Vultr – the largest privately owned global cloud provider outside of the Big 3 clouds, supporting over 1.3 million customers – believes developers and businesses should feel the freedom of the cloud and be empowered to do what they do best: develop and build a company.
Event-driven, real-time development in the cloud is a major part of many organizations’ digital transformation initiatives and businesses realize that data is the currency of competitive advantage. Event-driven applications must consume, enrich, and deliver data securely in real-time, and efficiently at scale. Therefore, the size of data packets, speed and frequency of data transmission and update, and the “intelligence” of data handling, are critical to successfully running mission-critical, corporate applications and making time-sensitive business decisions.
The core expertise of many companies lies in the development of their business applications, not in developing streaming data technology. As organizations everywhere move to the cloud, the demand for the dynamic enrichment, management and security of real-time, inflight data is critical. The fundamental challenge of developing event-driven, real-time applications and systems for the cloud, is managing the complexity of the end-to-end journey from sources to recipients of the highly “perishable” data – fast, reliably, securely, often in large volume, and sometimes to many recipients (hundreds of thousands of applications, systems, and devices concurrently). This talk will highlight how an Intelligent Event Data Platform enables organizations to accelerate innovation and deliver game-changing, real-time applications to market faster, while significantly reducing the cost of software development and operations.
In today’s fast-paced business and technology environments, an organization should never find itself boxed in by limited options for adapting to changing requirements or improving its workload strategy.
The Five Pillars of the AWS Well-Architected Framework—Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization—provide a way to consistently measure operations and architectures, identify areas for improvement, and respond to evolving requirements or external issues. The goal of the framework is to help architects learn the process of making informed, value-add decisions that reflect the organization’s priorities.
In this Q&A session, Excellarate’s Mike Watson hosts Hamdy Eed, an AWS Senior Solution Architect, for a lively discussion about putting the pillars into practice. They’ll explore how to navigate tradeoffs, a crucial function of the framework in guiding organizations through the process of shifting focus and priority among the pillars as needed. And Mike will ask Hamdy to talk about the latest tools and innovations available in the market to augment the implementation of each pillar.
Walk away with a better understanding of how the AWS Well-Architected Framework can help you:
~Design and implement scalable architectures that align with AWS best practices
~Effectively utilize computing resources to maintain efficiency when system requirements change or technologies evolve
~Expand your options with a structure that weighs priorities and adds business context when evaluating the trade-offs of each decision
For nearly thirteen years, Amazon Web Services has offered the ability for .NET developers to host their workloads in the cloud, and over that time has extended that support to many of AWS’ services. In this session, we will explore the broad range of support AWS has to offer .NET developers. From supporting your favorite development environment, to the most cost-effective and high performance hosting environment, to the operational tools you use for deployment and management, this session explores how you can leverage your skills in your AWS environment.
Wednesday, September 15, 2021
Most organizations considering open source and open core cloud technologies understand they need to rigorously evaluate the software’s licensing terms and gauge the long-term health of its community and ecosystem. What still happens less frequently – but is just as crucial to these risk assessments – is developing a thorough understanding of the business models governing the commercial organizations attached to each solution being considered. You must discern the underlying motivations of the vendors or technology providers you depend on to deliver or support open source data-layer software (as well as those vendors with strong influence over its development and maintenance). By acutely understanding these incentives, you can identify if, where, and how they may map to possible risks to your enterprise’s adoption and ongoing open source implementation. Don’t limit the assessment to licenses and community health – although both remain key variables.
This session will discuss specifics on what you need to look for and consider when vetting open source technologies in the cloud as offered by:
-- Businesses using OSS as the foundation of their own intellectual property
-- Businesses that maintain total control over the OSS they offer
-- Major cloud providers