Session Stage 6
Wednesday, August 18, 2021
SaaS is not for all! Many organizations are prohibited from storing data outside their own data centers, are worried about security, or are constrained by strict change control!
The solution is a hybrid architecture that lets you deploy your SaaS offering on-premises. While there are well-known examples such as AWS Outposts or Google Anthos, the fact is that not everyone is Google or Amazon! Join my session, where I share how we at Dynatrace moved from SaaS to a hybrid offering that includes an on-premises deployment. I discuss my top 3 aspects of a successful hybrid implementation: pro-active support, automated update delivery, and a zero-configuration approach, and how our “Mission Control” takes care of these. This talk should inspire you to expand your software offering from SaaS to on-premises with the lowest total cost of ownership!
Software is at the heart of many products that we use today, from consumer devices to mobile applications. The way we deliver software has changed as well now that we work in the cloud. Every developer has become a little DevSecOps engineer, expected to deliver secure, robust and scalable software. CI/CD technology was supposed to make the journey from coding to production much faster, and processes were supposed to be fully automated. But are we really doing it right?
In this session I will talk about current trends in developer-centric security, best practices for implementation and some of the lessons learned from creating a shift-left cloud security product.
A single cloud is no longer enough; it is time for multi-cloud! In this session, the latest trends and technologies related to multi-cloud will be presented. However, technology alone is not enough, so real business case studies and their benefits will be presented as well. Finally, future research directions will be briefly described. The session is based on more than 5 years of commercial and research experience.
Should you always run your cluster in multiple availability zones? How can a transition rule on S3 double your storage costs? I want to monitor and understand my data transfer costs, where should I start? Following so-called “best practices” works only when you fully understand the implications, costs included. We will discuss a few cloud anti-patterns, making your bill smaller and your deployment better. And possibly reducing your cloud carbon footprint too.
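To make the S3 question above concrete, here is a back-of-the-envelope sketch in Python. The prices, overheads, and object counts are illustrative assumptions, not current AWS list prices; the point is that per-object transition fees and per-object metadata overhead can make a Glacier transition *more* expensive for buckets full of small objects:

```python
# Illustrative prices: assumptions for this sketch, NOT current AWS list prices.
STANDARD_PER_GB_MONTH = 0.023    # S3 Standard storage, USD per GB-month
GLACIER_PER_GB_MONTH = 0.0036    # Glacier-class storage, USD per GB-month
TRANSITION_FEE_PER_1000 = 0.05   # lifecycle transition requests, USD per 1,000
GLACIER_OVERHEAD_KB = 32         # assumed per-object overhead billed at Glacier rates
INDEX_OVERHEAD_KB = 8            # assumed per-object index billed at Standard rates

objects = 10_000_000             # a bucket full of small objects
object_kb = 10                   # 10 KB each


def gb(kb: float) -> float:
    """Convert kilobytes to gigabytes."""
    return kb / (1024 * 1024)


# Monthly bill if the objects simply stay in S3 Standard.
standard_monthly = gb(objects * object_kb) * STANDARD_PER_GB_MONTH

# One-time bill for transitioning every object via a lifecycle rule.
transition_fee = objects / 1000 * TRANSITION_FEE_PER_1000

# Monthly bill after transition: payload plus per-object overhead.
glacier_monthly = (
    gb(objects * (object_kb + GLACIER_OVERHEAD_KB)) * GLACIER_PER_GB_MONTH
    + gb(objects * INDEX_OVERHEAD_KB) * STANDARD_PER_GB_MONTH
)

print(f"Standard: ${standard_monthly:,.2f}/month")
print(f"Glacier:  ${glacier_monthly:,.2f}/month, after a ${transition_fee:,.0f} transition bill")
```

Under these assumed numbers the "cheaper" storage class costs more per month than Standard, before even counting the transition bill, which is exactly the kind of anti-pattern the session discusses.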
We live in the world of the visual web. Images and videos are consumed by millions of people every single day. Digital assets represent about 75% of an average website, which often leads to performance bottlenecks. During the talk we will investigate the history of the visual web, discuss its impact on performance, and showcase how developers can enhance the performance of their website(s) using the various media optimisation and transformation techniques available on the web today.
In 2020, OpenAI launched GPT-3, an autoregressive language model with 175 billion parameters that uses deep learning to produce human-like text.
In 2021, Google AI open-sourced Switch Transformer, an artificial intelligence language model with 1.6 trillion parameters.
How do these developments affect the tech industry?
Data-driven is how tech works right now, and the next step is becoming AI-driven. This applies to every field, from meditation apps through investment robots to cloud infrastructure. It won't stop there: new fields being disrupted by tech, like legal-tech, med-tech and others, are also AI-driven.
Bringing innovation to your work means keeping up with these changes across all the tech teams - product, software, DevOps, QA, etc. In this talk we will cover the current state of AI and how you can make your product and teams future-compatible.
In this talk, I will walk through how to set up and run continuous SQL queries against Pulsar topics using Apache Flink. We will walk through creating Pulsar topics and schemas, and publishing data.
We will then cover consuming Pulsar data, joining Pulsar topics and inserting new events into Pulsar topics as they arrive. This basic overview will show hands-on techniques, tips and examples of how to do this using Pulsar tools.
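As a sketch of what such queries can look like, here is Flink SQL of the kind the session describes, assuming a Pulsar connector for Flink SQL is on the classpath. The topic names, URLs, column names, and connector option spellings are illustrative assumptions and vary between connector versions:

```sql
-- Declare a Pulsar topic as a Flink SQL table (options are illustrative).
CREATE TABLE clicks (
  user_id    STRING,
  url        STRING,
  event_time TIMESTAMP(3),
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector'   = 'pulsar',
  'topic'       = 'persistent://public/default/clicks',
  'service-url' = 'pulsar://localhost:6650',
  'admin-url'   = 'http://localhost:8080',
  'format'      = 'json'
);

-- Join against another Pulsar-backed table and write new events out as
-- they arrive (assumes `users` and `enriched_clicks` are declared similarly).
INSERT INTO enriched_clicks
SELECT c.user_id, u.name, c.url, c.event_time
FROM clicks AS c
JOIN users AS u ON c.user_id = u.user_id;
```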
In this talk, I provide some insights into the growth of software testing. Starting from its early days, we finish by discussing what we may face in the future. The key takeaway is to be ready for next-generation testing activities (AI-supported testing and others).
We all observe that software testing continues to grow, proving that it is a living organism. Software testing processes were first adopted into the Software Development Life Cycle (SDLC) in waterfall approaches: at the end of the development activities, verification and validation are performed to check the product before shipment to customers. What was the problem with the Waterfall methodology? Testing activities were scheduled at the end of the timeline, and testers ran out of time whenever earlier activities slipped.
Then, in agile methodologies, we see testing activities in all phases of the Software Development Life Cycle (SDLC), starting from the first sprint. At this point, the challenges are bigger, since it is a very dynamic environment with many changes in a short time.
To cope with a complex scope that must be verified in limited time, automated testing started to appear in our lives. Nowadays we meet lots of “Continuous X” terms, such as Continuous Integration, Deployment and Testing. Can we go home and get some rest once we automate all the cases? Of course not. We have to keep tracking test results, dealing with flaky tests and holding quality to a high standard. We still have many manual tasks in healing, maintenance and analysis.
Nowadays, researchers are looking at adapting Machine Learning algorithms and other hot topics to testing processes, to reduce manual effort and improve quality. To sum up, the improvement of software testing never ends, but sometimes the growth confuses people. What is the deal with Scrum? Why are people crazy about continuous integration and continuous delivery (CI/CD)? What is the difference between Agile and DevOps? We will go over many questions of this kind.
The objective of the talk is to provide some insights into the growth of software testing. Starting from its early days, we will try to take in the big picture, and finally we will discuss what we may face in the future.
* Growth of software Testing
+ Replacement of manual activities with automation
+ Transition to Agile Methodologies
+ Adoption of DevOps
+ Machine Learning in Software Testing
* Wrap-Up & Questions
Take-aways: Key takeaways will be awareness of the software testing lifecycle and readiness for next-generation activities (AI-supported testing and others).
The concept of “progressive delivery” using feature flags has taken the world of software delivery by storm in recent years, but what does this mean for enterprise software development and operations teams, and how should they change their technology and practices? Like many things, applying progressive delivery in an enterprise setting is much easier said than done, with disparate technology and teams around the world.
This talk will cover the state of progressive delivery, the potential benefits and use cases unlocked by adding feature flags into the release management process, and technical considerations for creating CD pipeline integrity with shared feature flag management and control.
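Feature-flag checks of the kind this session covers are often implemented as deterministic percentage rollouts. Here is a minimal sketch in Python; the flag and user names are hypothetical, and real flag platforms layer targeting rules, audit trails, and remote configuration on top of this basic idea:

```python
import hashlib


def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) for a given flag.

    The same user always gets the same answer for the same flag, so a
    partially rolled-out feature does not flicker between requests.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent


# Gate a code path on a 25% rollout (flag name is hypothetical).
if is_enabled("new-checkout", "user-42", 25):
    print("serving the new checkout flow")
else:
    print("serving the old checkout flow")
```

Raising `rollout_percent` only ever adds users to the enabled set, which is what makes gradual, reversible rollouts possible across a shared CD pipeline.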
Thursday, August 19, 2021
Product Manager: a title that is very hard to explain, that most of the time comes with big responsibilities, and yet one that is easy to overlook. This is especially true on an Application Modernization journey, where the focus is on modernizing your legacy applications.
We are not delivering code or creating a prototype, and our job description is definitely NOT attending meetings all day. We believe there are Product Manager skills and a mindset that help the team succeed in building modern applications. In this talk, I will share why you need product management skills to increase the chances of success in your Application Modernization journey.
Since the emergence of Kubernetes, we have hoped that developers would adopt it. That did not happen, and it will likely never happen. Developers do not need Kubernetes. They need to write code, and they need an easy way to build, test, and deploy their applications. It is unrealistic to expect developers to spend years learning Kubernetes.
On the other hand, operators and sysadmins need Kubernetes. It gives them all they need to run systems at scale. Nevertheless, operators also need to empower developers to deploy their own applications. They need to enable developers by providing services rather than doing actual deployments.
So, we have conflicting needs. Kubernetes is necessary to some and a burden to others. Can we satisfy all? Can we have a system that is based on Kubernetes yet easy to operate? Can we make Kubernetes disappear and become an implementation detail running in the background?
Let's discuss where Kubernetes is going and what it might look like in the future.
AI has transcended its theoretical existence to become part of our day-to-day lives, and is encountered by most people from morning to night. Some examples are:
* The e-tailer Amazon is one way many people are exposed to AI regularly: its algorithms learn what we like, and what other people like us have purchased, in order to suggest items we would like to purchase.
* Digital voice assistants like Amazon Alexa are quickly becoming a part of our lives. They use NLP and AI-driven generators to return answers to us.
And so on. But there are many other areas which could be improved by leveraging AI.
One such area of improvement is crowd management in Rapid Transit Systems (Metro).
Metro train ridership has grown significantly over the past decades and this growth is expected to continue into the future.
Crowding at train and metro stations is therefore experienced more frequently, resulting in safety issues, decreased comfort levels, increased total travel times and so on.
As high-capacity public transportation, Metro Rail Transit systems have been operating above their intended capacity in city after city.
Despite numerous efforts to implement effective crowd control schemes, they still fall short of containing the formation of crowds and long lines, affecting how long passengers wait before they can proceed to the platforms.
In this workshop, let us see how AI and Cloud Platforms can be leveraged in managing the crowd in Metro Stations and in Trains.
When defining APIs, the most common considerations are what our payload looks like, seen from an implementer's perspective.
However, good APIs, whether they're internal or public, are far more than just a payload description, and they need a consumer's perspective.
In this session we look at what makes a good API: from OWASP Top 10 implications to ISO standards and data definitions, to how to make things easy for your consumers, why these points matter, and their implications. We’ll explore techniques to overcome the challenges seen when producing good APIs.
Whilst we all think we know how to define APIs, you’ll be surprised at the things that get overlooked or opportunities to be better.
Women around the world have been directly affected by the pandemic in more ways than one, especially women in technology. Lack of representation, lack of supplier diversity allocation, lack of access to investment, and increased displacement of women-held jobs all constitute a global crisis. Well, what does one do then? The answer begins with Fortune 1000 companies. Systemic change requires collective action by organizations large enough to influence and maintain change. Changing the norm requires Fortune 1000 companies to come together to acknowledge the problem and consciously take steps towards change.
In this session, I will highlight the gaps in the technology industry that prevent women from succeeding and creating economic impact; furthermore, the session will also focus on ways to bridge those gaps through collaboration and collective action.
For the most flexible, powerful stream processing engines, it seems like the barrier to entry has never been higher than it is now. If you’ve tried, or have been interested in leveraging the strengths of real-time data processing - maybe for machine learning, IoT, anomaly detection or data analysis - but you’ve been held back: I’ve been there, and it’s frustrating. And that’s why this talk is for you.
That being said, this talk is also for you if you ARE experienced with stream processing but you want an easy (and if I say so myself, pretty fun) way to add some of the newest, bleeding edge features to your toolbelt.
This session will be about getting started with Flink SQL. Apache Flink’s high-level SQL language has the familiarity of the SQL you know and love (or at least, know…), but with some powerful new functionality, and of course, the benefit of working with Flink and PyFlink.
More specifically, this will be a pragmatic entry into creating data pipelines with Flink SQL, as well as a sneak peek into some of its newest and most interesting features.
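One example of the newer functionality a pipeline like this can use is Flink's windowing table-valued functions, added in Flink 1.13. A sketch, assuming a `clicks` table with an `event_time` time attribute has already been declared (table and column names are hypothetical):

```sql
-- Count page views per URL over one-minute tumbling windows.
SELECT window_start, window_end, url, COUNT(*) AS views
FROM TABLE(
  TUMBLE(TABLE clicks, DESCRIPTOR(event_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end, url;
```

The same statement runs unchanged from the SQL client or from PyFlink's `TableEnvironment`, which is part of what keeps the barrier to entry low.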