Wednesday, August 18, 2021
We had a dream: continuously releasable code, and functionality provided in more than one language without too much effort.
But in the past (like many before us), we didn’t always succeed. Our open-source codebase is available on two platforms, Java and .NET, and keeping the two always in sync and buildable was not an easy task. In the old days we did this manually, risking broken develop branches in both codebases, which meant getting a .NET release out could take a month (or more). There had to be a better way…
In this talk we’ll share how we overcame these hurdles, and how we transitioned from a manually tested and ported codebase to an automated system where develop is always green!
Our automated system is not yet final. It’s continuously being improved, and we still have many ideas… We’ll share some of these, such as introducing build agents in the cloud, which would enable us to run tests on different platforms and configurations almost effortlessly.
Guided by a timeline, we’ll walk step by step through how we achieved this by introducing different tooling, some in-house and some external. The main part of our talk will be about our Merge Pipeline (trademark pending), based on Jenkins, which forms the backbone of our fully fledged automated system.
We’ll share its internal details and explain how it handles the different steps needed to get a Java branch merged and automatically ported into the Java and .NET develop branches.
It’s designed so that as little time as possible is wasted: steps run in parallel, and the pipeline keeps track of what has and hasn’t already run. Code that does not pass Sonar will not make it into develop. Functional tests live in a separate repository but travel together with the code they test.
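The flow described above can be sketched in miniature. This is an illustrative toy, not the actual Jenkins implementation: the step names are hypothetical, and the in-memory set stands in for whatever per-build bookkeeping a real pipeline would persist. It shows the two ideas mentioned: independent steps running in parallel, and completed steps being skipped on a re-run.

```python
# Toy merge-pipeline orchestrator: parallel steps plus "what already ran"
# bookkeeping. Step names and the in-memory COMPLETED set are illustrative.
from concurrent.futures import ThreadPoolExecutor

COMPLETED = set()  # a real pipeline would persist this per build

def run_step(name):
    """Run one pipeline step, skipping it if it already passed."""
    if name in COMPLETED:
        return f"{name}: skipped (already passed)"
    # ... invoke the real build/port/test tool here ...
    COMPLETED.add(name)
    return f"{name}: ok"

def run_stage(steps):
    """Run the independent steps of one stage in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_step, steps))

# Stage 1: building and porting can run side by side.
print(run_stage(["build-java", "port-to-dotnet"]))
# Stage 2: quality gates before anything reaches develop.
print(run_stage(["sonar-gate", "functional-tests"]))
# A re-run skips steps that already passed.
print(run_stage(["build-java"]))
```

The parallelism saves wall-clock time within a stage, while the completed-step record means a pipeline restarted after a transient failure only re-runs what it must.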
We firmly believe that others would benefit from learning about the steps we took along the way, and from seeing how simple tooling can be used to introduce a merge pipeline that helps the development process at any company.
Enterprise blockchain projects more than doubled from 2019 to 2020, and industry analysts expect use cases to keep growing at the same pace year after year. While blockchain technology has moved beyond the hype, it still has a way to go before mass adoption, and building these products is easier said than done. With Blockchain.com valued at $5.2B and the number of wallet users continuing to grow rapidly, Lewis will discuss how to build high-performing blockchain applications aligned with company growth: from the challenges companies face when building products on top of blockchain and how to overcome them, to best practices for reaching and surpassing user milestones.
Thursday, August 19, 2021
AI has transcended its theoretical existence to become part of our day-to-day lives, and most people encounter it from morning to night. Some examples:
* The e-tailer Amazon is one way many people are exposed to AI regularly: its recommendation algorithms learn what we like, and what other people similar to us have purchased, to suggest items we are likely to buy.
* Digital voice assistants like Amazon Alexa are quickly becoming part of our lives. They use NLP and AI-driven answer generation to respond to our questions.
And so on. But there are many other areas that could be improved by leveraging AI.
One such area of improvement is crowd management in Rapid Transit Systems (Metro).
Metro train ridership has grown significantly over the past decades, and this growth is expected to continue.
Crowding at train and metro stations is therefore experienced more frequently, resulting in safety issues, decreased comfort, and increased total travel times.
As a high-capacity mode of public transportation, Metro Rail Transit systems in many cities operate above their intended capacity.
Despite numerous efforts to implement effective crowd-control schemes, operators still fall short of containing the crowds and long queues that form, increasing the time it takes passengers to reach the platforms.
In this workshop, we’ll see how AI and cloud platforms can be leveraged to manage crowds in Metro stations and on trains.
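As a flavour of what such a system might do (this sketch is illustrative only, not taken from the workshop): forecast the next interval’s platform crowd from recent gate counts, and flag intervals that exceed a safe-capacity threshold. A real deployment would use a trained model fed by live gate and camera data on a cloud platform rather than this simple moving average.

```python
# Illustrative crowd-management sketch. SAFE_CAPACITY and the counts are
# made-up numbers; a real system would use a trained model and live feeds.
SAFE_CAPACITY = 500  # hypothetical passengers a platform can safely hold

def forecast_next(counts, window=3):
    """Moving-average forecast of the next interval's passenger count."""
    recent = counts[-window:]
    return sum(recent) / len(recent)

def overcrowding_alert(counts):
    """True when the forecast exceeds the safe platform capacity."""
    return forecast_next(counts) > SAFE_CAPACITY

counts = [320, 410, 480, 560, 610]  # entries per 5-minute interval
print(forecast_next(counts))        # → 550.0
print(overcrowding_alert(counts))   # → True
```

When the alert fires, a station could throttle entry gates or reroute passengers before the platform becomes unsafe.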
For the most flexible, powerful stream processing engines, it seems like the barrier to entry has never been higher than it is now. If you’ve tried, or have been interested in leveraging the strengths of real-time data processing - maybe for machine learning, IoT, anomaly detection or data analysis - but you’ve been held back: I’ve been there, and it’s frustrating. And that’s why this talk is for you.
That being said, this talk is also for you if you ARE experienced with stream processing but you want an easy (and if I say so myself, pretty fun) way to add some of the newest, bleeding edge features to your toolbelt.
This session will be about getting started with Flink SQL. Apache Flink’s high-level SQL language has the familiarity of the SQL you know and love (or at least, know…), but with powerful new functionality and, of course, the benefit of being usable from both Flink and PyFlink.
More specifically, this will be a pragmatic introduction to creating data pipelines with Flink SQL, as well as a sneak peek at some of its newest and most interesting features.
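To give a taste of what such a pipeline looks like, here is a minimal Flink SQL sketch (the table names, columns, and window size are illustrative, not from the session): a source table backed by Flink’s built-in `datagen` connector, a continuous windowed aggregation, and a `print` sink.

```sql
-- Source table of synthetic orders with an event-time watermark.
CREATE TABLE orders (
    order_id BIGINT,
    amount   DOUBLE,
    ts       TIMESTAMP(3),
    WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH ('connector' = 'datagen');

-- Sink that prints results to stdout, handy for experimenting.
CREATE TABLE order_totals (
    window_start TIMESTAMP(3),
    total        DOUBLE
) WITH ('connector' = 'print');

-- Continuously aggregate order amounts into one-minute tumbling windows.
INSERT INTO order_totals
SELECT TUMBLE_START(ts, INTERVAL '1' MINUTE), SUM(amount)
FROM orders
GROUP BY TUMBLE(ts, INTERVAL '1' MINUTE);
```

The same statements can be submitted from the Flink SQL client or passed as strings to PyFlink’s `TableEnvironment`, which is what makes the SQL layer such a gentle entry point.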