Saturday, February 13, 2021
Finite state machines (FSMs) are one of the simplest models of computation, but it’s this simplicity that makes them so useful. They are expressive enough to cover a wide range of practical problems while remaining easy to reason about. I believe they should be more widely used, and the only reason they aren’t is that many developers know them only from a dusty theory of computation course. This talk sets out to change that!
I’ll first (re)introduce finite state machines as a model of computation. I’ll show that they are very simple, which makes them easy to understand, and therefore easy to create and debug. We’ll then see how finite state machines appear just about everywhere. We’ll see examples in the Scala standard library, and in application code from web services and user interfaces. We’ll finish by discussing implementation techniques and further applications.
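To make the simplicity concrete, here is a minimal sketch of an FSM in plain Scala: a door that is either Open or Closed, driven by Push and Pull events. The names and the door example are illustrative, not taken from the talk.

```scala
sealed trait DoorState
case object Open extends DoorState
case object Closed extends DoorState

sealed trait DoorEvent
case object Push extends DoorEvent
case object Pull extends DoorEvent

// The whole machine is a pure transition function: (state, event) => state
def step(state: DoorState, event: DoorEvent): DoorState =
  (state, event) match {
    case (Closed, Push) => Open
    case (Open, Pull)   => Closed
    case (s, _)         => s // any other input leaves the state unchanged
  }

// Running the machine over a sequence of events is just a fold:
def run(events: List[DoorEvent]): DoorState =
  events.foldLeft(Closed: DoorState)(step)
```

Because the machine is a pure function over two small sealed types, the compiler can check transition coverage exhaustively, which is exactly what makes FSMs easy to create and debug.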
This talk follows up on Alice’s adventure in the world of pods and higher-order functions. Alice is now a professional Scala developer, and over the years she has forgotten her trip to that world. But a new adventure has found her: Alice needs to bring the pod back home using her knowledge of Scala. Will she be able to discover the link between Scala and Kubernetes and save her friend? You will find out in this talk.
Here is the plan:
1. First we discuss some problems that currently require boilerplate, macro converters, or loose typing.
2. We describe what HKD (higher-kinded data) is and how it transforms into different data shapes.
3. We define a small hierarchy of fancy typeclasses useful for HKD, and point to a library (a tofu module) that can derive them automatically.
4. We revisit the shapes from step 2 in more depth, turning them into design-pattern implementations using the previously defined typeclasses.
5. We discuss how to infer ordinary typeclasses for HKD, providing a more general and strict form of magnolia-like derivation for case classes.
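As a taste of step 2, here is a minimal sketch of higher-kinded data in plain Scala: a single record definition, parameterised by a type constructor F[_], yields several "shapes" of the same data. The names and the patch example are illustrative, not from the talk.

```scala
// One definition, many shapes:
case class User[F[_]](name: F[String], age: F[Int])

type Id[A] = A

// The plain shape: every field holds its value directly.
val alice: User[Id] = User[Id]("Alice", 33)

// The partial shape: every field is optional, e.g. for a PATCH-style update.
val patch: User[Option] = User[Option](Some("Alicia"), None)

// Applying a partial update is mechanical, field by field:
def applyPatch(u: User[Id], p: User[Option]): User[Id] =
  User[Id](p.name.getOrElse(u.name), p.age.getOrElse(u.age))
```

Without HKD, the plain record and its "patch" variant would be two separate case classes kept in sync by hand or by macro converters, which is exactly the boilerplate problem from step 1.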
Wix has finally open-sourced its Kafka client SDK wrapper, Greyhound, completely rewritten using the Scala functional library ZIO. Greyhound harnesses ZIO’s sophisticated async and concurrency features, together with its easy composability, to provide a superior experience to Kafka’s own client SDKs. It offers rich functionality including:
- trivial setup of message-processing parallelisation,
- various fault-tolerant retry policies (for consumers AND producers),
- easy pluggability of metrics publishing and context propagation, and much more.
This talk will also show how Greyhound is used by Wix developers in more than 1500 event-driven microservices.
For the last few years, we at Spotify have been developing and using Scio, an open-source Scala framework, to develop data pipelines. During that time, Spotify has been successfully deploying and running thousands of unique Scio jobs in production.
The scale of our deployments is one of the challenges we face: in terms of the amount of data we process, of course, but most importantly in the number of engineers who interact with our platform.
This talk will be an exploration of our experience using Scala in a large and rapidly growing company, and the unique strengths of the language one may leverage to reach true scalability.
Curious to know how good your tests are? There’s an easy way to find out: use mutation testing!
Most of us use code coverage to measure how effective our tests are. But what does code coverage really mean? How many times have you seen a test with a missing assertion, or even with its assertions commented out? This is where mutation testing will help you. A mutation testing framework inserts small bugs into your code, hoping that your tests can spot them.
In this talk, you will learn the basics of mutation testing, and how you can use it in your Scala projects with Stryker4s, the mutation testing framework for Scala.
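Here is a hand-made illustration of what a tool like Stryker4s does automatically: it mutates an operator in your code and checks whether any test notices. The function names and the mutation shown are illustrative, not Stryker4s output.

```scala
// Original code under test:
def isAdult(age: Int): Boolean = age >= 18

// A typical mutant a framework might generate: `>=` mutated to `>`.
def isAdultMutant(age: Int): Boolean = age > 18

// A weak test suite that only checks age = 30 passes for BOTH versions,
// so the mutant "survives" and reveals a gap in the tests. Adding the
// boundary case age = 18 kills the mutant: the original returns true,
// the mutant returns false.
```

A surviving mutant is the signal code coverage cannot give you: the line was executed, but no assertion actually depended on its behaviour.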
Scala is a hybrid OOP+FP language. If you love OOP, Scala is one of the best static OOP languages. But Scala also exposes parametric polymorphism and can encode type classes.
Thus, developers can also choose to use parametric polymorphism restricted by type classes (aka ad hoc polymorphism). As if choosing when to use immutability versus object identity wasn't bad enough, developers are also faced with a difficult choice when expressing abstractions. Such choices create tension in teams, with the code style depending on the team leader or whoever does the code reviews.
Let's go through what OOP is and what ad hoc polymorphism via type classes is, see how to design type classes, weigh the pros and cons, and establish guidelines for what to pick depending on the use case.
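To ground the comparison, here is a minimal sketch of the two styles side by side, with illustrative names: behaviour attached via subtyping (OOP) versus behaviour attached from the outside via a type class instance.

```scala
// OOP: the behaviour lives inside the type, reached through subtyping.
trait ShowOOP { def show: String }

// Type class: the behaviour lives outside the type, supplied by an
// implicit instance, so it can be defined for types we don't own.
trait Show[A] { def show(a: A): String }

object Show {
  implicit val intShow: Show[Int]       = (a: Int) => s"Int($a)"
  implicit val stringShow: Show[String] = (a: String) => s"String($a)"
}

// Generic code is constrained by the type class, not by a supertype:
def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)
```

The tension the talk describes starts exactly here: `describe` works for `Int` and `String` without either type extending anything, but at the cost of a mechanism (implicit resolution) that OOP-first teams may find less familiar.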
With Scala 3 right around the corner, now is the perfect time to discover exciting new features that will simplify or reinvent how we write Scala applications. In this talk I will give you a taste of the future. I will be live-coding to compare how certain problems are solved in Scala 2 and what Scala 3 brings to the table.
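One concrete taste of that Scala 2 versus Scala 3 comparison: extension methods. The Scala 2 encoding (an implicit class, which still compiles under Scala 3) is shown as live code; the dedicated Scala 3 syntax is shown in the comment. The example is illustrative, not necessarily one from the talk.

```scala
// Scala 2 encoding of an extension method: wrap the receiver in an implicit class.
implicit class IntOps(private val n: Int) {
  def squared: Int = n * n
}

// Scala 3 drops the encoding in favour of a dedicated keyword:
//   extension (n: Int) def squared: Int = n * n
```

The Scala 3 form says directly what the implicit class only implies: we are adding a method to `Int`, not defining a wrapper type.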
Scala is a cool language that runs on the JVM and is constantly changing. The same applies to the JVM running under the hood. What should a Scala developer know about changes in the JVM? In this talk we’ll cover the major JVM changes and briefly touch on updates to the Java language.
This talk will show a few simple, easy-to-implement tips for writing models in Scala. It will answer questions like:
* how can I compare entities from DDD if case classes always compare all fields?
* do I have to give up on non-flat models if my persistence implementation doesn’t like them?
* do I have to pollute my models with annotations and implicits used by e.g. JSON serialization libraries?
* if I want to use things like Scala newtype or Refined, do I really have to add several imports in every file that uses them?
* if I am a dedicated user of Cats who writes import cats.implicits._ everywhere, do I really have to import it in every single file?
* does it always have to be so painful to update a nested immutable model or to transform one object into another?
Some programmers take these for granted, while a lot of them still struggle with writing repetitive or needlessly complex code. This talk will help you go from the latter to the former.
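As a sketch of the first question above (comparing DDD entities when case classes always compare all fields), one plain-Scala option is a regular class whose equality is defined on the identity alone. The `Customer` type and its fields are illustrative, not from the talk.

```scala
// A DDD-style entity: two Customers are "the same" when their ids match,
// even if other attributes have since changed.
final class Customer(val id: Long, val name: String) {
  override def equals(other: Any): Boolean = other match {
    case that: Customer => this.id == that.id
    case _              => false
  }
  // equals and hashCode must agree: hash on the same identity field.
  override def hashCode: Int = id.hashCode
}
```

This trades away the conveniences of a case class (pattern matching, `copy`), which is precisely the kind of trade-off the talk promises to address.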
Scala has a highly expressive type system for modeling sets of instances and their properties. But it can be hard for programmers to get a good intuition for what different types represent if we only see types through the source code that describes them.
This talk will be a journey through the Scala type system, examining the wealth of types on offer in Scala 2 and Scala 3, and presenting each in a visual form, showing the relationships between them, and developing an understanding of operations such as finding the least upper-bound of a pair of types. Furthermore, we will see how the concept of categorical duality arises every step of the way.
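Least upper bounds, one of the operations mentioned above, show up whenever the compiler must find a single type for two branches of an expression. A small illustrative example (not from the talk):

```scala
sealed trait Pet
case class Dog(name: String) extends Pet
case class Cat(name: String) extends Pet

// The two branches have types Dog and Cat, so the compiler needs their
// least upper bound: the closest common supertype, Pet. (In Scala 3 the
// expression could alternatively be typed with the union Dog | Cat.)
def pick(flag: Boolean): Pet = if (flag) Dog("Rex") else Cat("Tom")
```

Seen visually, the least upper bound is simply the lowest point where the two types' branches of the subtyping diagram meet, which is the kind of intuition the talk aims to build.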
Distributed tracing has been slowly gaining momentum and popularity since the public release of Zipkin by Twitter and thanks to standards such as OpenCensus and OpenTracing which have combined into OpenTelemetry. Many managed monitoring providers now support tracing, while the self-hosted community favourite is Jaeger.
Trace4Cats is both an application library for capturing traces and a partial tracing system aimed at aggregating, sampling and forwarding traces to monitoring systems.
This talk will give an overview of what distributed tracing is, how it works, and how it can help you in production, followed by an introduction to Trace4Cats: what sets it apart from other tracing libraries, how to use it and its integrations, and deployment topologies.
Most people don't go into work excited to update old code to slightly newer versions of APIs and figure out what has replaced what. This is complicated in Spark, where new versions of Spark drop support for older language releases. This talk will explore how we can use tools to semi-automatically upgrade our Scala & Python Spark code. We'll compare and contrast how the tooling & language differences between Scala and Python impact these tools.
Akka’s new type-safe APIs graduated from experimental to stable with the release of Akka 2.6. Akka 2.6 was released in November 2019 and represented a major step forward for the project, despite being a minor version tick from 2.5 to 2.6. The new API became the default for documentation, reference projects, and is the base upon which many exciting new features and projects were added to the Akka ecosystem in the following year.
There are too many topics in the Akka 2.6 series to cover in a single talk, but I’ll highlight several major developments: easier-to-use APIs for Akka Persistence and Cluster Sharding, a new remoting layer that optimizes peer-to-peer connections in an Akka cluster, a new project called Projections for managing read-side views in event-sourced systems, and the ability to define external shard allocation strategies with Akka Cluster to optimize data locality (e.g. with Alpakka Kafka consumer instances). We’ll also highlight recently open-sourced components that were previously only available to Lightbend customers, such as the Split Brain Resolver for Akka Cluster.