Scala 3 Migration: Report from the Field

Pierre Ricadat
16 min read

April 30, 2024. I decided to dedicate a week to migrate our main project at work (a multiplayer mobile game server in production for over 4 years) from Scala 2.13 to Scala 3.

May 7, 2024. I gave up. The removal of several features from Scala 3 (macro annotations, type projections, etc.), combined with the large number of changes necessary for the migration, was overwhelming. I was barely able to migrate a single module, had to modify thousands of lines of code (while my colleagues were adding new features to the main branch, a large number of merge conflicts were already appearing), and the IDE was completely unresponsive due to hundreds of compile errors. At that point, I thought the project might be stuck on Scala 2 forever.

Fast forward to January 2025. I had a little free time, so I decided to give it another try. And (spoiler!) this time I made it to the end. Let’s walk through the problems I encountered, the changes I had to make, and the workarounds I implemented.

Preamble

The main place to look when starting a migration is the official Scala 3 Migration Guide. It contains a lot of information about the changes in the language and details on how to proceed.

As I mentioned, the large number of changes required was an issue because it caused a lot of merge conflicts with the main branch. It was not possible to stop all other developments during the migration, so I decided to apply as many changes as possible in the Scala 2 main branch to avoid these conflicts.

The main thing you can do while still on Scala 2.13 is to compile with the -Xsource:3 compiler flag, which enables the Scala 3 syntax for imports (* instead of _, as instead of =>), intersection types (& instead of with), and more, and also turns on a number of warnings for things no longer supported in Scala 3 (e.g., .map(CaseClass) should become .map(CaseClass.apply)).
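To make this concrete, here is a small before/after of the kind of syntax -Xsource:3 accepts (the old Scala 2 forms are shown in comments; Score is a made-up example type):

```scala
// Scala 2 forms (shown in comments):
//   import scala.util.{Success => Ok, _}
//   def describe(x: Product with Serializable): String = x.toString
//   List(1, 2, 3).map(Score)

// Scala 3 forms, also accepted by 2.13 under -Xsource:3:
import scala.util.{Success as Ok, *}

case class Score(value: Int)

def describe(x: Product & Serializable): String = x.toString

val scores = List(1, 2, 3).map(Score.apply) // explicit .apply instead of .map(Score)
val label  = describe(Ok(42))
```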

Most of those changes were easy to apply, but there were a lot of them, which was challenging. Scala 3 offers a “migration mode” and is able to rewrite the code with the new syntax, but this is not applicable if you want to apply these changes in a Scala 2 codebase. My salvation actually came from IntelliJ, which has an inspection for code compiled with -Xsource:3 and a quick fix action to replace all the code at once. Incredibly useful!

IntelliJ inspection for -Xsource:3

IntelliJ even lets you select which of these changes you want to apply, so I excluded the “case in pattern bindings of for-comprehensions” because it transformed the code in a weird, unnecessary way.

After this was done, I was able to apply a large number of changes directly to our main branch, avoiding many more conflicts!

Dropped Features

While it brought a number of new and interesting features such as enums or opaque types, Scala 3 dropped a few features altogether, and this proved to be particularly challenging for us. The dropped features are listed on this page, and there were two of them that we relied on heavily: macro annotations and type projections.

Macro annotations

Macro annotations let you annotate Scala 2 types to generate code at compile-time, most typically by adding code to the companion object of annotated classes.

For example, using the Circe JSON library, you could write this:

@JsonCodec
case class Bar(i: Int, s: String)

This will automatically generate an implicit Codec[Bar] in the companion object of Bar. Very concise, very convenient. In the case of Circe, there was an "easy" workaround, which is to use the derives keyword available in Scala 3. I put quotes around "easy" because, for some reason, it is not mentioned at all in the Circe documentation.

The code above can be changed to the following for the same result:

case class Bar(i: Int, s: String) derives Codec.AsObject

Case closed? Not exactly, because our main use of macro annotations was not with Circe, but with Monocle and its @Lenses annotation.

@Lenses
case class Bar(i: Int, s: String)

This will generate the following in the companion object of Bar:

object Bar {
  val i: Lens[Bar, Int] = ??? // implementation omitted for clarity
  val s: Lens[Bar, String] = ??? 
}

Our project, being a complex game, has a huge user state object, lots of business logic, and domain entities. Lenses allow us to modify parts of the user state in a concise and elegant manner without having to use a chain of nested copy.
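To make the appeal concrete, here is a minimal hand-rolled Lens encoding — a toy stand-in for Monocle, with made-up Inventory/UserState types — showing how lens composition avoids nested copy calls:

```scala
// A toy Lens: a getter plus a setter for one field of S (not Monocle's actual encoding)
final case class Lens[S, A](get: S => A, set: A => S => S) {
  def andThen[B](that: Lens[A, B]): Lens[S, B] =
    Lens(s => that.get(get(s)), b => s => set(that.set(b)(get(s)))(s))
  def modify(f: A => A)(s: S): S = set(f(get(s)))(s)
}

case class Inventory(gold: Int)
case class UserState(inventory: Inventory)

val inventory: Lens[UserState, Inventory] = Lens(_.inventory, i => s => s.copy(inventory = i))
val gold: Lens[Inventory, Int]            = Lens(_.gold, g => i => i.copy(gold = g))

// Without lenses: one nested copy per level of the state
def addGoldCopy(s: UserState): UserState =
  s.copy(inventory = s.inventory.copy(gold = s.inventory.gold + 10))

// With lenses: composition stays flat no matter how deep the state is
val addGold: UserState => UserState = inventory.andThen(gold).modify(_ + 10)
```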

The removal of that macro annotation left us with no clear path or alternative for the migration. Unlike the Circe case, this is not a typeclass instance, so we can’t use the derives keyword: we need a val generated for each field of the case class. There is an open issue in the Monocle repository that discusses various options, but nothing tangible (Kit Langton has an interesting approach using Selectable, but this is not supported by IntelliJ).

One obvious alternative was to write those lenses ourselves. That was definitely doable; however, it would have required considerable effort to write thousands of these, and it would have added an enormous amount of boilerplate to the project, making Scala 3 quite unpopular within our team. This alone stopped the migration effort I started in 2024.

We are always trying to reduce boilerplate, and over the years we’ve used a few techniques to do so. Sometimes macros or mirrors do the job, but another option is sbt’s source generators, which let you run custom code before compilation to generate additional source files. Combined with Scalameta, you can parse and analyze your own code to generate more code. This is ultimately the technique we used to generate the lenses.

The code generation works like this:

  • Look for all case classes in a specific module that contain the @lenses annotation

  • For each of those case classes, create an object

    • For each field of the case class, create a lens with the appropriate types

Using Scalameta is a little involved, so I’ve shared a snippet of our code in this gist so that it may be used by others. One downside of this approach is that the generated lenses are no longer in the companion objects of the case classes (we can generate new source files but not modify existing ones), which required updating every lens usage to reference the new object names. But it was worth it, since it unlocked the migration path.
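As a toy illustration of the generator’s shape: the real version uses Scalameta to parse the sources, but here a naive regex stands in for parsing (it would break on nested types like Map[K, V]), and GenLens in the emitted strings refers to Monocle’s macro:

```scala
// Match "@lenses" followed by a case class declaration, capturing name and fields
val CaseClassPattern = """@lenses\s+case class (\w+)\(([^)]*)\)""".r

// For each annotated case class, emit an object with one lens per field
def generateLenses(source: String): List[String] =
  CaseClassPattern.findAllMatchIn(source).toList.map { m =>
    val name = m.group(1)
    val lensDefs = m.group(2).split(",").toList.map(_.trim).map { field =>
      val parts = field.split(":").map(_.trim) // e.g. "i: Int" -> ("i", "Int")
      s"  val ${parts(0)}: Lens[$name, ${parts(1)}] = GenLens[$name](_.${parts(0)})"
    }
    (s"object ${name}Lenses {" +: lensDefs :+ "}").mkString("\n")
  }
```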

Note that a “macro annotation” feature was added to Scala 3, but it is much more limited than what was possible in Scala 2 and does not allow implementing the Monocle @Lenses annotation (the generated code is not visible to the user).

Type projections

Imagine you have a type Request that has an abstract type Result defined inside it.

trait Request {
  type Result
}

case class IntRequest() extends Request {
  type Result = Int
}

In Scala 2, you can write a function that, for a given subtype R of Request, returns R#Result, meaning it returns the Result type matching the subtype of Request that was used. So if R is IntRequest, we will get an Int back.

def foo[R <: Request](req: R): R#Result = ???

This is no longer possible in Scala 3 if R is abstract! You get a compile error saying R is not a legal path since it is not a concrete type. There is an easy workaround if you have a value of type Request at hand, which is to use a dependent method type and return req.Result.

def foo[R <: Request](req: R): req.Result = ???

However, our code had various uses of this pattern, and not all of them could be changed to a dependent method type. We ended up using a combination of techniques depending on the case: dependent method types in some places, typeclasses in others, and in a few places we had to give up on keeping the code generic. Overall, this felt like a regression from the old code, but at least we were able to make it compile without changing too much.
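As a sketch of the typeclass approach, here is one hypothetical way to replace the projection: a Handler typeclass (all names made up) ties each Request subtype to its result type, so the connection travels with the instance instead of through R#Result:

```scala
trait Request { type Result }
case class IntRequest(value: Int) extends Request { type Result = Int }

// Hypothetical typeclass: links a Request subtype to the type it produces
trait Handler[R <: Request] {
  type Out
  def handle(req: R): Out
}

object Handler {
  type Aux[R <: Request, O] = Handler[R] { type Out = O }

  // The Aux refinement keeps Out concrete, so callers see Int rather than an abstract Out
  given intHandler: Aux[IntRequest, Int] = new Handler[IntRequest] {
    type Out = Int
    def handle(req: IntRequest): Int = req.value * 2
  }
}

def foo[R <: Request](req: R)(using h: Handler[R]): h.Out = h.handle(req)

val result: Int = foo(IntRequest(21)) // Out is resolved to Int via the Aux instance
```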

EDIT: After publishing this article, Voytek Pituła suggested a different workaround on Reddit using match types, and I was able to apply it successfully in the places where I had no alternatives. It made the code much nicer! I had heard of match types as an alternative before, but I thought I would have to construct a giant pattern matching with the list of all requests and their matching results. I had no idea it could be used in a generic way. Here’s his approach applied to our example:

trait Request {
  type Result 
}

object Request {
  type Aux[T] = Request { type Result = T }
  type Result[T <: Request] = T match {
    case Aux[s] => s
  }
}

def foo[T <: Request]: Request.Result[T] = ???
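A self-contained sketch of this workaround in action (StrRequest is an extra made-up subtype, and the ??? bodies are replaced with trivial values so it runs) — for concrete subtypes, the compiler reduces the match type statically:

```scala
trait Request { type Result }
case class IntRequest() extends Request { type Result = Int }
case class StrRequest() extends Request { type Result = String }

object Request {
  type Aux[T] = Request { type Result = T }
  // Extracts the Result member of any concrete Request subtype
  type Result[T <: Request] = T match {
    case Aux[s] => s
  }
}

// Request.Result[IntRequest] reduces to Int, Request.Result[StrRequest] to String
val n: Request.Result[IntRequest] = 42
val s: Request.Result[StrRequest] = "hello"
```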

Unsupported/broken libraries

Most libraries we were using were available on Scala 3, and for a few missing ones (mostly related to Spark or Kryo), we used cross(CrossVersion.for3Use2_13), which allows depending on a library built for Scala 2.13.

However, a few of them were not available or didn’t work as expected, so they required a complete change.

Newtypes and refined types

In Scala 2, we were using a combination of scala-newtype and refined to define custom types used all over our business logic (IDs, bounded values, etc.). There is no Scala 3 version of scala-newtype, which makes sense because it can be entirely replaced by opaque types. Refined is sneakier: it has a Scala 3 version, but if you try to use it, you will notice that it is only partially implemented; the macros are missing, so the library is not usable (the first example in their README doesn’t compile).
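As a quick, library-free illustration of why opaque types subsume scala-newtype (all names here are made up; neotype’s actual API differs):

```scala
object types {
  // Zero-cost wrapper: at runtime a UserId is just a String
  opaque type UserId = String

  object UserId {
    // Smart constructor with validation, standing in for a refined-style predicate
    def parse(raw: String): Either[String, UserId] =
      if (raw.nonEmpty) Right(raw) else Left("UserId must not be empty")
  }

  // Outside this object the underlying String is hidden; expose it explicitly
  extension (id: UserId) def value: String = id
}

import types.*

val goodId = UserId.parse("player-42").map(_.value)
val badId  = UserId.parse("")
```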

In another project using Scala 3, we were already using the neotype library, which lets you define both newtypes and refined types and is built on top of opaque types, therefore having no runtime cost. We switched to using this library instead. It might sound simple on paper, but we rely on these types so much that it was quite an invasive change impacting a lot of files. At least the migrated code felt better than the old one since writing refined type validation is nicer and slightly less boilerplate-y, and the runtime impact was reduced.

Magnolia typeclass derivation

Another issue we had was with typeclass derivation using Magnolia. While the library supports Scala 3, our existing derivation code caused a compile error for exceeding the -Xmax-inlines limit (too much inlined code). I tried increasing it all the way to 10,000 (!), and compilation then failed with a stack overflow in the compiler.

The failing derivation occurred while deriving a sealed trait with a LOT of subtypes (~1,000), but there was already a typeclass instance for each of the subtypes. After looking at the internals of Magnolia, I noticed that a recursive method was used to fold over the list of subtypes, and that method was not tail-recursive, explaining why the number of inlines (and the stack depth) was increasing proportionally to the number of subtypes. To make matters worse, that recursive method also called distinctBy and sortBy on the list of subtypes at every iteration, which is pretty bad when you have lots of them. I opened an issue to report this behavior and changed the code locally, but then I ran into a Method too large error because the generated code was longer than what the JVM allows.

After doing a little research, I came across a great feature of Scala 3 that is poorly documented: Tuple.Map. Mirrors give you access to two tuples: for a sum type, MirroredElemLabels is a tuple with the names of the subtypes, while MirroredElemTypes is a tuple with the actual subtypes. You can use summonAll and Tuple.Map to materialize the list of names of those types or even to summon a typeclass instance for each of them.

import scala.deriving.Mirror

trait TC[A]

inline def gen[A](using m: Mirror.SumOf[A]): TC[A] = {
  // get TC instances of all subtypes
  val subTypes = compiletime.summonAll[Tuple.Map[m.MirroredElemTypes, TC]]
  new TC[A] {
    ??? // given (a: A), we can then use subTypes(m.ordinal(a)).asInstanceOf[TC[A]]
  }
}

I posted a full example on Gist that shows how to derive a typeclass for a sealed trait without even needing Magnolia. This solution is very concise and does not run into inline or Method too large issues. I just wish there were more learning materials about these Tuple utilities because I think they are very powerful.
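Along those lines, here is a minimal, self-contained sketch of the technique with a hypothetical Show typeclass (made-up names throughout): all subtype instances are summoned in one shot via Tuple.Map, then dispatch happens on the mirror’s ordinal.

```scala
import scala.deriving.Mirror
import scala.compiletime.summonAll

trait Show[A] { def show(a: A): String }

sealed trait Animal
case class Dog(name: String) extends Animal
case class Cat(name: String) extends Animal

given Show[Dog] = (d: Dog) => s"Dog(${d.name})"
given Show[Cat] = (c: Cat) => s"Cat(${c.name})"

// Summon a Show instance for every subtype at once, then dispatch on ordinal;
// no recursion, so no -Xmax-inlines blow-up proportional to the number of subtypes
inline def deriveShow[A](using m: Mirror.SumOf[A]): Show[A] = {
  val instances = summonAll[Tuple.Map[m.MirroredElemTypes, Show]].toArray
  (a: A) => instances(m.ordinal(a)).asInstanceOf[Show[A]].show(a)
}

given Show[Animal] = deriveShow[Animal]
```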

Macros

We had a few macros developed in-house, mostly to reduce boilerplate. They proved relatively easy to port, except for one. The difficulty is that Scala 3 macros are much stricter than Scala 2 macros, which let you generate almost any kind of code: Scala 3 requires the generated code to be valid in the context where the macro is defined (which might differ from where the macro is used, making things trickier). I am not a macro expert, so apologies if this is a little imprecise; my colleague @nox737 is the one who made the magic happen.

It took us quite a long time to make the macro compile with these restrictions (note: AI agents were not helpful at all for this kind of task!), and in the end, the code still failed to compile because of a Method too large error. Compile time felt a bit slower too. We ended up removing the macro entirely and replacing it with another source generator written with Scalameta. It made the code easier to inspect and to split into smaller chunks.

Dependency issues

As mentioned earlier, we used CrossVersion.for3Use2_13 for a few libraries not available in Scala 3, but one tough problem arose. One of those libraries was sparksql-scalapb, which lets us use protobuf with Spark. This library depends on Spark, so it is only available for 2.13. It also depends on scalapb-runtime, so depending on it brings scalapb-runtime_2.13 into the dependencies. The problem is that the rest of our code already depended on scalapb-runtime_3. In that case, sbt failed to resolve the build with this error:

Modules were resolved with conflicting cross-version suffixes in ProjectRef(uri("..."), "spark"):
org.scala-lang.modules:scala-collection-compat _3, _2.13
com.thesamet.scalapb:lenses _3, _2.13
com.thesamet.scalapb:scalapb-runtime _3, _2.13

In other words, you can’t depend on the same library in both 2.13 and 3 versions.

I initially tried to solve that issue by shading dependencies, but it didn’t work because one function we use from sparksql-scalapb expects a specific input extending a type from ScalaPB, which means the rest of our code needs to extend that type. If that type is shaded only in the spark module, it doesn’t match the type from our other modules.

The solution was actually relatively simple: I forked sparksql-scalapb and made it compile with Scala 3, depending on scalapb_runtime_3 and using CrossVersion.for3Use2_13 for its other dependencies. The code was very straightforward to port, with just some minor things to fix. Then I embedded the produced JAR in our project instead of depending on the 2.13 library. I had to add the transitive dependencies of that library explicitly in our project, and that was it.
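In build.sbt, embedding a prebuilt JAR can be sketched like this (paths, file names, and versions are made up for illustration):

```scala
// build.sbt (sketch): add the locally-built fork as an unmanaged dependency,
// then declare its transitive dependencies by hand since sbt no longer resolves them
Compile / unmanagedJars += baseDirectory.value / "vendor" / "sparksql-scalapb_3-fork.jar"

libraryDependencies ++= Seq(
  // hypothetical example of an explicitly-declared transitive dependency
  ("org.apache.spark" %% "spark-sql" % "3.5.0" % Provided).cross(CrossVersion.for3Use2_13)
)
```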

Slow compile time

Once all the code was migrated and I was able to compile successfully for the first time, I noticed that it was taking longer than usual. I also noticed that IntelliJ was constantly compiling to show syntax highlighting. There was definitely something wrong. I had already debugged slow compile times with Scala 2 and was accustomed to using the -Vstatistics compiler flag to see which phases were taking time, and even to using scalac-profiling to profile the compilation. Unfortunately, a little research made me realize that such tools did not exist for Scala 3. After asking around on Twitter, I heard that the new version of Scala (3.6.3), released a day earlier, brought a compiler flag to generate compilation traces. What perfect timing; I really got lucky with this one.

I immediately upgraded from 3.6.2 to 3.6.3 and enabled the traces. Within minutes, I was able to generate the following flamegraph:

This was extremely useful: it breaks down the compilation time by phase, but also by file and even by method! This helped me pinpoint which code was slow to compile. Even though I never fully understood why it was slow (I tried to reproduce it in an isolated example but failed), I was able to refactor the code in a way that made it fast. The culprit was an extremely large intersection type (over 100 types) used as a ZIO environment. Reorganizing the environment into fewer types completely solved the issue, brought compile times on par with 2.13, and made IntelliJ responsive again.

This tool is so useful that I plan to spend more time on it in the future because I am pretty sure that it will allow me to find other slow points, considering how detailed the output is. But my goal for the migration was only to be as fast as with 2.13.

IntelliJ support

Speaking of IntelliJ, I did run into a couple of issues, which I reported to JetBrains.

I hope these bugs get fixed in the near future since they have very simple and easy reproducers (the first one was fixed as I was writing this post, though not released yet). I briefly looked into it, but the Scala plugin for IntelliJ is not really approachable, and I didn’t even know where to start looking.

Other than that, IntelliJ support was pretty good. One thing I recommend is to select Use separate compiler output paths in the sbt configuration menu because the sbt shell and IntelliJ’s own compiler tend to conflict with each other otherwise.

Compiler flags

Here are a few notable compiler flags I ended up using:

  • -language:experimental.betterFors (available under -experimental): this allows using = on the first line of for-comprehensions, and it also optimizes the generated bytecode by avoiding the extra map call at the end of the flatMap calls.

  • -no-indent: I am strongly against significant indentation in Scala, wish it never happened, but at least I am glad it is easy to disable. This is coupled with runner.dialectOverride.allowSignificantIndentation = false in Scalafmt.

  • -Wunused:all: with Scala 2, I had to add a bunch of @nowarn annotations because of false positives, and I was able to remove them. It also found some unused code that Scala 2 didn’t detect, so it seems to work better.

Conclusion

Finally, on February 4, the CI turned green on this PR. It has been a long journey with a lot of hurdles, but the situation felt much better in 2025 than a year before. Overall, our code did not change heavily, and most of the changes are for the best. The two things that I really regret are the lack of macro annotations (fortunately, sbt source generators and Scalameta are powerful enough to emulate it) and the removal of general type projections that made our code uglier in some places.

To wrap things up, I am glad our main project did not become a painful legacy stuck in the past, and I am now excited to be able to play with some of the powerful tools that Scala 3 has to offer, particularly around metaprogramming. I hope this read will be helpful to others, whether you have a similar migration to perform or are involved directly with the development of the language and its tooling.


Written by Pierre Ricadat
Software Architect | Scala Lover | Creator of Caliban and Shardcake