My Scala Sprint: JVM to Poland Insights

Hello again, world! 👋

After sharing my journey in getting selected for Google Summer of Code 2025 with the Scala Center in my first blog, I’m back with the next chapter: my deep dive into Scala, the JVM, and a talk that redefined how I look at Generative AI in functional programming.

This blog is not just a learning reflection—it's a shoutout to the resources and mentors that made my GSoC journey meaningful before the first line of code was ever written.

📘 How I Learned Scala (Fast & Right)

When I committed to contributing to LLM4S during GSoC 2025, I knew one thing for sure: I needed to get really good at Scala, and fast. That’s when I discovered the Scala at Light Speed course by Rock the JVM.

🎯 Why I Chose This Course:

  • Tailored for beginners and intermediate devs with prior programming experience.

  • Covers not just syntax, but Scala's core Functional Programming mindset - the key to working effectively in a project like LLM4S.

  • Clean explanations of JVM internals, immutability, type safety, functional constructs, and more.

🧠 What I Learned:

  • Pattern matching, case classes, and companion objects, all made intuitive (a small sketch follows this list).

  • Monads, immutability, and safe side-effects.

  • How Scala compiles down to JVM bytecode—and why that matters for performance and LLM tooling.
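
To make these constructs concrete, here is a minimal sketch I put together myself (not taken from the course): a case class, a companion object acting as a safe factory, and pattern matching over Option.

```scala
// Minimal sketch (my own example, not from the course): a case class,
// a companion object as a safe factory, and pattern matching over Option.

final case class User(name: String, age: Int)

object User:
  // The companion object acts as a factory: returns None instead of throwing.
  def fromAge(name: String, age: Int): Option[User] =
    if age >= 0 then Some(User(name, age)) else None

@main def userDemo(): Unit =
  // Pattern matching makes both cases explicit and exhaustive.
  val greeting = User.fromAge("Ada", 36) match
    case Some(User(name, _)) => s"Hello, $name!"
    case None                => "Invalid age supplied"
  println(greeting)
```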

This course gave me confidence to read and contribute to real-world Scala codebases. If you're eyeing GSoC, open-source work, or just curious about functional programming, I highly recommend this course.

🎤 The Talk That Sparked My Vision: Kannupriya in Poland

Learning the language is one thing. Understanding its potential in shaping the future of AI? That’s another level altogether.

Around the same time, as I was digging deeper into Scala, I watched this talk by Kannupriya Kalra at the Scalar Conference in Poland: "Let’s Teach LLMs to Write Great Scala".

🔗 LinkedIn Post
🎥 Recording

🎤 Highlights from the Talk:

  • Idiomatic Scala Matters: She emphasized that LLMs often write code that works but misses idiomatic nuance; semantics and style matter as much as correctness (a toy contrast follows this list).

  • Tech Deep Dive: Covered how semantic embeddings, traceable execution, and structured feedback loops help teach models Scala’s complexity.

  • Beyond Python Dominance: Her talk made a compelling case for Scala as a powerful LLM training ground, even against Python-centric tools.
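
To make the idiomatic-Scala point concrete, here is a toy contrast I wrote myself (not an example from the talk): both versions compile and return the same result, but only the second reads like Scala.

```scala
// Toy contrast (my own example, not from the talk): both versions "work",
// but only the second is idiomatic Scala.

// Imperative style an LLM trained mostly on other languages might emit:
def doubleEvensImperative(xs: List[Int]): List[Int] =
  var result = List.empty[Int]
  for x <- xs do
    if x % 2 == 0 then result = result :+ (x * 2)
  result

// Idiomatic style: immutable, declarative, and easier to reason about.
def doubleEvensIdiomatic(xs: List[Int]): List[Int] =
  xs.filter(_ % 2 == 0).map(_ * 2)
```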

🌱 Why It Resonates:

  • It connected my JVM course learnings to real AI tooling challenges.

  • Watching my mentor, Kannupriya, teach and conceptualize on stage made the mentorship feel tangible and visionary.

  • It reinforced my sense of purpose in contributing to LLM4S - not just code, but deeper tooling support.

✅ What’s Next?

  1. Advanced Scala Study: Exploring implicits, context bounds, and FP libraries like Cats & ZIO (a minimal context-bound sketch follows this list).

  2. LLM4S Dev: Working on basic tracing and agentic loop calls to enhance LLM4S tooling.

  3. AI + Tooling Labs: Building small tracing prototypes aligned with insights from Kannupriya’s talk.

  4. Continued Mentorship: Staying connected with Kannupriya and Scala Center to shape outcomes meaningfully.
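
As a preview of the advanced-Scala item above, here is a minimal context-bound sketch (my own illustration; it does not use Cats or ZIO): a tiny type class, two given instances, and a generic function constrained by `[A: Show]`.

```scala
// Minimal context-bound sketch (my own illustration, not Cats/ZIO code):
// a type class, given instances, and a function constrained by [A: Show].

trait Show[A]:
  def show(a: A): String

object Show:
  given Show[Int] with
    def show(a: Int): String = s"Int($a)"
  given Show[String] with
    def show(a: String): String = s"String($a)"

// The context bound [A: Show] asks the compiler to supply a given Show[A].
def describe[A: Show](a: A): String =
  summon[Show[A]].show(a)

@main def showDemo(): Unit =
  println(describe(42))      // Int(42)
  println(describe("Scala")) // String(Scala)
```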

🤗 Join Me on This Journey

Whether you're learning Scala, eyeing open-source, or curious about AI tooling:

  • Take Rock the JVM’s course - free, fast-paced, and pragmatic.

  • Watch Kannupriya’s talk - the recording linked above is a must-see.

  • Experiment early - even small LLM4S tracing prototypes help cement learning (a toy sketch follows below).
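
To illustrate the kind of small prototype I mean, here is a toy tracing wrapper; the names and types are hypothetical and are not the actual LLM4S API.

```scala
// Toy tracing wrapper for experimenting with traceable calls.
// TraceEvent and traced are hypothetical names, NOT the actual LLM4S API.

import java.time.Instant

final case class TraceEvent(name: String, startedAt: Instant, durationMs: Long)

def traced[A](name: String)(body: => A): (A, TraceEvent) =
  val startedAt = Instant.now()
  val t0 = System.nanoTime()
  val result = body // run the step being traced
  val elapsedMs = (System.nanoTime() - t0) / 1_000_000
  (result, TraceEvent(name, startedAt, elapsedMs))

@main def traceDemo(): Unit =
  // Stand-in for an LLM call; a real prototype would hit a model endpoint.
  val (answer, event) = traced("fake-llm-call") {
    Thread.sleep(50)
    "stubbed completion"
  }
  println(s"${event.name} took ${event.durationMs} ms -> $answer")
```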

Connect with me on LinkedIn or drop a message on GitHub. Happy to help any fellow aspirants!

If you found the project or blog helpful, do consider starring the LLM4S repository to show your support!

Written by Shubham Vishwakarma