It's been a while.
I realize it’s been almost two years since I last updated this; life’s been busy. I’m considering a Master’s in computer science, and I’ve been preparing by taking college courses. Moreover, I switched jobs in the middle of last year: I’m still at New Relic, but I’ve transitioned to a new team. While I enjoyed my previous role, I felt I had come as far as I could there, and it was time to take on new challenges. New Relic is a great company, and I was fortunate to be able to remain.
I’ve recently seen a raft of articles about Small Language Models (SLMs); this one, on SLMs in the legal profession, caught my eye. As the name suggests, SLMs are smaller and more compact than Large Language Models, and thus hopefully more efficient.
When I last wrote on the topic, one of the trends I thought possible was the increasing segmentation of AI into specialist models for specialist use cases. Moreover, it’s no secret that large language models are resource-intensive, in terms of computing power, energy, and even water.
A smaller, more efficient model could mitigate those costs. SLMs may also be easier to train and to validate for output quality, which could ameliorate the hallucination problem, though I’m not certain of that point. If true, these factors would favor a market composed of many specialist AIs rather than a few big models.
It will be interesting to see what happens.