Gen AI for JavaScript Devs: Exploring OpenAI Alternatives: Mistral, Llama, and More
Introduction:
In our previous posts in this series, we explored the OpenAI SDK and delved into LLM parameters like temperature, top-p, and top-k, along with various prompting techniques. In another post, we looked at how to choose the right LLM by comparing pricing across providers and open-source models like Mistral and Llama, and discussed why you might consider these alternatives for your specific needs.
For instance, when discussing pricing, we found that Mistral AI can be up to 10 times cheaper than OpenAI's ChatGPT, depending on your needs. You can use the Mistral 7B model from providers like Together AI or Groq if it fits your use case.
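To make that kind of comparison concrete, here is a tiny cost calculator. The per-million-token prices below are illustrative placeholders, not current list prices, so always check each provider's pricing page before deciding:

```javascript
// Back-of-the-envelope cost comparison. The prices below are
// hypothetical USD per 1M input tokens, for illustration only.
const pricePerMillionTokens = {
  "gpt-4o": 2.5,
  "mistral-7b": 0.25,
};

function estimateCost(model, tokens) {
  return (tokens / 1_000_000) * pricePerMillionTokens[model];
}

// Processing 10M tokens a month:
console.log(estimateCost("gpt-4o", 10_000_000)); // 25
console.log(estimateCost("mistral-7b", 10_000_000)); // 2.5
```

With these sample numbers, the smaller open model comes out 10x cheaper; plug in real prices from the providers' pages to compare for your own workload.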
In this post, we'll dive into working with alternative AI SDKs, focusing on providers like Cohere, Anthropic, Together AI and models like Mistral, Llama, and more.
Together AI
Together AI is a powerful platform that offers over 100 open-source LLMs. From text-generation models like Mistral, Meta's latest Llama 3.1, and Google's Gemma 2, to cutting-edge image-generation models like Stable Diffusion, Together AI has it all. Many of these models are fine-tuned versions of the originals, making them even better for specific tasks.
One example of a fine-tuned model available on Together AI is the Llama-2-7B-32K-Instruct. This model is built on Meta’s Llama-2 architecture and optimized for handling instruction-based tasks more effectively.
Getting started with Together AI is straightforward. Simply navigate to Together AI, sign in, and you'll receive $50 in free credits. Fill out some basic information, get your API keys, and save them for later use. In the playground, you can choose from a variety of models, adjust parameters like temperature, top-p, and top-k, and experiment with your prompts. You can also visit the models page to compare inference costs and make an informed decision. Below is a simple example of using the Together JS SDK.
require("dotenv/config");
const Together = require("together-ai");

const togetherAI = new Together({
  apiKey: process.env.TOGETHER_API_KEY,
});

async function main() {
  const response = await togetherAI.chat.completions.create({
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages: [
      {
        role: "user",
        content: "What is the capital of India?",
      },
    ],
  });
  console.log(response.choices[0].message);
}

main();
If you've been following this series, you'll notice that the code is very similar to the OpenAI SDK. This means you can easily use both OpenAI's models and Together AI in your application without a steep learning curve. You can even migrate from OpenAI to the latest open-source models with ease.
Anthropic's Claude LLMs
Anthropic, founded in 2021 by former OpenAI members, focuses on creating safe and reliable AI systems. Backed by major companies like Amazon and Google, Anthropic has advanced its research significantly.
Their latest model, Claude 3, comes in three versions: Haiku, Sonnet, and Opus, each offering different levels of capability. With a context window of up to 200K tokens, Claude 3 can handle long conversations and complex tasks effectively. In June 2024, Anthropic released Claude 3.5 Sonnet, which is faster, more accurate, and excels at complex tasks like customer support and multi-step workflows.
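Before sending a long conversation, it can help to sanity-check that it fits the window. Here is a rough sketch using the common ~4 characters-per-token heuristic; this is only an approximation, so use a real tokenizer for exact counts:

```javascript
// Rough guard for Claude's 200K-token context window, using the
// ~4 characters-per-token heuristic (an approximation only).
const CONTEXT_WINDOW = 200_000;

function approxTokens(text) {
  return Math.ceil(text.length / 4);
}

function fitsContext(messages, maxOutputTokens = 1000) {
  const inputTokens = messages.reduce(
    (sum, m) => sum + approxTokens(m.content),
    0
  );
  return inputTokens + maxOutputTokens <= CONTEXT_WINDOW;
}

console.log(
  fitsContext([{ role: "user", content: "What is the capital of India?" }])
); // true
```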
Go to Anthropic's console and create your first API key; you'll get $5 in free credit. Remember, Claude is a proprietary model, not open source, and you can check current pricing on Anthropic's pricing page. Below is the code to make a request to Anthropic's LLM.
require("dotenv/config");
const Anthropic = require("@anthropic-ai/sdk");

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

async function main() {
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-20240620",
    max_tokens: 1000,
    temperature: 0,
    system: "You are a teacher.",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "What is the capital of India?",
          },
        ],
      },
    ],
  });
  console.log(response.content[0].text);
}

main();
Cohere
Cohere is a leading AI platform focused on natural language processing (NLP) for enterprise applications. In 2024, they launched Cohere Command, a suite of generative AI models designed for complex business tasks like customer support automation and content creation.
Cohere's SDK is packed with tools and connectors that enhance their language models, making it easy for businesses to integrate advanced NLP features. You can perform tasks like text generation, translation, and classification, all accessible through the Cohere dashboard.
One unique feature is "connectors," which allow you to add tools like web search, enabling the model to access the latest information from the web. Cohere is a powerful platform that I use in my applications, and I plan to create a separate tutorial to cover all its features in detail.
require("dotenv/config");
const { CohereClient } = require("cohere-ai");

const cohere = new CohereClient({
  token: process.env.COHERE_API_KEY,
});

async function main() {
  const response = await cohere.chat({
    chatHistory: [],
    message: "What is the price of Nvidia stock?",
    // Perform a web search before answering.
    connectors: [{ id: "web-search" }],
  });
  console.log(response);
}

main();
Check the model's response and you'll see a "documents" key: it contains all the information the model retrieved from the web search. You can even create custom connectors to access your own data, like files from OneDrive. This makes Cohere very powerful.
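For example, you could turn those documents into a source list to show alongside the answer. The helper below is my own, and the sample response is mock data shaped like Cohere's chat output (title/url fields), for illustration only:

```javascript
// Format the `documents` returned by a web-search connector into a
// readable source list. formatSources is a custom helper, and
// mockResponse is illustrative data, not a real API response.
function formatSources(response) {
  return (response.documents ?? []).map(
    (doc, i) => `[${i + 1}] ${doc.title} - ${doc.url}`
  );
}

const mockResponse = {
  text: "Nvidia stock is trading at ...",
  documents: [
    { title: "Nvidia Stock Price Today", url: "https://example.com/nvda" },
  ],
};

console.log(formatSources(mockResponse));
// [ '[1] Nvidia Stock Price Today - https://example.com/nvda' ]
```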
Mistral
Mistral AI is a French startup founded in April 2023 by ex-Meta and Google DeepMind employees, focused on open-source large language models (LLMs). Their model family includes the powerful Mistral 7B with 7 billion parameters, the efficient Mixtral 8x7B, a sparse mixture-of-experts model with roughly 47 billion total parameters (only about 13 billion active per token), and the versatile Mistral Large. People love Mistral AI for its commitment to open source, customization options via fine-tuning, compute efficiency, and availability across various platforms, making advanced AI tools accessible to everyone.
Head to the Mistral console, sign up for a free trial, create your API keys, and follow the code below.
require("dotenv/config");
const { Mistral } = require("@mistralai/mistralai");

const mistral = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

async function main() {
  const chatResponse = await mistral.chat.complete({
    model: "mistral-large-latest",
    messages: [{ role: "user", content: "What is the capital of India?" }],
  });
  console.log("Chat:", chatResponse.choices[0].message.content);
}

main();
Conclusion
Besides the providers mentioned, there are also Anyscale, Eden AI, and the major clouds: Azure offers OpenAI's models, while AWS and Google Cloud offer Anthropic's Claude, alongside open-source options like Mistral and Llama. No matter which model you choose, always check its capabilities, pricing, and response time. In the next tutorial, we'll use everything we've learned so far to build a pizza chatbot with Node.js 🍕🤖. Until next time, keep coding!
Written by
Arsalan Yaldram