How to use Gemini with the OpenAI SDK (JavaScript)


Google’s Gemini models can be used through OpenAI-compatible endpoints. That means you can keep using your existing OpenAI SDK, or the newer OpenAI Agents SDK, and simply point it at Gemini by swapping the base URL and API key.
This guide shows you, step by step, how to connect Gemini to the OpenAI SDK in JavaScript, and then how to run an agent using the OpenAI Agents SDK.
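Concretely, the swap is just two constructor options. Here is a minimal sketch (GEMINI_API_KEY is a placeholder environment variable name used only in this snippet, not later in the guide):
import OpenAI from "openai";

// Default OpenAI setup:
// const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Same SDK pointed at Gemini instead: only the key and base URL change.
const client = new OpenAI({
  apiKey: process.env.GEMINI_API_KEY, // your Gemini key from Google AI Studio
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});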
1. Prerequisites
Before you begin, make sure you have:
Node.js v18 or higher
A Gemini API key from Google AI Studio (copy this key and keep it safe)
Dependencies installed
npm install openai @openai/agents dotenv zod
2. Project setup
Create a .env file in your project root:
BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
API_KEY=your-gemini-api-key
MODEL_NAME=gemini-2.5-flash
Here:
BASE_URL points to Gemini's OpenAI-compatible endpoint
API_KEY is your Gemini API key
MODEL_NAME is the Gemini model you want to use
3. Wiring Gemini into OpenAI SDK
We now set up a custom OpenAI client that uses Gemini’s endpoint.
import OpenAI from "openai";
import "dotenv/config"; // loads .env automatically, no config() call needed

if (!process.env.BASE_URL || !process.env.API_KEY || !process.env.MODEL_NAME) {
  throw new Error("Please set BASE_URL, API_KEY, and MODEL_NAME in .env");
}

const openai = new OpenAI({
  apiKey: process.env.API_KEY,
  baseURL: process.env.BASE_URL,
});

// Simple test call without the Agents SDK
async function simpleChat() {
  const res = await openai.chat.completions.create({
    model: process.env.MODEL_NAME,
    messages: [{ role: "user", content: "Write a short poem about JavaScript" }],
  });
  console.log(res.choices[0].message.content);
}

simpleChat();
If you run this file, you should get a response from Gemini.
This proves your Gemini key and endpoint are working. Only after confirming this step should you integrate with the Agents SDK.
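As an optional second check, you can also stream the response. Streaming works through Gemini's compatibility layer; this sketch reuses the same openai client from above and simply passes stream: true:
async function streamedChat() {
  const stream = await openai.chat.completions.create({
    model: process.env.MODEL_NAME,
    messages: [{ role: "user", content: "Explain closures in one paragraph" }],
    stream: true, // deltas arrive as they are generated
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
  process.stdout.write("\n");
}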
4. Using Gemini with the OpenAI Agents SDK
The OpenAI Agents SDK lets you build agents with tools, function calling, and structured workflows. To make it work with Gemini, we set up a custom provider.
Here is a working example:
import {
  Agent,
  Runner,
  setTracingDisabled,
  tool,
  OpenAIProvider,
  setDefaultOpenAIClient,
  setOpenAIAPI,
} from "@openai/agents";
import OpenAI from "openai";
import "dotenv/config";
import { z } from "zod";

// Validate env
if (!process.env.BASE_URL || !process.env.API_KEY || !process.env.MODEL_NAME) {
  throw new Error("Missing BASE_URL, API_KEY, or MODEL_NAME in .env");
}

// 1. Create a custom OpenAI client pointing to Gemini
const openaiClient = new OpenAI({
  apiKey: process.env.API_KEY,
  baseURL: process.env.BASE_URL,
});

// 2. Set up provider and defaults
const modelProvider = new OpenAIProvider({ openAIClient: openaiClient });
setDefaultOpenAIClient(openaiClient);
setOpenAIAPI("chat_completions"); // Gemini's compatibility layer speaks the chat completions API
setTracingDisabled(true); // tracing uploads need an OpenAI platform key

// 3. Example tool
const getWeather = tool({
  name: "get_weather",
  description: "Get the weather for a city",
  parameters: z.object({
    city: z.string().describe("The city to get weather for"),
  }),
  async execute(input) {
    console.log(`[debug] getting weather for ${input.city}`);
    return `The weather in ${input.city} is sunny.`;
  },
});

// 4. Run agent
async function main() {
  const agent = new Agent({
    name: "Assistant",
    instructions:
      "You only respond in short sentences. Always mention temperature in Fahrenheit and wind speed as well.",
    model: process.env.MODEL_NAME,
    tools: [getWeather],
  });

  const runner = new Runner({ modelProvider });
  const result = await runner.run(agent, "What's the weather in Tokyo?");
  console.log(result.finalOutput);
}

main();
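If you need more than free-form text, the Agents SDK can also return structured output via the outputType option and a zod schema. The following is a rough sketch, not a tested recipe: it assumes that combining a tool with outputType works through Gemini's compatibility layer, which supports structured outputs but may behave differently from OpenAI's own models.
// Hypothetical variant of the agent above that returns a typed weather report.
const WeatherReport = z.object({
  city: z.string(),
  temperatureF: z.number(),
  windSpeedMph: z.number(),
  summary: z.string(),
});

async function structuredMain() {
  const agent = new Agent({
    name: "WeatherReporter",
    instructions: "Answer weather questions using the get_weather tool.",
    model: process.env.MODEL_NAME,
    tools: [getWeather],
    outputType: WeatherReport, // the SDK parses the final answer against this schema
  });

  const runner = new Runner({ modelProvider });
  const result = await runner.run(agent, "What's the weather in Tokyo?");
  console.log(result.finalOutput); // a plain object matching WeatherReport
}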
5. Key Points to Remember
Always use the chat completions API for Gemini with the OpenAI SDK.
baseURL must be set to https://generativelanguage.googleapis.com/v1beta/openai/
MODEL_NAME must exactly match the Gemini model you want (gemini-2.5-flash, gemini-2.5-pro, etc.).
Disable tracing unless you have a real OpenAI API key from platform.openai.com.
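If you do have an OpenAI platform key and want to keep tracing on while Gemini serves the model calls, the Agents SDK lets you supply a separate key just for trace export. A small sketch, assuming the setTracingExportApiKey helper from @openai/agents and an OPENAI_TRACING_KEY variable you would add to .env yourself:
import { setTracingDisabled, setTracingExportApiKey } from "@openai/agents";

// Export traces with a real OpenAI key while all model traffic
// still goes to Gemini through the custom provider.
if (process.env.OPENAI_TRACING_KEY) {
  setTracingExportApiKey(process.env.OPENAI_TRACING_KEY);
} else {
  // Fall back to disabling tracing if no OpenAI platform key is available.
  setTracingDisabled(true);
}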
6. Debugging Common Issues
401 Unauthorized: Check that your Gemini API key is valid and correctly set in .env.
Model not found: Double-check the MODEL_NAME. If you mistype it, Gemini will reject the request.
Wrong API type: If you forget setOpenAIAPI("chat_completions"), the SDK may try to use the newer Responses API, which Gemini does not yet support.
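To see these failures clearly instead of an unhandled promise rejection, wrap the test call in a try/catch. The OpenAI SDK throws OpenAI.APIError instances that expose the HTTP status, so a rough diagnostic wrapper (reusing the openai client from step 3) looks like this:
try {
  const res = await openai.chat.completions.create({
    model: process.env.MODEL_NAME,
    messages: [{ role: "user", content: "ping" }],
  });
  console.log(res.choices[0].message.content);
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // 401 -> bad API_KEY, 404 -> bad MODEL_NAME or BASE_URL
    console.error(`Gemini request failed: ${err.status} ${err.message}`);
  } else {
    throw err;
  }
}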
Conclusion
With just a few tweaks, you can connect Gemini to the OpenAI SDK and run agents through the OpenAI Agents SDK. The key is setting the baseURL, apiKey, and model correctly. Start small with direct chat calls, then move to agents with tools once your setup is confirmed.