What is RPC? Make Network Calls Feel Like Function Calls


RPC (Remote Procedure Call) abstracts away the network call and makes it feel like a local function call on the same server, so you can focus on the business logic instead!
Imagine you want to make LLM calls in your PHP project using the GPT-SDK, but since there is no official SDK for PHP, you have to run that part in something like Node.js. This is exactly the kind of situation where you can use RPCs.
This way,
I’ll save myself from writing and managing a REST API for every AI interaction.
I won’t need to parse JSON or handle HTTP status codes manually.
I’ll get typed request & response objects.
I’ll be able to make API calls as if they were local methods.
This makes development faster, safer, and more maintainable compared to traditional REST APIs.
I’ll be using gRPC — it is an open-source RPC framework by Google.
You can build a custom RPC system, but then you’d have to:
You’ll have to handle serialization (JSON or binary)
Define and enforce contracts (APIs)
Manage streaming, auth, retries, etc.
Make integration seamless across multiple languages
And much more… It’s usually better to use gRPC (Google Remote Procedure Call) or another framework like tRPC, JSON-RPC, or XML-RPC unless you need something super custom or lightweight.
Stubs and the stub interface definition
This is the most important part of an RPC system: the stub interface definition, i.e. the .proto file. This file defines the remote functions you will be able to call and the request and response objects you will be using.
How does the abstraction work here? Using stubs.
When sending data from our PHP client in the request cycle, we need a way to convert the data into a format that any gRPC server can understand.
This is where Protocol Buffers, or Protobuf (a high-performance alternative to JSON/XML for structured data exchange, and the format used in gRPC), comes in.
So, our PHP client serializes the data into the Protobuf wire format using stubs and sends it over the network via gRPC. On the Node.js server, gRPC automatically deserializes the incoming Protobuf data back into usable JavaScript objects, again using stubs.
In the response cycle, the same thing happens in reverse.
This process abstracts away all the low-level networking details and lets both services communicate as if they were calling local functions, which is the core power of gRPC.
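To make the stub idea concrete, here is a toy (non-gRPC) sketch in plain Node.js. The binary encoding and the fake "network" are stand-ins for what real Protobuf and gRPC do; the point is that the caller only ever sees an ordinary function.

```javascript
// A toy "wire format": serialize to a Buffer, the way Protobuf would
// (here we cheat and use JSON under the hood).
const encode = (obj) => Buffer.from(JSON.stringify(obj));
const decode = (buf) => JSON.parse(buf.toString());

// Fake "server" on the other side of the wire: receives bytes, returns bytes.
function fakeNetwork(requestBytes) {
  const request = decode(requestBytes);            // server-side stub deserializes
  const response = { summary: `Summary of: ${request.prompt}` };
  return encode(response);                         // ...and serializes the reply
}

// Client-side stub: to the caller this looks like a local function call.
function summarizeStub(prompt) {
  const requestBytes = encode({ prompt });         // client-side stub serializes
  const responseBytes = fakeNetwork(requestBytes); // the "network" hop
  return decode(responseBytes).summary;            // deserialize the reply
}

// The caller never touches buffers, sockets, or status codes.
console.log(summarizeStub('gRPC in one line'));
```

In a real system, `fakeNetwork` is replaced by an actual TCP/HTTP2 round trip and the encode/decode pair by generated Protobuf code, but the shape of the abstraction is the same.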
How will this work?
Write a .proto file with all messages and services.
Use protoc (the Protobuf compiler) to generate client- and server-side code, which will be used to make the required network calls.
Just implement the new functions in your code which were generated using protoc.
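As an illustration, the codegen step for the PHP side might look like this (the output directory and plugin path are assumptions and will vary by install; the Node.js side can skip codegen entirely by loading the .proto at runtime with @grpc/proto-loader, as the server code below does):

```bash
# Generate PHP message classes and the gRPC client stub from llm.proto.
# Requires protoc plus the grpc_php_plugin that ships with the gRPC project.
protoc --proto_path=. \
  --php_out=./php/generated \
  --grpc_out=./php/generated \
  --plugin=protoc-gen-grpc=/usr/local/bin/grpc_php_plugin \
  llm.proto
```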
Sample .proto file
syntax = "proto3";

// Define the package name (helps organize generated code in namespaces/modules)
package llm;

service LLMService {
  // Define an RPC method named "Summarize"
  // It takes a PromptRequest as input and returns a SummaryResponse
  rpc Summarize (PromptRequest) returns (SummaryResponse);
}

// Define the request message that the client sends to the server
message PromptRequest {
  // The prompt text to be summarized
  string prompt = 1; // "1" is the field tag
}

// Define the response message that the server sends back to the client
message SummaryResponse {
  // The summarized output generated from the prompt
  string summary = 1; // "1" is the field tag
}
Node.js Server
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// This reads the .proto definition and converts it into usable JavaScript objects
const packageDefinition = protoLoader.loadSync('llm.proto', {});
const llmProto = grpc.loadPackageDefinition(packageDefinition).llm;

// This will be called when a client makes a gRPC request to LLMService.Summarize
async function summarize(call, callback) {
  try {
    // Extract the 'prompt' string from the incoming request
    const prompt = call.request.prompt;
    // ... your LLM logic here, producing `response` ...
    // Send the summary back to the client
    callback(null, { summary: response });
  } catch (err) {
    callback(err);
  }
}

function main() {
  const server = new grpc.Server();
  // Register the LLMService and its RPC method implementations
  server.addService(llmProto.LLMService.service, {
    Summarize: summarize
  });
  // Bind the server to port 50051 and start it
  server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), () => {
    server.start();
    console.log('gRPC server running on port 50051');
  });
}

main();
PHP Client
<?php
// Composer autoload for the generated stubs and the gRPC extension classes
require __DIR__ . '/vendor/autoload.php';

$client = new \LLM\LLMServiceClient('localhost:50051', [
    'credentials' => Grpc\ChannelCredentials::createInsecure(),
]);

// Build the request message
$request = new \LLM\PromptRequest();
$request->setPrompt("Summarize what gRPC is in one line.");

// Make the gRPC call and wait for the response
list($response, $status) = $client->Summarize($request)->wait();

if ($status->code === Grpc\STATUS_OK) {
    echo "GPT Summary: " . $response->getSummary() . "\n";
} else {
    echo "gRPC Error: " . $status->details . "\n";
}
Instead of maintaining a bulky REST layer between services, using gRPC lets my PHP backend treat GPT calls like native functions — even though the actual logic runs in Node.js using the official SDK.
$response = $client->Summarize($request)->wait();
Just define once in .proto — and gRPC takes care of everything. Need more functions? Just add them to the .proto file and implement on the server.
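For example, adding a hypothetical Translate method (the method name and message fields here are made up for illustration) is just a few more lines in llm.proto, after which you regenerate the stubs and implement the method on the Node.js server:

```proto
service LLMService {
  rpc Summarize (PromptRequest) returns (SummaryResponse);
  // New method: regenerate stubs, then implement it on the server
  rpc Translate (TranslateRequest) returns (TranslateResponse);
}

message TranslateRequest {
  string text = 1;
  string target_language = 2;
}

message TranslateResponse {
  string translated_text = 1;
}
```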
Written by Ujjwal Pathak