Building an HTTP SQLite Database with Cloudflare Durable Objects
Introduction
Last week during Cloudflare’s birthday week you likely heard that Durable Objects now support SQLite as a mechanism for both storing and accessing data. Prior to this announcement, Durable Objects only provided a KV (key-value) storage interface. Getting to interact with stored data via SQL is a huge glow up for DOs.
Now why in the world would we want to use a Durable Object as a SQLite database? If you have used a Cloudflare D1 database then you have already interacted with a DO serving as a database, because that’s precisely what D1 is built on top of, so we already know they can serve this purpose. Knowing that, we could simply use D1 and call it a day, but where is the fun in that? In this blog post I want to explore how we can build our own database application interface layer on a DO, where storage and compute are tied together so we get near-instant results, and add some custom logical features to our own database.
TLDR; We can build a database offering like D1 with Durable Objects because DOs are a lower-level “compute with storage” building block. So let’s do that together.
Let’s Build
To get started, we’ll instantiate a new project from a Cloudflare durable object template by running the following commands in your terminal.
npm create cloudflare@latest -- durable-object-starter
cd durable-object-starter
We need to adjust our wrangler.toml file to enable SQLite storage instead of the default KV storage. All we need to do is comment out, or remove, the new_classes line in favor of new_sqlite_classes as shown below.
# new_classes = ["DatabaseDurableObject"]
new_sqlite_classes = ["DatabaseDurableObject"]
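For reference, once that change is made the relevant pieces of wrangler.toml should look roughly like the following. The binding and class names below match what we use later in this post, but the project name and compatibility date are placeholders, so keep whatever your generated config already contains.
name = "durable-object-starter"
main = "src/index.ts"
compatibility_date = "2024-09-25"

[[durable_objects.bindings]]
name = "DATABASE_DURABLE_OBJECT"
class_name = "DatabaseDurableObject"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["DatabaseDurableObject"]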
Time to shift over to the class declaration for our durable object. This class is responsible for accessing the SQL database and handling all the interactions with it. All we’ll do for now is scaffold out what we will need, and we can do this in the src/index.ts file made available to us in the template project.
import { DurableObject } from "cloudflare:workers";

export class DatabaseDurableObject extends DurableObject {
    // Handle to this Durable Object's SQLite-backed storage
    public sql: SqlStorage;

    constructor(ctx: DurableObjectState, env: Env) {
        super(ctx, env);
        this.sql = ctx.storage.sql;
    }

    async fetch(request: Request): Promise<Response> {
        // TBD, we'll fill this in shortly
        return new Response("Not implemented", { status: 501 });
    }
}
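As a side note, SQL execution inside a durable object is synchronous, so if you already know your schema up front you can create tables directly in the constructor. A small optional sketch of that, using the artist table we create later in this post:
constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.sql = ctx.storage.sql;

    // Optional: make sure our example table exists as soon as the object wakes up
    this.sql.exec(
        "CREATE TABLE IF NOT EXISTS artist(artistid INTEGER PRIMARY KEY, artistname TEXT);"
    );
}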
In order for us to route requests to our durable object, we need to declare a fetch handler for our Cloudflare worker. Below our exported class code, let’s add that functionality.
// Any stable string works here; a fixed name routes every request to the same
// Durable Object instance, giving us a single shared database.
const DURABLE_OBJECT_ID = 'sqlite-durable-object';

export default {
    async fetch(request, env, ctx): Promise<Response> {
        /**
         * Retrieve the Durable Object identifier from the environment bindings and instantiate a
         * Durable Object stub to interact with the Durable Object.
         */
        let id: DurableObjectId = env.DATABASE_DURABLE_OBJECT.idFromName(DURABLE_OBJECT_ID);
        let stub = env.DATABASE_DURABLE_OBJECT.get(id);

        /**
         * Pass the fetch request directly to the Durable Object, which will handle the request
         * and return a response to be sent back to the client.
         */
        return await stub.fetch(request);
    },
} satisfies ExportedHandler<Env>;
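If TypeScript complains about env.DATABASE_DURABLE_OBJECT, make sure your Env type includes the binding. The template can generate types for you with wrangler types, but a minimal hand-written version would look something like this:
interface Env {
    // Must match the binding name declared in wrangler.toml
    DATABASE_DURABLE_OBJECT: DurableObjectNamespace;
}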
Realistically we now have a fully deployable durable object that can have a request passed into it. This lets us add some path handling and determine what actions we should run inside our durable object, which in our case means handling the ability to execute queries against our SQLite database.
Lastly, we need to be able to handle the requests within our durable object to execute queries against our database. Let’s fill in the blanks to complete that functionality.
async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (request.method === 'POST' && url.pathname === '/query') {
        return this.queryRoute(request);
    } else {
        return new Response("Unknown operation", { status: 400 });
    }
}

async queryRoute(request: Request): Promise<Response> {
    try {
        // Expect a JSON body shaped like: { "sql": "SELECT ..." }
        const { sql } = await request.json() as any;

        // Run the statement and collect all result rows into an array
        const result = this.sql.exec(sql).toArray();

        return new Response(JSON.stringify(result), {
            headers: { 'Content-Type': 'application/json' },
        });
    } catch (error: any) {
        // Error objects don't JSON.stringify cleanly, so return the message instead
        return new Response(JSON.stringify({ error: error.message }), { status: 500 });
    }
}
Above, our fetch handler listens for a POST request to the /query endpoint of our worker and passes that request to another function, queryRoute(). From there we take the SQL query out of the POST request body, execute it against the durable object’s SQL storage, and receive a result back.
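It’s worth noting that sql.exec() also accepts positional bindings after the statement, so if you later want to send parameters separately instead of concatenating them into the SQL string, the handler only needs a small tweak. A sketch of that variation is below; the params field in the request body is our own invention, not something the template defines:
async queryRoute(request: Request): Promise<Response> {
    try {
        // Hypothetical body shape: { "sql": "INSERT INTO artist VALUES (?, ?)", "params": [123, "Alice"] }
        const { sql, params = [] } = await request.json() as any;

        // SqlStorage.exec takes the statement followed by its positional bindings
        const result = this.sql.exec(sql, ...params).toArray();

        return new Response(JSON.stringify(result), {
            headers: { 'Content-Type': 'application/json' },
        });
    } catch (error: any) {
        return new Response(JSON.stringify({ error: error.message }), { status: 500 });
    }
}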
Deploy & Test
As part of the Cloudflare template we used to quick-start our project, our package.json file has a command available to us to quickly deploy, so let’s run the following.
npm run deploy
In the deployment response you will be provided a URL that your worker has been deployed to. You can use the cURL requests below to test and verify that our database is working over HTTP. The requests do the following:
Create a table
Insert data into the table
Retrieve data from the table
Create Table
curl --location --request POST 'https://YOUR-WORKER-URL.workers.dev/query' \
--header 'Content-Type: application/json' \
--data-raw '{
"sql": "CREATE TABLE IF NOT EXISTS artist(artistid INTEGER PRIMARY KEY, artistname TEXT);"
}'
Insert Data
curl --location --request POST 'https://YOUR-WORKER-URL.workers.dev/query' \
--header 'Content-Type: application/json' \
--data-raw '{
"sql": "INSERT INTO artist (artistid, artistname) VALUES (123, '\''Alice'\''), (456, '\''Bob'\''), (789, '\''Charlie'\'');"
}'
Retrieve Data
curl --location --request POST 'https://YOUR-WORKER-URL.workers.dev/query' \
--header 'Content-Type: application/json' \
--data-raw '{
"sql": "SELECT * FROM artist;"
}'
And if all goes to plan (we hope it does) you should see the following response after executing your third cURL command:
[
{
"artistid": 123,
"artistname": "Alice"
},
{
"artistid": 456,
"artistname": "Bob"
},
{
"artistid": 789,
"artistname": "Charlie"
}
]
Considerations & Limitations
This is a basic implementation of how to expose the SQLite database over HTTP in its simplest form. To convert this into production-ready code we would need more error handling in areas such as where we exec our SQL queries, as well as much more event logging, authorization verification and more.
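As one example, authorization could start out as a simple shared-secret check in the worker’s fetch handler before a request ever reaches the durable object. Here is a minimal sketch, assuming you store a token as a worker secret named AUTH_TOKEN (for example via wrangler secret put AUTH_TOKEN) and add it to your Env type; none of that is set up by the template:
export default {
    async fetch(request, env, ctx): Promise<Response> {
        // Reject any request that doesn't carry our shared bearer token
        const authHeader = request.headers.get('Authorization');
        if (authHeader !== `Bearer ${env.AUTH_TOKEN}`) {
            return new Response('Unauthorized', { status: 401 });
        }

        let id: DurableObjectId = env.DATABASE_DURABLE_OBJECT.idFromName(DURABLE_OBJECT_ID);
        let stub = env.DATABASE_DURABLE_OBJECT.get(id);
        return await stub.fetch(request);
    },
} satisfies ExportedHandler<Env>;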
Synchronous. All I/O is executed synchronously, which means long-running queries can become a bottleneck.
Server Location. The durable object is created near the origin of the request that first creates it, which makes it great for a single locale but less ideal for globally distributed access.
Billing. SQLite in Durable Objects is billed on three dimensions: rows read, rows written and SQL storage.
Join the Adventure
We’re working on building an open-source database offering from the basic building blocks we talked about above. When we get to join compute with storage the way we can with durable objects, we think there’s a lot more that can be offered to users than what some other providers are capable of.
Twitter: https://twitter.com/BraydenWilmoth
Github Repo: https://github.com/Brayden/starbasedb
Outerbase: https://outerbase.com