Exploring MCP Server for Oracle Database & APEX

Scott Spendolini

When I first heard the term MCP used in the context of AI, I was immediately taken back to the 80s, when the movie Tron was released. For those who have not seen it, Tron is a futuristic movie about a curious user who is literally sucked into a computer and has to fight for his life. The main bad guy in the movie is not actually a guy, but rather the Master Control Program - or MCP. The MCP was a precursor to today’s AI, as it was self-aware, got smarter on its own and had plans for world domination.

Fast-forward a bunch of years, and that three-letter acronym - MCP - is making its way around the AI crowd again. But this time, it stands for Model Context Protocol. This modern version of MCP is not trying to kill you; instead, it provides a critical link between your private data and any LLM. The protocol was designed by Anthropic in November 2024 and was quickly adopted as an open standard by all of the major players in AI shortly thereafter, making it one of the fastest-growing areas of AI. It’s often compared to a USB hub, as it connects one type of service (AI) with another (data).

Why All the Hype?

MCP servers are getting a lot of hype these days, and rightly so. To understand why, let’s take a step back and think about how we interact with LLMs today. The LLMs that we use today - ChatGPT, Grok, Claude, etc. - are all hosted on the public internet. Anything we send to them is received, stored and processed on their hardware.

LLMs are great at answering questions based on the data that they were trained on. This includes almost all facets of programming in almost any modern language. Need a regular expression written? LLM’s got you. Want to validate a JSON document? Just throw it at ChatGPT. How about a quick check of my code for security issues? Done and done.

Where LLMs quickly fall apart is when you need to know about YOUR data. Since your data - or your organization’s data - is private, LLMs were never trained on it and don’t know anything about it. Sure, you can use things like RAG to give LLMs a small sample of your data, but in many cases, that’s just nowhere near enough.

Before MCP, you really only had a single choice if you wanted to use your data with an LLM: train it. Training an LLM is a significant investment, as it means that you’ll have to procure the processors, servers and disk to load all of your data onto and let the LLM go to town learning about it. Even training small datasets can easily rack up bills in the 6 or 7 digit range.

This is where MCP changes the game.

MCP Server for Oracle Database

Oracle’s new MCP Server for Oracle Database serves a specific purpose: acting as a broker between an LLM and your data in an Oracle Database. One “end” of the MCP server connects to your database, while the other “end” plugs into the LLM of your choice.

The MCP server acts as an intermediary of sorts. In the case of MCP Server for Oracle Database, it exposes a specific set of tools that it can be asked to use. From the Oracle SQLcl documentation:

  • list-connections: Discovers and lists all saved Oracle Database connections on your machine.

  • connect: Establishes a connection to one of your specified named connections.

  • disconnect: Terminates the current, active Oracle Database connection.

  • run-sql: Executes standard SQL queries and PL/SQL code blocks against the connected database.

  • run-sqlcl: Executes SQLcl-specific commands and extensions.

The key tool here is run-sql. It allows the MCP server to interact with the LLM and then run the SQL that the LLM requests. Again, the LLM never has direct access to the database - it can only talk to and receive input from the MCP server. In many cases, the LLM will want to run SQL to get bits of data so that it can complete the task it was given. For example, it may need to know which columns are in the EMP table so that it can write a query. Or, if you ask something like “provide a summary of all orders for product A”, the LLM will need to see your data in order to provide the summary. Tools like Cline allow the end user to see what is about to be sent to the LLM and either approve or reject it.
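Under the hood, MCP clients and servers speak JSON-RPC. A run-sql invocation might look roughly like the sketch below - the exact argument names are my assumption, so consult the MCP specification and the SQLcl documentation for the authoritative shape:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "run-sql",
    "arguments": {
      "sql": "select count(*) from emp"
    }
  }
}
```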

Get More Information on MCP Server for Oracle Database

Before diving into the rest of this article, it may be helpful to take a look at some other content about the MCP Server for Oracle Database. These are some great places to start:

Install & Configure MCP for Oracle Database

OK, enough background. Let’s get MCP Server for Oracle Database installed & configured so that we can play with it!

Before we go much further, I have to issue the following security warning:

💡
WARNING! Allowing AI to connect to your database and issue commands may have disastrous results. Please exercise extreme caution when connecting any MCP to your database. Strongly consider using a copy of production data or a test database.

Create the Database User

The MCP server will be able to do anything that the schema it connects as can do. Thus, one way to limit the blast radius is to create a limited-privilege schema and connect through that. That schema can be set up to only see a limited set of views & APIs, for example. If MCP tries to access another schema’s objects, it will be blocked by database security. This same principle can be applied to the parse-as schema in an APEX application; see Chapter 13 in a book I wrote many years ago for more details…

In order to both read APEX metadata and call APEX APIs, we’ll want to create a new schema and associate a role to it. It’s much safer this way vs. letting MCP connect to something like SYS, SYSTEM or ADMIN.

This new schema - called APEX_ADMIN_USER - will be granted the APEX_ADMINISTRATOR_ROLE role so that it can see any application in any workspace as well as call APEX administration APIs.

To create the APEX_ADMIN_USER schema:

  1. Connect to your database as SYS, SYSTEM or ADMIN - essentially a user that has DBA privileges.

  2. Run the following commands to create the APEX_ADMIN_USER:

-- create the new user (use a stronger password in practice)
create user apex_admin_user identified by oracle;

-- give the new user quota on the USERS tablespace
alter user apex_admin_user quota unlimited on users;

-- grant connect to the new user
grant connect to apex_admin_user;

-- grant the apex_administrator_role to the user
grant APEX_ADMINISTRATOR_ROLE to apex_admin_user;

-- allow the user to create a table & sequence
grant create table, create sequence to apex_admin_user;

Create the Database Connection

Next, we need to create a database connection for our new user in SQL Developer. There’s nothing special about this step at all; you just need to create a uniquely named connection so that the MCP server can use it to connect to the database. I’ve found that it’s easier to use simple words to name the connection - something like apex_admin_user works great.

  1. In SQL Developer for VS Code, click on the “+” in the Connections panel.

  2. Complete the connection details with your specifics. For my example, here’s what I used:

  3. Optionally, click Test to be sure that the connection is valid.

  4. Click Save to save the connection.

Installing and Configuring Cline

Cline is “an AI development assistant which integrates with Microsoft Visual Studio Code. It provides an interface between your IDE and LLMs, facilitating code development, increasing productivity and lowering the barrier to entry for new coders…”

In other words, Cline can help analyze and generate code via VS Code. It integrates with practically any LLM and provides a chat-like interface accessible from VS Code.

The documentation for SQLcl MCP Server is quite good, and the steps to install & configure Cline are quick & easy.

  1. Navigate to the Oracle MCP Server Documentation.

  2. Follow the section labeled 3.5.2 Configuring Cline in VS Code.
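Once configured, the relevant portion of Cline’s MCP settings file (cline_mcp_settings.json) should look something like the snippet below, which launches SQLcl in MCP mode. Treat this as a sketch: the server name is arbitrary, and the command assumes sql (SQLcl) is on your PATH - follow the Oracle documentation above for the authoritative steps.

```json
{
  "mcpServers": {
    "sqlcl": {
      "command": "sql",
      "args": ["-mcp"]
    }
  }
}
```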

Connecting with Cline

Now that Cline is installed and configured with an LLM, all we need to do is connect to our new schema.

To do this:

  1. Select Cline in the sidebar.

  2. You should see a screen that looks like this:

  3. Enter the following into the prompt and hit enter: connect to apex_admin_user

Cline will then attempt to connect to the database using the connection properties defined in apex_admin_user. As it attempts to connect, it may ask you for permission to execute certain commands. You can approve this each time, or set auto-approve to automatically approve any future request to use the same MCP server tool - in this case, connect.

Here’s the output from my session - yours may vary depending on the LLM model you’re using as well as how many times you’ve connected in the past.

At this point, we’re ready to roll.

Optional Cleanup

Before we start, there’s an optional step that you may want to perform. The first time Cline connects, the MCP server will create a table called DBTOOLS$MCP_LOG in your schema. This table is used to log all of the actions that MCP performs. MCP also annotates the SQL it runs with comments, so its activity shows up clearly in common log & trace sources such as V$SQL, V$SESSION, ASH, AWR, etc. This makes it crystal clear which transactions were performed by the MCP server.
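If you’re curious what the MCP server has been up to, you can inspect that log table directly. I’m deliberately not assuming specific column names here, as they may change between SQLcl releases:

```sql
-- review everything the MCP server has logged in this schema
select * from dbtools$mcp_log;
```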

When we created the APEX_ADMIN_USER schema, we granted create table and create sequence so that MCP could create the table and corresponding sequence. If you want to revoke this pair of privileges, you can do so by running the following command:

-- once we have connected via Cline, optionally revoke create table & sequence
revoke create table, create sequence from apex_admin_user;

Using MCP with APEX

Now that we’re connected, we can literally use plain English to query the database about our APEX environment. For this section, I am using my Free Tier Cloud database, which has a decent number of workspaces & applications. Your results will vary.

Number of Workspaces

Let’s start off simple. Enter the following into the prompt and press enter: how many workspaces are there?

What Just Happened?

Once we connected to the database, we asked Cline a simple question: how many workspaces are there? Workspaces are used in several software applications, so how did it know that I meant APEX workspaces?

That’s because my request was sent to the LLM with a prompt. That prompt provided additional context to the LLM so that it could choose the correct definition of workspace. Since the prompt likely stated that we’re connected to an Oracle Database, the LLM was able to return the correct SQL to run to answer the question.

It’s no different than if I were to open up a new ChatGPT window and ask the same question with a similar prompt:

Keep in mind that at no time does the LLM connect to the database or see any of the data. The MCP server simply takes the question that I sent, along with the prompt that’s embedded into the MCP Server, and queries the LLM with that. The LLM’s response is then returned to the MCP server, which turns around and runs the SQL using the database connection that we established.

Mechanics

The LLM actually wraps the SQL in JSON and returns it to the MCP Server. If we take a look at the JSON, it’s pretty basic, and it’s very obvious which SQL is about to be run:

{
  "sql": "SELECT /* LLM in use is claude-sonnet-4 */ COUNT(*) AS workspace_count FROM apex_workspaces;",
  "mcp_client": "cline",
  "model": "claude-sonnet-4"
}

Notice the snippet /* LLM in use is claude-sonnet-4 */ embedded in the SQL. This is how the MCP Server ensures that SQL in log tables is properly instrumented as having been run by an LLM.

Omitting that snippet, we’re left with a fairly simple SQL statement: SELECT COUNT(*) AS workspace_count FROM apex_workspaces

The MCP server will then run that query and return the results, which in my case is 15. Running the same SQL via SQLcl confirms that the answer is correct.

Number of Applications

Sticking with simple, let’s ask this question: how many applications are there?

In this example, it functioned exactly as the last one, with the only difference being the specifics of the question.

Applications per Workspace

So far, while this is cool, the complexity of the SQL - and maybe even the level of effort required - is somewhat negligible. I could probably type the SQL in the same time it takes me to type the question. Let’s take it up a notch.

This time, we’ll ask the following question: list all applications by workspace; sort workspaces alphabetically and applications by the number of pages from most to least

This is where things get wild, and the power of the MCP approach starts to become clear. The results are correct: we have a dataset with all applications sorted by workspace name and then by page count, high to low.

If we look at the full log, it looks like it stumbled a little bit when it tried to run the following SQL:

SELECT
  a.workspace,
  a.application_id,
  a.application_name,
  COUNT(p.page_id) AS page_count
FROM
  apex_applications a
  LEFT JOIN apex_application_pages p
    ON a.application_id = p.application_id
    AND a.workspace_id = p.workspace_id
GROUP BY
  a.workspace,
  a.application_id,
  a.application_name
ORDER BY
  a.workspace ASC,
  page_count DESC

It failed, as the column WORKSPACE_ID does not exist in the view APEX_APPLICATION_PAGES. Since Oracle returned an error code to the MCP Server, it took the entire error stack and sent that to the LLM, likely prefixed with something like “Help me fix this error”. The LLM was then able to realize that the column did not exist - nor was it necessary - so it removed it from the join condition and sent another JSON document with the updated SQL statement.
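The corrected query - with the offending join condition removed, exactly as the LLM did - would have looked something like this:

```sql
SELECT
  a.workspace,
  a.application_id,
  a.application_name,
  COUNT(p.page_id) AS page_count
FROM
  apex_applications a
  LEFT JOIN apex_application_pages p
    ON a.application_id = p.application_id
GROUP BY
  a.workspace,
  a.application_id,
  a.application_name
ORDER BY
  a.workspace ASC,
  page_count DESC
```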

Neat.

Create a Workspace

Querying the APEX views is not the only thing we can do. Since we have the APEX_ADMINISTRATOR_ROLE, we can call any of the APEX APIs to do things like create and delete workspaces and users, and alter instance settings.

Let’s try this: create a new workspace called blog. that workspace should parse as a new schema called blog. the blog schema should have a strong password.

This time, Cline performed two distinct actions:

  • created the Database Schema BLOG

  • created the APEX Workspace BLOG and associated the BLOG schema with it
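The workspace half of that request maps to the APEX_INSTANCE_ADMIN API. I didn’t capture the exact calls that Cline generated, but a plausible sketch using the ADD_WORKSPACE procedure would be:

```sql
begin
  -- create the BLOG workspace and associate the BLOG schema as its primary schema
  apex_instance_admin.add_workspace
    (
    p_workspace      => 'BLOG',
    p_primary_schema => 'BLOG'
    );
  commit;
end;
/
```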

We can confirm the schema and workspace creation by logging into the APEX Instance Administration console:

Create an APEX User

Lastly, we’ll need a user created in that workspace so that we can login. Let’s create one now: create a user in the blog workspace called admin. use a strong password and set the email address to noreply@oracle.com.

The first thing it did was try to call APEX_UTIL.CREATE_USER, but it used the wrong parameters. Here’s what it tried to call:

begin
  apex_util.create_user
    (
    p_user_name     => 'admin',    
    p_web_password  => 'Cg6^zpP4nWq2Xt#5',    
    p_email_address => 'noreply@oracle.com',
    p_workspace     => 'blog'  
    );
end;
/

Unfortunately, there is no p_workspace parameter in APEX_UTIL.CREATE_USER. Thus, it failed and sent the error stack back to the MCP Server.

💡
For the curious, you would need to set the workspace first via APEX_UTIL.SET_WORKSPACE. The correct API call would look like this:
begin
  apex_util.set_workspace(p_workspace => 'BLOG');
  apex_util.create_user
   (
    p_user_name     => 'admin',    
    p_web_password  => 'Cg6^zpP4nWq2Xt#5',    
    p_email_address => 'noreply@oracle.com'
    );
  commit;
end;
/

Here’s where things went wrong.

When it sent the error stack back to the LLM, the LLM replied with the following:

The call to APEX_UTIL.CREATE_USER failed due to a wrong number or types of arguments. This strongly suggests either the API signature has changed or the arguments are not correct for the current APEX version.

In modern APEX (20.x+), workspace user creation is best performed via APEX_UTIL.CREATE_WORKSPACE_USER, which requires these parameters (with many as optional)…

This is simply not true. There is no such API as APEX_UTIL.CREATE_WORKSPACE_USER whatsoever, as confirmed by this search in the APEX API Documentation:

This is the infamous AI hallucination that you’ve heard about. The fact remains that sometimes, the AI just makes stuff up, and this is one of those times.

My experience is that AI for APEX administrative tasks works better with Llama 4 or Grok 4 vs. GPT 4.1. Your results may vary, but if and when you hit one of these hallucinations, swapping in a different model - or even re-phrasing the question with more context - is all you need to do. It’s not a reason to give up on MCPs altogether, as the quality of the LLMs gets better and better with each iteration.

Create an APEX User: Take Two

Let’s switch models to Grok 4 and see if we can get it to work with the exact same prompt.

This time, the LLM was aware that either the workspace or security group ID had to be set before creating a new APEX user. The SQL that it ran was this:

BEGIN
  APEX_UTIL.SET_SECURITY_GROUP_ID(48876018747798056);
  APEX_UTIL.CREATE_USER
    (
    p_user_name       => 'ADMIN',
    p_email_address   => 'noreply@oracle.com',
    p_web_password    => 'Adm1n$ecure2025!',
    p_developer_privs => 'ADMIN'
    );
  COMMIT;
END;

It even added a COMMIT - something that is required for the transaction to be persisted.

Models matter. They are all trained on different data sets, and one may be missing critical details that cause it to send back false information, while another is perfectly aware of how to perform a task.

It would be interesting to have a “bake off” of sorts across the popular LLMs to see which performed the best in the context of MCP Server & APEX…

Analyze PL/SQL Processes

Let’s kick things up a notch to demonstrate just how powerful the MCP Server really is.

This time, we’re going to ask the MCP to do a number of things at once:

  • Search all applications for processes of type PL/SQL

  • Return all of the corresponding PL/SQL code

  • Provide an analysis of that code with advice as to how to optimize it

To make things even more fun, let’s just post this list as a single, condensed prompt. Something like this: show me any page process that uses plsql and provide suggestions as to how to optimize the code. return results in a JSON document

Wow, there’s a lot to unpack here. Let’s walk through all of the steps that the MCP Server took to get to the answer.

Trust the Process

First of all, it struggled to get the right column name & view name - failing first on PROCESS_SOURCE_TYPE_CODE (it should be PROCESS_TYPE_CODE) and then failing on APEX_APPLICATION_PAGE_PROCESSES (it should be APEX_APPLICATION_PAGE_PROC). Not wanting to admit it was wrong, it then looked for APEX_APPLICATION_PAGE_PROCESSES in ALL_OBJECTS. Again, it was nowhere to be found, since it simply doesn’t exist.

Next - and maybe as a sanity check - it wanted to verify which user it was connected to Oracle as. It then wanted to see which schemas matched APEX%. It looks like it was trying to get its bearings, to ensure that 1) APEX was installed and 2) it had privileges to see the views.

Still not 100% satisfied, it ran yet another query to see if APEX_240200 owned a table named APEX_APPLICATION_PAGE_PROCESSES. To be fair, I’ve also struggled with this view over the years, as I would expect it to be called APEX_APPLICATION_PAGE_PROCESSES or even APEX_APPLICATION_PAGE_PROCS. Is it possible to feel bad for an LLM?

Next, it widened the search by querying all APEX views that start with APEX_APPLICATION_PAGE. And there it is - APEX_APPLICATION_PAGE_PROC.

Now that it had the correct view, it inspected all of the columns. Remember - the initial error was that the column PROCESS_SOURCE_TYPE_CODE was invalid. Having seen the full list, it was able to construct and run another query with the correct column names. While the query now ran successfully, we had a data error: it was looking for rows with a PROCESS_TYPE_CODE of NATIVE_PLSQL and found none, since APEX now uses just PLSQL to identify PL/SQL processes.

Persistent as ever, it then asked for a list of distinct values of PROCESS_TYPE_CODE, hoping that would shed some light on what the value really is. And once again, it found what it was looking for.

Now that all of the pieces were in place, the last query could be executed. Once it had the results, it sent them to the LLM with the initial ask (provide advice for optimization and return it in JSON) and returned the results back to us.
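Putting all of those corrections together, the final query presumably looked something like the sketch below. I’m assuming PROCESS_SOURCE as the column holding the PL/SQL code - double-check the column list in your APEX version:

```sql
select application_id,
       page_id,
       process_name,
       process_source
  from apex_application_page_proc
 where process_type_code = 'PLSQL';
```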

Failing Forward

What amazed me most is that the MCP Server & LLM behaved strikingly similarly to how I would have, had I been asked a similar question without a deep understanding of how APEX works.

Basically, it employed the following workflow:

  • Make an assumption and try something

  • If it fails, alter your assumption with another assumption or more facts

  • Try again

  • Repeat as needed

In this case, the MCP Server & LLM were eventually successful, as the results were precisely what I was asking for.

This last example is what I find to be the most fascinating thing about using MCP Servers with LLMs. Given a plain English prompt, it used the power of the LLM to navigate through the database, find what it needed and deliver the correct result.

Summary

MCP Servers are evolving extremely fast, and there’s no sign of them slowing down. They represent a critical link that extends the power of AI to your internal data without the compromise of having to share the data with the LLM directly.

While MCP Servers can perform simple tasks, their true strength is when you challenge them with multi-step, complex tasks that would take a significant amount of time without automation. In the context of APEX, I can see multiple use cases for MCP Servers from both a developer’s point of view as well as the end user. People are already starting to kick the tires and sharing their experiences on social media, and the initial results are amazing.

MCP Servers aren’t magic - but paired with an LLM, they can feel like they are. Give them a simple request in plain English and they’ll reason, guess, retry, and adapt until they get it right with the tenacity of a person looking for a solution.

That’s the real shift that MCP servers represent: you’re not just running queries, you’re collaborating with a system that can navigate and interpret your data in all kinds of ways. And once you see that in action, it’s hard to go back to the old way.

End of line.
