Adding AI to APEX-SERT
Recently, I’ve been asked several times: when will APEX-SERT have AI embedded in it? It’s a valid question, considering the unprecedented proliferation of AI across nearly every product. Heck, even my mouse has an AI button now, so why not put it into APEX-SERT?
APEX-SERT works well largely due to APEX’s declarative architecture. Rules can be configured to inspect a column in a view: if the value meets a specific criterion, the rule passes; otherwise, it fails. There’s literally no room for ambiguity when it comes to deriving the value of the rule and determining whether or not that value meets a pre-determined criterion.
Offloading this task to AI introduces the potential for less than 100% precision, which is unacceptable for our needs. And given that our current rules engine works 100% of the time, why would we make any changes to it and potentially decrease the accuracy?
Thus, given the following:
- APEX-SERT is a security tool
- Security tools - by nature of their design and purpose - need to be precise
- AI regularly hallucinates and makes stuff up
At first glance, using AI with APEX-SERT might seem inadvisable.
But is it?
Make it Make Sense
AI is here to stay and is one of the most transformative events to occur in IT. Thus, blindly stiff-arming it - even with APEX-SERT - is not the right answer. In fact, it’s not even the right question. Put more succinctly, the right question is: how do we make AI work with APEX-SERT, not should we.
Using it to replace the rules engine still doesn’t make sense, as there are still too many drawbacks. First, if we were to rewrite APEX-SERT to work exclusively with AI, we would likely price out a segment of our users. AI is not free, and making that cost mandatory is a non-starter.
Second, many organizations have strict rules as to where and how AI can be used. Going “all in” on an AI-only version of APEX-SERT would also shrink its potential user base, as some organizations may not be allowed to use it on their applications anymore.
After some thought, it finally hit me where AI and APEX-SERT could meet: evaluating the quality of exceptions.
Exception Quality
Exceptions are, by and large, one of my favorite features of APEX-SERT. They enable developers to provide a rationale when they believe an attribute was incorrectly flagged - a fairly common occurrence. Exceptions also need to be approved by another developer, establishing segregation of duties and making APEX-SERT more compliant with regulations.
When creating an exception, the only requirement is to enter a value. Thus, developers can enter any value they wish, from something comprehensive and detailed to simply the letter “x”. APEX-SERT treats both of these examples as valid and advances the workflow to the next level.
It is not uncommon to have several exceptions per application, which creates a significant amount of work for any approver, who will need to evaluate and either approve or reject each of them. We’ve added the ability to bulk add and bulk approve/reject exceptions to make this task easier, but those features do not consider the quality of the exception itself.
This is where AI can and does make a difference.
The code to evaluate the quality of exceptions is amazingly simple. The procedure below is called each time an exception is created. To save cost, if an exception is created for multiple items at the same time, the score and reason are computed only once.
procedure get_exception_score
(
  p_rule_id                in  number
 ,p_exception              in  varchar2
 ,p_exception_score        out number
 ,p_exception_score_reason out varchar2
)
is
  l_valid_exceptions varchar2(4000);
  l_summary          clob;
begin
  -- determine score of exception using AI
  if reports_pkg.get_pref_value(p_pref_key => 'AI_ENABLED') = 'Y' then
    -- get the list of valid exceptions
    select valid_exceptions into l_valid_exceptions from rules where rule_id = p_rule_id;
    -- prepare the prompt and send to AI
    if l_valid_exceptions is not null then
      l_summary := apex_ai.generate
      (
        p_prompt            => 'Evaluate the quality of the following exception: ' || p_exception
       ,p_system_prompt     => replace(reports_pkg.get_pref_value(p_pref_key => 'AI_EXCEPTION_PROMPT'), '{VALID_EXCEPTIONS}', l_valid_exceptions)
       ,p_service_static_id => reports_pkg.get_pref_value(p_pref_key => 'AI_STATIC_ID')
      );
      -- log the results
      apex_debug.message(l_summary);
      -- parse the AI response to get the score and reason
      select
        json_value(l_summary, '$.score')  as score
       ,json_value(l_summary, '$.reason') as reason
      into
        p_exception_score
       ,p_exception_score_reason
      from
        dual;
    end if;
  end if;
end get_exception_score;
Basically, the code will call the LLM using apex_ai.generate. The LLM will then return only a JSON document that is parsed out via SQL and returned to the OUT parameters. That’s it!
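For illustration, calling the procedure might look like the following anonymous block. The rule ID and exception text here are made up, and apex_ai.generate requires a configured AI service and an APEX session context to actually run:

declare
  l_score        number;
  l_score_reason varchar2(4000);
begin
  -- hypothetical rule ID and exception text, for illustration only
  get_exception_score
  (
    p_rule_id                => 42
   ,p_exception              => 'This item is rendered as hidden and is never used in a SQL statement.'
   ,p_exception_score        => l_score
   ,p_exception_score_reason => l_score_reason
  );
  dbms_output.put_line('Score: '  || l_score);
  dbms_output.put_line('Reason: ' || l_score_reason);
end;
/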
With AI, it’s less about the code and more about the prompt. Writing a clear, concise prompt will go a long way when working with LLMs. In fact, I typically ask AI to get me started by creating a prompt for me, and then I edit and modify it as needed. Here’s the one that is used to evaluate the exceptions in APEX-SERT:
You are an Oracle IT security expert reviewing an exception provided by a user in
response to a flagged vulnerability from the APEX-SERT tool. The user believes
the flag is a false positive.
You are provided with a list of acceptable exceptions for this rule:
{VALID_EXCEPTIONS}
Evaluate how well the user's exception aligns with the acceptable exceptions.
Assign a score from 1 to 5, where:
1 = Poorly written or irrelevant exception
3 = Partially acceptable, needs improvement or clarification
5 = Clearly aligns with acceptable exceptions and is well-justified
Return only a JSON document in the following format:
{ "score": <integer from 1 to 5>, "reason": "" }
Keep the explanation concise (1–2 sentences) and do not return any additional
commentary outside the JSON.
This prompt tells the LLM to compare each exception to a set of probable exceptions for a given rule and produce a score from 1 to 5: a 1 indicates that the exception is of poor quality and nowhere near a match to the anticipated ones, while a 5 indicates that it is of high quality and a very close match. The {VALID_EXCEPTIONS} token will be replaced with the valid exceptions for the specific rule in question each time. Based on the instructions, the LLM will return a simple JSON document that can easily be parsed out.
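To see how that JSON document gets unpacked, here is a self-contained sketch of the same json_value parsing used in the procedure, run against a hard-coded sample response. The score and reason values are invented for illustration only:

declare
  -- a sample response in the format the prompt requests
  l_summary clob := '{ "score": 4, "reason": "The exception closely matches an acceptable exception for this rule." }';
  l_score   number;
  l_reason  varchar2(4000);
begin
  -- extract the score and reason from the JSON document
  select
    json_value(l_summary, '$.score')  as score
   ,json_value(l_summary, '$.reason') as reason
  into
    l_score
   ,l_reason
  from
    dual;
  dbms_output.put_line(l_score || ': ' || l_reason);
end;
/

Because the prompt instructs the LLM to return only the JSON document with no additional commentary, this simple extraction is all the parsing that is needed.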
Using this score, a reviewer can readily identify which exceptions likely need to be rejected as well as those that are likely valid. This reduces the amount of time it will take a reviewer to approve or reject a batch of exceptions considerably, as they can use the score and reason that AI provides as guidance.
The exception scoring has been added to the Exceptions Report, which has been relocated under the Exceptions button on the main evaluation page. Here’s a preview of what it will look like:
We stopped short of allowing AI to reject poorly written exceptions automatically, as we want to be sure that we gradually and pragmatically apply AI to APEX-SERT. I can see adding this feature at some point, with the option to disable it if desired.
Next Steps
Once the AI-enabled version of APEX-SERT is released - which should be soon - enabling the AI portion will be 100% optional and opt-in. You will not be forced to use the AI feature at all, and the rest of APEX-SERT will continue to function without it for the foreseeable future.
You will need to provide your own LLM to use the AI features. As you know, AI calls are not free, so each user will be responsible for the associated costs.
We will continue to explore other ways to augment APEX-SERT with additional AI capabilities. One area we are likely to explore is deeper analysis of PL/SQL code, including both code embedded in APEX and named packages, procedures, and functions. Using an LLM and a well-defined prompt should produce better results than writing PL/SQL code to evaluate other PL/SQL code. The AI is just better at detecting string patterns and can do a much more comprehensive job with tasks like this.
Let me know if there’s another area of APEX-SERT that you think AI would be a good fit for.
Title photo by Lianhao Qu on Unsplash
Written by

Scott Spendolini
"Bumpy roads lead to beautiful places" Senior Director @ Oracle 🧑💻 #orclapex fan since '99 🛠️ https://spendolini.blog 💻 Oracle Ace Alumni ♠️ Bleed Syracuse Orange 🍊 Golf when I can ⛳️ Austin, TX 🎸🍻 Views are my own 🫢