How We Built Our Internal E-E-A-T Content Analyzer with Froala

Idera Dev Tools
5 min read

Our team launched the E-E-A-T & Helpful Content Analyzer a few weeks ago. If you've used it, you may have noticed it's built with the Froala editor. That was one of the first technical decisions we made, and an easy one.

Our content team needed to move away from subjective, manual content reviews. We wanted an automated tool to check our work against Google’s standards. This project also serves as a great example of how easily Froala integrates into modern, AI-powered applications.

This article details the technical thinking behind the tool. We’ll cover why we built it and how its components, including Froala and the DeepSeek API, work together to provide a seamless experience.

Key Takeaways

  • For AI content analysis, a <textarea> is insufficient; preserving HTML structure with an editor like Froala provides essential context.

  • Getting structured data from an LLM requires strict instructions; embedding formatting rules directly into the prompt is non-negotiable for a predictable response.

  • A simple html.get() method call is all that’s needed to pull clean, complete HTML from the Froala editor for processing.

  • You don’t need complex parsers for LLM responses if you enforce a consistent format; a simple regular expression can extract the data you need.

  • Focus your engineering effort on the core problem — in this case, the AI prompt — not on rebuilding commodity components like a rich text editor.

The Need for an Objective Tool

Our team used to manually check an article against Google’s E-E-A-T and Helpful Content guidelines. It was slow and, worse, subjective. One person’s interpretation of “authoritativeness” could easily differ from another’s.

We needed an objective, automated tool to give us structured feedback. The goal wasn’t to replace our writers but to give them a consistent feedback loop. The tool had to do three things well.

  1. Accept rich text with all its formatting.

  2. Analyze it against specific E-E-A-T criteria.

  3. Present structured, actionable feedback.

How the Analyzer is Structured

The analyzer is a simple two-column layout. Input on the left, results on the right. There’s no need to over-engineer the UI when a clean workflow is the priority. We used basic CSS Flexbox to keep it responsive and straightforward.

.main-layout {
    display: flex;
    flex-grow: 1;
    padding: 20px;
    gap: 25px;
}

.column {
    flex: 1;
}

.input-column {
    order: 1;
}

.results-column {
    order: 2;
}

This keeps a clear, logical separation between the user’s content and the AI’s analysis.

Why a <textarea> Wasn’t Enough

For the content input, a standard <textarea> element was a non-starter. Modern articles depend on headings, lists, links, and other formatting. These structural elements are critical for readability and are a key signal in Google’s “Helpful Content” evaluation. If you send plain text to an AI for analysis, you lose half the context.
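To make the loss concrete, here is an illustrative sketch. The `stripTags` helper is hypothetical (not part of the analyzer's code); it mimics what happens when you reduce an article to the plain text a `<textarea>` would hold.

```javascript
// Illustrative only: a naive tag-stripper showing what plain text loses.
// stripTags is a hypothetical helper, not part of the analyzer's code.
function stripTags(html) {
    return html.replace(/<[^>]*>/g, ' ').replace(/\s+/g, ' ').trim();
}

const article = '<h2>Setup</h2><ul><li>Install the CLI</li><li>Run init</li></ul>';

// The heading and list structure vanish; the AI can no longer tell
// a section title from a bullet point.
console.log(stripTags(article)); // "Setup Install the CLI Run init"
```

Once the markup is gone, a section heading and a bullet point are indistinguishable, and the structural signals Google's guidelines care about are invisible to the model.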

This is where we plugged in the Froala editor. It’s designed to handle complex, structured content out of the box. This is all the code required to embed the editor and configure its toolbar.

document.addEventListener('DOMContentLoaded', () => {
    try {
        new FroalaEditor('#editor', {
            placeholderText: 'Paste your article HTML or text content here...',
            heightMin: 150,
            toolbarButtons: ['bold', 'italic', 'underline', '|', 'align', 'formatOL', 'formatUL', '|', 'insertLink', 'undo', 'redo'],
        });
    } catch (e) {
        console.error("Froala Editor initialization failed:", e);
    }
});

With the editor in place, all the important structural elements of an article are preserved. When it’s time to send the content for analysis, grabbing the complete, clean HTML is a single method call.

let content = FroalaEditor.INSTANCES[0].html.get();

This returns the editor's complete HTML with its structure intact, ready to send for analysis. (Indexing `INSTANCES[0]` assumes a single editor on the page.)

Engineering a Predictable AI Prompt

Getting a consistent, structured response from a large language model requires giving it precise instructions. Anyone who has worked with an LLM API knows the pain of getting back unstructured, unpredictable text. We solved this with a strict system prompt that constrains the AI's behavior and a user prompt that injects the article content.

The system prompt tells the AI to act as an expert content analyst and defines the exact output structure, including the use of Markdown and specific Score: and Priority: formats. This formatting is the most critical part. It turns the AI’s free-form response into something we can reliably parse.

Our buildAnalysisPrompt function wraps the article content with these instructions.

function buildAnalysisPrompt(content) {
    // The user prompt includes strict formatting instructions.
    return `Please analyze the following article content based on Google's E-E-A-T and Helpful Content guidelines.

    Follow these formatting instructions precisely for each category:
    1. Provide a clear Markdown heading (e.g., "## Content Quality").
    2. Assess the content for that category.
    3. Offer specific, actionable recommendations.
    4. Include the score line EXACTLY as: "**Score: [score]/10**"
    5. Include the priority line EXACTLY as: "**Priority: [High/Medium/Low]**"

    --- START ARTICLE CONTENT ---
    ${content}
    --- END ARTICLE CONTENT ---`;
}

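Putting the two prompts together, the request takes the familiar system/user message shape of a chat-completions API. The sketch below assumes DeepSeek's OpenAI-compatible format; the model name, temperature, and shortened system prompt are illustrative, not our exact production values.

```javascript
// Sketch of the request body for a chat-style API (assumed
// OpenAI-compatible format). Model name and system prompt are
// illustrative placeholders.
function buildRequestBody(systemPrompt, userPrompt) {
    return {
        model: 'deepseek-chat', // assumed model identifier
        messages: [
            { role: 'system', content: systemPrompt }, // fixed analyst instructions
            { role: 'user', content: userPrompt }      // article wrapped by buildAnalysisPrompt
        ],
        temperature: 0.2 // a low temperature helps keep the formatting predictable
    };
}
```

Keeping the formatting rules in both the system prompt and the user prompt is deliberate redundancy: it makes the `Score:` and `Priority:` lines reliable enough to parse mechanically.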

The Output: Displaying Actionable Insights

After the analysis is complete, the results are formatted and displayed. The raw Markdown response from the API is processed to create a score table and a detailed feedback section.

We use a regular expression to find and extract the score from each category in the Markdown text.

const scoreRegex = /\*\*Score:\s*(\d{1,2})\/10\*\*/;

This allows for the automatic creation of a summary table. The detailed text feedback is then converted from Markdown to HTML for clear presentation, providing the specific recommendations that make the tool useful.
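To show how little parsing the enforced format requires, here is a hedged sketch of a parser built on that regex. It splits the Markdown on the `## ` headings the prompt mandates and pulls the score out of each section; `parseScores` is an illustrative helper, not the analyzer's exact code.

```javascript
// Illustrative parser: splits the AI's Markdown response into the
// "## Heading" sections the prompt enforces, then extracts each score.
function parseScores(markdown) {
    const scoreRegex = /\*\*Score:\s*(\d{1,2})\/10\*\*/;
    const results = [];
    for (const section of markdown.split(/^## /m).slice(1)) {
        const heading = section.split('\n', 1)[0];
        const match = section.match(scoreRegex);
        if (match) {
            results.push({ category: heading.trim(), score: Number(match[1]) });
        }
    }
    return results;
}

const sample = '## Content Quality\nGood depth.\n**Score: 8/10**\n**Priority: Low**\n' +
               '## Expertise\nAdd author bios.\n**Score: 6/10**\n**Priority: High**\n';

console.log(parseScores(sample));
// → [ { category: 'Content Quality', score: 8 }, { category: 'Expertise', score: 6 } ]
```

Because the prompt pins down the heading and score syntax, this is the entire "parsing layer" — no Markdown AST or third-party parser needed for the summary table.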

A Real Tool for a Real Problem

We built the analyzer to solve an internal bottleneck. It’s a practical example of how a robust front-end component like Froala is critical for building useful, AI-driven tools.

By combining a solid editor with a capable API, we created a workflow that helps our team produce better content. It’s not about finding shortcuts; it’s about using the right tools for the job so you can focus on the actual work.

This article was published on the Froala blog.


Written by

Idera Dev Tools

Idera, Inc.’s portfolio of Developer Tools includes high-productivity tools for building applications on powerful, scalable technology stacks.