From Hall Tickets to HTTP: Powering University Exams in a Digital-first World


🧠 Introduction:
In one of my recent projects, I got to work on something pretty exciting—a full-blown online learning platform for universities. Not just your typical student portal, but an end-to-end system that handles almost everything in a student’s academic journey, from the time they show interest in a course to the day they graduate.
We built it to support a whole bunch of roles—students, faculty, program heads, document verifiers, counsellors, exam admins, external evaluators—you name it. We even connected it with external CRMs and partner channels so that leads could flow in from different sources. Think of it like a university's digital nervous system.
Out of all the modules, the one that really stood out (and kept me up at night 😅) was the Examination and Evaluation System. Why? Because it had to be rock-solid, scalable, and smart enough to handle complex academic rules without messing up someone’s final grade.
I worked on the backend development for this part, using Java Spring Boot and PostgreSQL. Some of the key challenges we tackled included:
Setting up dynamic assessment configurations and exam schedules
Handling internal, external, and even re-evaluation flows
Managing answer sheet uploads and validation logic
Applying moderation rules, grace marks, and relative grading
Calculating CGPA/SGPA using program-specific rules
Implementing result locks, audit trails, and versioning
In this blog series, I’ll take you behind the scenes of how we built the exam system—covering architecture, domain design, grading logic, workflows, and even some Drools-based rule magic. If you’ve ever wondered what powers those "Submit Exam" buttons or how grades are processed at scale, this should be a fun ride.
✅ Part 1: Modeling the Exam World
→ Designing the domain: students, courses, assessments, evaluators, and attempts
Key entities and relationships
Handling academic year/semester logic
Supporting multi-evaluation (internal, external, re-eval)
How we kept it flexible with minimal coupling
🧠 Part 2: Exam Configuration & Scheduling Logic
→ Making the exam system dynamic for admins and programs
Assessment creation flow
Course-assessment mappings
Timed exams, attempt limits, and visibility settings
Audit-ready scheduling updates
📤 Part 3: Submission Engine – How Students Upload Answer Sheets
→ Handling multipart submissions, file metadata, and validation rules
API design for answer uploads
Validation of formats, timestamps, and re-uploads
S3 or Blob storage strategy
Mapping answers to questions
🔍 Part 4: Evaluation Workflow – From Internal to External
→ Enabling flexible evaluator assignments and tracking evaluations
Handling multiple evaluators per answer
Locking evaluations to avoid overwrite
Re-evaluation vs moderation
Marking comments and visibility
🎯 Part 5: Grading Engine – Where the Real Magic Happens
→ Calculating marks, applying rules, and generating grades
Raw marks to grade conversion
Grace mark application
Relative grading using Drools
Program-wise grading policies
📊 Part 6: Final Results, Locking, and CGPA Calculation
→ Generating summaries and enforcing academic policies
How we compute SGPA/CGPA
Locking results before publication
Handling edge cases (failures, absentees, incomplete evaluations)
Audit trails and rollback scenarios
🚀 Part 7: What We Learned (and What We’d Do Differently)
→ Scaling tips, lessons from live exam events, and what broke
Mistakes, surprises, and last-minute fires
Optimizations in query design and data access
Load testing, concurrency, and evaluator load balancing
Thoughts on future improvements
Let’s Begin
✅ Part 1: Modeling the Exam World
Designing the backbone for assessments, attempts, and academic flow
When building the exam module, the first big decision was how to model the academic ecosystem—in a way that was both flexible (to support different programs and grading rules) and stable (to prevent things like duplicate evaluations, incorrect grades, or dangling submissions).
Let’s break down how we approached it.
🎓 1. The Core Idea: What Is an “Exam” in This World?
It’s not just a date and a question paper. In our system, an exam (technically an Assessment) is:
Tied to a course (e.g., “Data Structures”, “Problem Solving with C”, etc.)
Configurable per academic year & semester
Made up of multiple components: internal evaluation, externals, re-evaluation, practicals, etc.
Governed by rules for marks, grading, grace, moderation, etc.
So we started with some core entities:
Key Entities:
Assessment – master config for an exam (type, startTime, endTime, list of programs, etc.)
AssessmentCourseMapping – ties a course to an assessment for a given semester/program
StudentAssessmentAttempt – a student’s individual submission for that assessment (holds each question and answer from the answer sheet)
EvaluationSummary – evaluator marks, comments, and status
StudentAssessmentSummary – final computed outcome after all evaluations
StudentGrades – final break-up of the computed outcome after all evaluations/grading/moderation
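To make these entities concrete, here's a minimal JPA-style sketch of the first two. It's only illustrative: the field names, types, and annotations are assumptions, not our actual schema.

```java
// Illustrative sketch of the core entities. Names and fields are assumptions, not the real schema.
import jakarta.persistence.*;
import java.time.LocalDateTime;

enum AssessmentType { ESE, MID_SEM, CA }

@Entity
@Table(name = "assessment")
class Assessment {
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id;

    @Enumerated(EnumType.STRING)
    AssessmentType type;                // e.g. ESE, MID_SEM, CA

    LocalDateTime startTime;
    LocalDateTime endTime;
    String academicYearSemester;        // e.g. "2024-Jan"

    String createdBy;                   // audit fields
    LocalDateTime createdDate;
}

@Entity
@Table(name = "assessment_course_mapping")
class AssessmentCourseMapping {
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    Assessment assessment;              // one assessment maps to many courses

    String courseId;
    String programId;
    String batchId;
}
```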
🧩 2. Relationships that Matter
To keep things manageable but extensible, we separated concerns:
One Assessment could map to many Courses
Each student could have multiple attempts (within allowed limits)
We used a composite primary key in some places (student_id + course_id + assessment_id + attempt_no) to avoid duplicates and make querying efficient (see the sketch below).
We also kept versioning in mind by storing audit trails using created_by, created_date, etc., and by modeling attempt_status, evaluation_status, and even grace_applied as separate flags.
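For illustration, here's how that composite key could look as a JPA @Embeddable id on the attempt entity. Again a sketch under assumed names; the equals/hashCode pair is what makes the key usable.

```java
// Sketch of the composite primary key. Entity and field names are assumptions.
import jakarta.persistence.*;
import java.io.Serializable;
import java.time.LocalDateTime;
import java.util.Objects;

@Embeddable
class AttemptId implements Serializable {
    Long studentId;
    String courseId;
    Long assessmentId;
    Integer attemptNo;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AttemptId that)) return false;
        return Objects.equals(studentId, that.studentId)
                && Objects.equals(courseId, that.courseId)
                && Objects.equals(assessmentId, that.assessmentId)
                && Objects.equals(attemptNo, that.attemptNo);
    }

    @Override
    public int hashCode() {
        return Objects.hash(studentId, courseId, assessmentId, attemptNo);
    }
}

@Entity
@Table(name = "student_assessment_attempt")
class StudentAssessmentAttempt {
    @EmbeddedId
    AttemptId id;                 // duplicates become impossible at the key level

    String attemptStatus;         // e.g. SUBMITTED, UNDER_EVALUATION, EVALUATED
    String evaluationStatus;
    Boolean graceApplied;

    String createdBy;             // audit trail
    LocalDateTime createdDate;
}
```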
🔁 3. Handling Academic Year & Semester
Here’s a challenge: academic sessions in universities don’t always follow neat calendar patterns. So instead of storing literal dates, we modeled them like:
academic_year_semester = "2024-Jan"
This let us:
Query everything tied to a semester easily
Lock/unlock results for a specific batch
Separate out calculations like SGPA/CGPA cleanly
Also, our foreign keys to Program and Batch ensured we always knew who the rules applied to.
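Keeping the semester as a plain key also made batch queries trivial. A hypothetical Spring Data repository for the summary table might look like this (the method and field names are assumptions):

```java
// Hypothetical Spring Data repository. Entity and method names are assumptions.
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.List;

// StudentAssessmentSummary is the summary entity described above
interface StudentAssessmentSummaryRepository extends JpaRepository<StudentAssessmentSummary, Long> {

    // everything tied to one semester, e.g. "2024-Jan"
    List<StudentAssessmentSummary> findByAcademicYearSemester(String academicYearSemester);

    // scoped to a program + batch, handy for lock/unlock and SGPA/CGPA runs
    List<StudentAssessmentSummary> findByAcademicYearSemesterAndProgramIdAndBatchId(
            String academicYearSemester, String programId, String batchId);
}
```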
🧮 4. Supporting Evaluation Logic
Each student’s attempt was broken down like this:
StudentAssessmentAttempt: raw submission (uploaded file, metadata, etc.)
EvaluationSummary: evaluator marks + comments
StudentAssessmentSummary: merged marks
StudentSemesterGrade: merged result, moderation, grade, etc.
We needed to support internal + re-eval logic dynamically. That meant our model had to:
Track which evaluator worked on what
Lock records after marking
Support rule-based mark adjustments (grace, moderation, etc.)
🔐 5. Guardrails: Preventing Data Inconsistencies
We added:
Unique constraints on submission and evaluation entries
Enum validations on statuses
Partial indexes (e.g., only fetch non-archived submissions)
Audit fields on all core tables
Soft deletes for cases like duplicate uploads
These helped us avoid:
Double submissions
Evaluators accidentally overriding each other’s work
“Ghost” attempts showing up in result generation
🔄 TL;DR: What Worked Well
✅ Entity separation made the logic modular
✅ Composite keys helped prevent duplication
✅ Semester-based design made batch operations easy
✅ Enum-driven evaluation logic gave us flexibility
✅ Audit fields saved us during post-mortems 😅
🧠 Part 2: Exam Configuration & Scheduling Logic
Giving admins the tools to set up complex exams (without needing a developer every time)
After getting the core data model in place, the next big step was making the exam system configurable. Universities deal with a lot of variations:
Different types of assessments (written, viva, practical)
Programs with their own evaluation workflows
Internal/external/re-evaluation required in some, not in others
Rules that change every semester
So we couldn’t just hardcode everything. We needed a flexible way for admins to create and schedule assessments, assign evaluators, and tweak evaluation rules — all from the UI, without pushing code.
🏗️ 1. Dynamic Assessment Setup
We introduced an Assessment Configuration interface where admins could:
Create assessments for specific courses and semesters
Define the type (e.g., End-Semester, Mid-Sem, CA)
Specify evaluator roles (internal, external, re-evaluator)
Set max attempts, pass marks, grace rules, and more
This mapped to the backend assessment and assessment_course_mapping entities.
Example: A config JSON behind the scenes
{
  "assessmentType": "ESE",
  "courseId": "DS101",
  "academicYearSemesterId": "2024-EVEN",
  "maxAttempts": 2,
  "passMark": 40,
  "totalMarks": 100,
  "examSchema": {
    "MCQ": { "unit": 1, "questionCount": 10, ... },
    "ScenarioBased": { "unit": 1, "questionCount": 10, ... }
  },
  "graceAllowed": true,
  "graceLimit": 5
}
We stored these configs and plugged them into downstream workflows like evaluation and grading.
🗓️ 2. Scheduling & Visibility Logic
Admins could also define:
Start and end dates for submission
Whether late submissions are allowed
When an assessment becomes visible to students
Auto evaluation or manual evaluation
This meant we had to:
Validate overlapping assessments for the same course
Enforce deadlines (e.g., block submission APIs after due date)
Use dates and time slots to control when exams appear on the student dashboard, per the planned schedule
Tech Tip:
We cached active assessments using a Caffeine cache so that we could fetch the current active schedule per student instantly, without hitting the DB repeatedly.
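A minimal version of that setup could look like the following; the cache name, size, and TTL are assumptions for illustration, not our exact values.

```java
// Sketch of the Caffeine-backed cache for active assessments. Cache name, size, and TTL are assumptions.
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import java.time.Duration;
import java.util.List;

@Configuration
class CacheConfig {
    @Bean
    CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager("activeAssessments");
        manager.setCaffeine(Caffeine.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(Duration.ofMinutes(5)));   // short TTL keeps schedules fresh
        return manager;
    }
}

@Service
class AssessmentScheduleService {
    // the DB is hit only on a cache miss; subsequent dashboard loads are served from memory
    @Cacheable(cacheNames = "activeAssessments", key = "#studentId")
    public List<Assessment> activeAssessmentsFor(Long studentId) {
        return List.of();   // placeholder: real code would query the assessment + mapping tables
    }
}
```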
👥 3. Assigning Evaluators Dynamically
In many universities, evaluators (especially externals) aren’t known in advance. So we built:
An evaluator assignment API for program heads to allocate evaluators dynamically
Role-based checks (e.g., an internal can’t see external marks)
A fallback logic in case no evaluator is assigned (e.g., pool-based auto-assignment)
We tracked assignments in an evaluator_mapping table tied to:
assessment_id
evaluation_type
evaluator_id
course + batch + program (scope level)
🧩 4. Rules Without Hardcoding
Some programs wanted:
Relative grading
Custom grace mark policies
Fixed evaluators per student group
Program-specific pass marks or bonus logic
Instead of bloating our service layer, we designed a rule plugin system using Drools (which I’ll dive deeper into in Part 5).
For now, we just made sure our config tables could store:
JSON-based rule inputs (e.g., thresholds, grace percentages)
Rule activation flags (is_grace_enabled, is_moderation_required, etc.)
🔐 5. Guardrails & Validation
To prevent chaos in the UI, we built backend validations like:
Don’t allow duplicate assessments for the same course + semester + type
Don’t allow submission windows in the past
Warn if evaluator count is zero
Warn if grace rules are enabled but graceLimit is not set
Also, all config updates were audited so we could track changes if anything went wrong.
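As one example of these guardrails, the duplicate-assessment check can be a simple existence lookup before saving. The repository method below is hypothetical and only shows the idea.

```java
// Hypothetical guardrail: reject a duplicate assessment for the same course + semester + type.
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

interface AssessmentCourseMappingRepository extends JpaRepository<AssessmentCourseMapping, Long> {
    // derived query over the mapping table; the method name is an assumption
    boolean existsByCourseIdAndAssessmentAcademicYearSemesterAndAssessmentType(
            String courseId, String academicYearSemester, AssessmentType type);
}

@Service
class AssessmentConfigValidator {

    private final AssessmentCourseMappingRepository mappings;

    AssessmentConfigValidator(AssessmentCourseMappingRepository mappings) {
        this.mappings = mappings;
    }

    void validateNew(String courseId, String academicYearSemester, AssessmentType type) {
        if (mappings.existsByCourseIdAndAssessmentAcademicYearSemesterAndAssessmentType(
                courseId, academicYearSemester, type)) {
            throw new IllegalStateException(
                    "An assessment of this type already exists for this course and semester");
        }
    }
}
```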
✅ What We Achieved
Program admins could create, configure, and schedule assessments without dev intervention
Evaluators could be added dynamically, with the right access boundaries
All logic downstream (submissions, evaluations, grading) adapted based on the configuration
We set the stage for rule-based evaluation workflows
📤 Part 3: Submission Engine – How Students Upload Answer Sheets
Designing a stress-proof system for exam submissions (a.k.a. when 1,000 students hit "Upload" at once)
Once assessments were created and scheduled, it was time for the real deal: students submitting their answers.
This is one of the most sensitive pieces of the entire system. If uploads fail, or answers go missing, the trust is broken. So we had to make sure the submission engine was fast, reliable, and auditable—even under load.
Here’s how we built it.
🧾 1. The Student Submission Flow
From a student’s perspective, the process was simple:
Log in to the portal.
View the list of active assessments
Write answers inside the answer block, or upload a scanned PDF or image file (sometimes multiple files)
Click Submit and get a confirmation
But behind the scenes, here’s what actually happened...
⚙️ 2. Backend Architecture for Uploads
We exposed a secure, multipart API like this:
POST /api/assessment/submit
Headers: Authorization, Content-Type: multipart/form-data
Payload: {
studentId,
courseId,
assessmentId,
attemptNumber,
files[], // multiple files allowed
metadata: JSON // page count, file size, etc.
}
Backend Steps:
✅ Validate submission window (check if it’s within allowed time)
✅ Validate student eligibility (enrolled in course, attempt not exceeded)
✅ Check for duplicate submission (based on student + course + assessment + attempt)
✅ Store files in S3 or equivalent object storage with unique keys
✅ Save metadata and map the files to the attempt
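Here's a minimal sketch of how such an endpoint can be wired up in Spring. The service and receipt types are assumptions, not the actual code; the point is the multipart shape and where the checks live.

```java
// Sketch of the submission endpoint. DTO, service, and receipt names are assumptions.
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;
import java.util.List;

record SubmissionReceipt(String attemptReference, int filesStored) {}

interface SubmissionService {
    SubmissionReceipt submit(Long studentId, String courseId, Long assessmentId,
                             Integer attemptNumber, List<MultipartFile> files, String metadataJson);
}

@RestController
@RequestMapping("/api/assessment")
class SubmissionController {

    private final SubmissionService submissionService;

    SubmissionController(SubmissionService submissionService) {
        this.submissionService = submissionService;
    }

    @PostMapping(value = "/submit", consumes = "multipart/form-data")
    ResponseEntity<SubmissionReceipt> submit(
            @RequestParam Long studentId,
            @RequestParam String courseId,
            @RequestParam Long assessmentId,
            @RequestParam Integer attemptNumber,
            @RequestPart("files") List<MultipartFile> files,
            @RequestPart(value = "metadata", required = false) String metadataJson) {

        // window, eligibility, and duplicate checks happen inside the service;
        // files go to object storage, metadata and mappings go to the DB
        SubmissionReceipt receipt = submissionService.submit(
                studentId, courseId, assessmentId, attemptNumber, files, metadataJson);
        return ResponseEntity.ok(receipt);
    }
}
```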
🗂️ 3. Storing & Structuring Submissions
We didn’t store files in the DB. Instead, we used:
- AWS S3 (or Azure Blob/GCS) with a path pattern like:
/bucket/tenant/studentId/courseId/assessmentId/attemptNumber/filename.pdf
This gave us:
Clear folder-level access control
Easy archival logic per semester
Predictable file URLs for evaluators (secured with pre-signed URLs)
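With the AWS SDK for Java v2, the upload and the evaluator-facing pre-signed link look roughly like this. The bucket name, key layout, and expiry are assumptions for the sketch.

```java
// Sketch of the object-storage strategy. Bucket name, key pattern, and expiry are assumptions.
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import java.net.URL;
import java.time.Duration;

class AnswerSheetStore {

    private final S3Client s3 = S3Client.create();
    private final S3Presigner presigner = S3Presigner.create();
    private final String bucket = "exam-submissions";   // assumption

    String store(String tenant, long studentId, String courseId,
                 long assessmentId, int attemptNo, String fileName, byte[] bytes) {
        // mirrors the /tenant/studentId/courseId/assessmentId/attemptNumber/filename.pdf layout
        String key = String.format("%s/%d/%s/%d/%d/%s",
                tenant, studentId, courseId, assessmentId, attemptNo, fileName);
        s3.putObject(PutObjectRequest.builder().bucket(bucket).key(key).build(),
                RequestBody.fromBytes(bytes));
        return key;
    }

    // evaluators never see the raw object URL, only a time-boxed pre-signed link
    URL presignedReadUrl(String key) {
        GetObjectRequest get = GetObjectRequest.builder().bucket(bucket).key(key).build();
        GetObjectPresignRequest presign = GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(15))
                .getObjectRequest(get)
                .build();
        return presigner.presignGetObject(presign).url();
    }
}
```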
🧪 4. File Validation & Integrity Checks
To prevent junk or incorrect uploads, we performed:
File type validation (.pdf, .jpeg, etc.)
Max file size check (e.g., 10 MB per file)
Total page count check (optional, but used in oral/practical evaluations)
We also checked if the same file was uploaded multiple times (hash-based duplicate detection).
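The duplicate check itself is just a content hash compared against hashes already stored for that attempt. A small sketch (the stored-hash lookup is assumed):

```java
// Sketch of hash-based duplicate detection. Where the existing hashes come from is assumed.
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Set;

class DuplicateFileChecker {

    static String sha256Hex(byte[] fileBytes) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(digest.digest(fileBytes));
    }

    /** true if the same content was already uploaded for this attempt */
    static boolean isDuplicate(byte[] fileBytes, Set<String> existingHashes) throws NoSuchAlgorithmException {
        return existingHashes.contains(sha256Hex(fileBytes));
    }
}
```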
🔄 5. Resubmissions, Late Uploads & Errors
Let’s face it—students will forget, get disconnected, or upload the wrong file.
So we allowed:
Resubmissions, but only until the deadline
Auto-incremented attempt numbers (with a cap)
Optional grace period for late uploads (configurable)
We tracked every submission with:
student_id + course_id + assessment_id + attempt_no
And we flagged late attempts using an is_late column.
📋 6. Linking Submissions to Evaluators
Once a submission was successful, it was:
Added to the evaluation queue
Marked as READY_FOR_EVALUATION in StudentAssessmentSummary
Mapped to the evaluator(s) based on:
The assigned internal/external evaluator
Or random-distribution logic (the evaluator requests an answer sheet, and a random student’s answer sheet is picked)
A per-day limit on the number of answer sheets an evaluator can evaluate
Once an evaluator opens an answer sheet, they assign marks against the rubrics provided for each question and submit the marks.
✅ What Worked Well
Multipart upload APIs were fast and fault-tolerant
Using S3 + metadata storage kept things scalable
Attempt-based design prevented accidental overwrites
Audits saved us during re-evaluation requests
🔍 Part 4: Evaluation Workflow – From Internal to External
How we built a flexible, secure, and trackable marking system for student submissions
Once students submit their answer sheets, the spotlight shifts to evaluators — internal faculty, external examiners, sometimes even a second evaluator for rechecks. The challenge here was designing a system that was:
Flexible enough to handle different evaluation rules across programs
Secure so evaluators couldn't overwrite or access unauthorized data
Transparent for moderation and audit purposes
And of course, easy to use for the academic staff
Let’s walk through how we tackled it.
👥 1. Who Evaluates What?
Each assessment had a dynamic evaluation flow. For example:
A typical End Semester Exam (ESE) required:
One internal evaluator
One external evaluator
Optional re-evaluation if the student requested
So we modeled this with:
enum EvaluationType {
INTERNAL,
EXTERNAL,
RE_EVALUATION
}
And each evaluator was mapped in a separate table (evaluator_mapping) with fields like:
assessment_id
evaluation_type
evaluator_id
course_id, program_id, batch_id (scope)
This let us assign evaluators per course/batch instead of globally.
🗃️ 2. The Evaluation Summary Table
Each time an evaluator marked a submission, we stored the result in a dedicated table:
evaluation_summary included:
student_id, assessment_id, attempt_no
evaluation_type (INTERNAL / EXTERNAL / RE_EVALUATION)
marks (overall or section-wise)
comments
status (IN_PROGRESS, UNDER_EVALUATION, EVALUATED)
evaluator_id, evaluated_at timestamps
This kept the raw evaluator input separate from the final grade.
🔐 3. Locking Mechanism
We implemented soft locks to ensure that:
Only one evaluator could work on a submission at a time
Once marked, it couldn’t be accidentally overwritten
Re-evaluation didn’t impact the original marks
Locks were managed using a combination of:
Status flags (LOCKED, COMPLETED)
Per-evaluator submission queues
Optional DB-level row locks (SELECT FOR UPDATE in PostgreSQL)
This helped avoid data races, especially when multiple evaluators were working simultaneously during exam peaks.
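In Spring Data terms, the DB-level variant can be expressed as a pessimistic lock held for the duration of the marking transaction. A sketch under assumed entity and method names:

```java
// Sketch of the row-lock approach (SELECT ... FOR UPDATE). Entity, repository, and field names are assumptions.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import java.util.Optional;

@Entity
class EvaluationSummary {               // trimmed stand-in for the evaluation_summary table
    @Id Long id;
    Long studentId; Long assessmentId; Integer attemptNo;
    String evaluationType; Integer marks; String status;
}

interface EvaluationSummaryRepository extends JpaRepository<EvaluationSummary, Long> {

    // translates to SELECT ... FOR UPDATE in PostgreSQL while the transaction is open
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<EvaluationSummary> findByStudentIdAndAssessmentIdAndAttemptNoAndEvaluationType(
            Long studentId, Long assessmentId, Integer attemptNo, String evaluationType);
}

@Service
class EvaluationMarkingService {

    private final EvaluationSummaryRepository evaluations;

    EvaluationMarkingService(EvaluationSummaryRepository evaluations) {
        this.evaluations = evaluations;
    }

    // the row stays locked until this transaction commits or rolls back,
    // so a second evaluator hitting the same record simply waits (or times out)
    @Transactional
    void saveMarks(Long studentId, Long assessmentId, Integer attemptNo, String evaluationType, int marks) {
        EvaluationSummary row = evaluations
                .findByStudentIdAndAssessmentIdAndAttemptNoAndEvaluationType(
                        studentId, assessmentId, attemptNo, evaluationType)
                .orElseThrow();
        if ("EVALUATED".equals(row.status) || "LOCKED".equals(row.status)) {
            throw new IllegalStateException("This evaluation is already finalized");
        }
        row.marks = marks;
        row.status = "EVALUATED";
        evaluations.save(row);
    }
}
```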
🔄 4. Re-evaluation Logic
Re-evaluation could be:
Manual (requested by the student)
Triggered due to a large internal–external marks gap
Part of moderation workflow
We treated re-evaluations as separate evaluation types, maintaining complete history. So you’d see:
| Type | Marks | Evaluator |
| --- | --- | --- |
| INTERNAL | 42 | Prof. A |
| EXTERNAL | 38 | Dr. B |
| RE_EVALUATION | 44 | Prof. C |
Then we applied a rule engine (Drools) to decide:
Final accepted marks (average? highest? re-eval only?)
Whether moderation is required
More on that in Part 5.
🧑🏫 5. Role-Based Access & Views
We ensured that:
Internal evaluators couldn’t see external marks (and vice versa)
Re-evaluators had no access to previous evaluations
Admins had a full view (for moderation or auditing)
Each API was guarded by role + context checks, e.g.:
if (!evaluatorIsMappedTo(studentId, courseId, assessmentId, evaluationType)) {
throw new AccessDeniedException("Not authorized for this evaluation");
}
✍️ 6. Comments & Feedback
Evaluators could:
Leave overall comments
Add question-wise marks
These were stored separately and passed through approval logic before being visible to students.
📈 7. Evaluation Status Tracking
Every attempt moved through states like:
SUBMITTED → UNDER_EVALUATION → EVALUATED
Each transition triggered:
Event logs
Notifications (optional)
Checks for completeness (e.g., both internal + external submitted)
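A small enum-backed transition check was enough to keep these states honest; the allowed-transition map below is a sketch based on the flow above.

```java
// Sketch of the attempt state machine. The allowed-transition map is an assumption based on the flow above.
import java.util.Map;
import java.util.Set;

enum AttemptState {
    SUBMITTED, UNDER_EVALUATION, EVALUATED;

    private static final Map<AttemptState, Set<AttemptState>> ALLOWED = Map.of(
            SUBMITTED, Set.of(UNDER_EVALUATION),
            UNDER_EVALUATION, Set.of(EVALUATED),
            EVALUATED, Set.of()                 // terminal unless a re-evaluation formally reopens it
    );

    boolean canMoveTo(AttemptState next) {
        return ALLOWED.get(this).contains(next);
    }
}
```
Every transition attempt runs through a check like canMoveTo before being persisted, which is also a convenient place to hook in the event logs and notifications.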
✅ What Worked Well
Role-based evaluator logic gave fine-grained control
Soft locks prevented overwrites in real-time workflows
Re-evaluations were fully traceable and didn’t overwrite prior marks
Moderation rules could be added without changing core evaluation code
The evaluator UX was simplified by filtering only “assigned” tasks.
🎯 Part 5: Grading Engine – Where the Real Magic Happens
From raw marks to final grades, with grace, moderation, and rule engines
⚡ 1. Bulk Marks & Grading Calculation
Once all evaluations were finalized, the real challenge was to process results at scale — not just for one student, but for an entire program, semester, and all electives in one go.
Instead of calculating grades student-by-student (which would have been painfully slow and heavy on the DB), we built a bulk processing engine.
How It Worked:
Pulled all eligible students for a program + semester in one batch
Collected their course-wise attempts, marks, evaluations
Processed grade conversion, grace/moderation, SGPA/CGPA in bulk
Stored results back into student_semester_grade and the summary tables
Performance Optimization:
To avoid hammering the database and blocking threads:
We split students into batches of 200 records
Each batch was handed off to an ExecutorService worker
Workers ran in parallel (multi-threaded)
Once a batch finished, results were persisted transactionally
This approach gave us:
✅ Load distribution across multiple threads
✅ Faster execution (parallelism reduced processing time significantly)
✅ Fault isolation (a failure in one batch didn’t crash the whole job)
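Stripped down, the fan-out looked something like the sketch below. The batch size of 200 comes from the approach above; the pool size and the grading service interface are assumptions.

```java
// Sketch of the bulk grading fan-out. Pool size and the GradingService interface are assumptions;
// the batch size of 200 comes from the approach described above.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

interface GradingService {
    // grades one batch and persists it in its own transaction
    void gradeAndPersistBatch(List<Long> studentIds);
}

class BulkGradingJob {

    private static final int BATCH_SIZE = 200;

    void run(List<Long> eligibleStudentIds, GradingService gradingService) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);   // pool size is an assumption
        List<Future<?>> futures = new ArrayList<>();

        for (int i = 0; i < eligibleStudentIds.size(); i += BATCH_SIZE) {
            List<Long> batch = eligibleStudentIds.subList(
                    i, Math.min(i + BATCH_SIZE, eligibleStudentIds.size()));
            futures.add(pool.submit(() -> gradingService.gradeAndPersistBatch(batch)));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        for (Future<?> f : futures) {
            try {
                f.get();                          // surface per-batch failures for reporting
            } catch (ExecutionException e) {
                // log and continue: a failure in one batch does not crash the whole job
            }
        }
    }
}
```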
🎯 2. Moderation & Grace Marks Logic
One of the most debated (and sometimes controversial) parts of grading is moderation and grace. Universities often want to ensure that a student who is just short of passing isn’t failed for a couple of marks.
In our system, moderation rules were explicitly configurable and auditable.
The Rule:
👉 If a student’s final percentage lies between 36% and 40%, they’re given just enough marks to reach exactly 40%, and these extra marks are logged as graceMarksHistory.
So, for example:
A student scoring 38% would be bumped to 40% (+2 grace marks).
A student scoring 36% would be bumped to 40% (+4 grace marks).
But anyone below 36% got no grace (they would still fail).
Drools Rule Snippet (Simplified)
rule "Apply Moderation for Marginal Failures"
when
    $m : StudentMarks(percentage >= 36 && percentage < 40 && graceAllowed == true)
then
    int required = (int) Math.ceil(40 - $m.getPercentage());
    modify($m) {
        setTotal($m.getTotal() + required),
        setGraceApplied(true),
        setGraceMarks(required)
    }
end
Storage in DB
We didn’t just overwrite marks. Every moderated case was:
Recorded in student_semester_grade with fields like graceMarks, isGraceApplied
Logged into graceMarksHistory for audit, showing:
Original marks
Grace marks added
Final updated marks
This made the process transparent, so students, faculty, and admins could see where grace was applied — and importantly, where it wasn’t.
✅ Outcome:
Marginal students got a fair chance to pass.
Admins had full visibility of where grace was applied.
No “mystery marks” — everything was auditable and rule-driven.
🧮 3. Grade Mapping: Absolute or Relative
We supported two grading models:
A. Absolute Grading
Fixed thresholds (e.g., 90+ = A, 80–89 = B, etc.)
These were stored in a grade_mapping table per program/semester.
B. Relative Grading
This depended on class performance (mean, std dev, highest mark, etc.)
We used a rule-driven approach here too:
rule "Grade A if score > classMean + 1.5 * stdDev"
when
    ClassStats($mean : mean > 40, $stdDev : stdDev > 5)
    $m : StudentMarks(score > $mean + 1.5 * $stdDev)
then
    modify($m) { setGrade("A") }
end
Drools rules allowed academic councils to tweak logic without code changes.
📊 4. CGPA / SGPA Calculations
Once all courses in a semester were evaluated, we triggered CGPA computation.
This used:
Grade points from the grade (e.g., A = 10, B = 8, etc.)
Course credit units
Weighted average formulas
Example formula:
cgpa = sum(gradePoint * credit) / totalCredits
We made sure:
Failed courses (F) were not included in CGPA
Re-attempted courses had only the latest attempt considered
All calculations were versioned and traceable
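Reduced to code, this is just a credit-weighted average over the eligible course results. A small sketch (the CourseResult shape is an assumption):

```java
// Sketch of the weighted CGPA/SGPA formula. The CourseResult record is an assumption.
import java.util.List;

record CourseResult(double gradePoint, double credits, boolean passed, boolean latestAttempt) {}

class GpaCalculator {

    // sgpa/cgpa = sum(gradePoint * credit) / totalCredits over the results passed in
    static double weightedGpa(List<CourseResult> results) {
        double weighted = 0, totalCredits = 0;
        for (CourseResult r : results) {
            if (!r.passed() || !r.latestAttempt()) continue;   // skip F grades and older attempts
            weighted += r.gradePoint() * r.credits();
            totalCredits += r.credits();
        }
        return totalCredits == 0 ? 0 : weighted / totalCredits;
    }
}
```
Feed it one semester's results for SGPA, or all semesters' results for CGPA.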
🔐 5. Locking Results
Once results were finalized:
They were locked (can’t be updated unless re-evaluation is opened)
Visible to students via portal or API
Report cards could be generated (PDF via backend service)
All locks were based on:
academic_year_semester
program_id, batch_id
Result lock flags + audit entries
✅ What Made This Work
Drools helped offload grading logic from code to configuration
Rules were program-specific, and configurable by admins
Grace and moderation were auditable and trackable
Relative grading didn’t affect absolute grading programs
CGPA logic was clean, testable, and extendable
📊 Part 6: Final Results, Locking, and CGPA Calculation
Tying it all together — how we generate final grades, lock them, and ensure students get what they deserve
After all evaluations, grading logic, grace marks, and moderation are applied, we now arrive at the most awaited stage: generating and publishing the final results.
This step may seem like just another DB update, but in reality, it's one of the most critical and sensitive operations in the academic system.
If anything goes wrong here — like CGPA miscalculation, premature result publication, or inconsistent records — it can create panic among students and chaos for the university.
Here’s how we handled it carefully, yet scalably.
📦 1. Generating the Final Result Summary
For each student, once all their assessments for the semester were evaluated and finalized, we generated a Student Assessment Summary per course:
courseId
finalMarks
grade
graceApplied
moderationApplied
attemptNo
evaluationStatus (EVALUATED)
All of these were stored in a centralized summary table that fed into:
Student dashboards
Grade cards
CGPA/SGPA calculators
Reports for academic boards
This ensured consistency — everyone was reading from the same computed record, not raw evaluation entries.
🧮 2. SGPA & CGPA Calculation
Once summaries for all courses in a semester were available, we calculated:
SGPA (Semester GPA) = weighted average of grade points in that semester
CGPA (Cumulative GPA) = weighted average of all passed semesters
We respected:
Course credits
Grade points (e.g., A=10, B=8, etc.)
Exclusion of failed/absent courses
Inclusion of latest attempt if the course was repeated
Formula example:
sgpa = sum(gradePoint * credit) / totalCreditsInSemester
cgpa = sum(gradePoint * credit) / totalCreditsAcrossAllSemesters
To handle edge cases, we also flagged:
Incomplete results (e.g., ongoing re-evaluation)
Failed semesters (SGPA not calculated)
Grade improvement scenarios (highest grade retained)
🔐 3. Result Locking: No More Changes After This
Once the academic board verified everything, results were locked.
Locking ensured:
No accidental changes to grades or marks
Evaluators could no longer update evaluations
Students couldn’t request re-evaluation unless formally reopened
Data was snapshot for report card generation and publishing
We locked results at various levels:
Assessment level (per course assessment)
Semester level (batch/program/academic year)
Student level (per student override, in special cases)
Lock info was stored in a grading_calculation_lock table with audit metadata (who locked it, when, why).
📋 4. Report Card Generation
Once locked, we exposed a PDF generation endpoint for report cards, which included:
Course-wise marks and grades
SGPA & CGPA
Evaluation remarks (if any)
Result status: PASSED / FAILED / WITHHELD
We cached generated PDFs (e.g., in S3) and exposed them via secure, tokenized URLs for download.
🔄 5. Handling Re-evaluation & Late Updates
Even after locking, some students may:
Request re-evaluation
Have pending moderation (due to manual override)
Submit grace/exception forms
In such cases:
We unlocked just the affected records (grading_calculation_lock entries by studentId)
Re-triggered the grading + CGPA computation
Regenerated the report card with a new version (keeping audit of the old one)
This modularity helped us support post-result changes without breaking the rest.
🧾 6. Audit Trails, Versioning & Rollback Safety
Every major step — evaluation submission, grading update, grace application, CGPA calculation — was:
Versioned (row-level tracking + timestamp + user)
Audited (with request IDs and user roles)
Linked to job execution logs (if part of bulk computation)
This gave us full traceability, especially during board reviews, moderation audits, or in case of disputes.
✅ What Worked Well
Final summaries simplified downstream data handling
Locks ensured result stability before publication
Modular CGPA logic handled complex academic rules
Re-evaluation support was isolated and rollback-safe
Students always saw the latest finalized snapshot
🚀 Part 7: Lessons Learned – What Worked, What Broke, and What We’d Do Differently
The final part: reflections from the backend trenches of university examinations
Building the digital examination system was one of the most challenging and rewarding experiences in my engineering journey. It wasn’t just about writing APIs and storing marks — it was about designing trust.
Trust that:
Student uploads won’t fail under pressure
Evaluators will have a smooth experience
Grades are accurate, auditable, and fair
Nothing breaks when results go live 🎓
Now that the system has gone through real exam cycles, here are some takeaways I wish someone had told me before we began.
⚙️ 1. Design for "Academic Chaos"
University systems are not linear. There are:
Grace mark policies that vary per semester
Evaluators assigned after exams are over
Students who upload blank answer sheets (and still want results 😅)
Moderation rules that change post-evaluation
What worked: We made everything rule-driven and modular (thanks, Drools).
What we’d do differently: Model these exceptions earlier instead of bolting them on later.
🕓 2. Deadline-Heavy Systems Need Elasticity
There are peak times when:
1,000+ students upload within 15 minutes
Evaluators log in last-minute and bulk-evaluate
Results must be published at exactly 6 PM, no excuses
What worked: Caching, async queues, and background jobs helped handle load.
What broke: File uploads briefly slowed down when too many concurrent connections hit S3.
What we’d change: Introduce rate-limiting and autoscaling S3 pre-signed URL generation services.
🔐 3. Locks Are Lifesavers (Until They're Not)
Result locking saved us from accidental updates. But…
Overly aggressive locking during batch operations caused deadlocks
Re-evaluation + locked result flow sometimes conflicted
Fix: Introduce fine-grained locks — per student/course instead of per batch — and proper rollback strategies.
🧠 4. Audits Are Not Optional
We had multiple instances of:
“Why is my grade lower than before?”
“Who changed this moderation mark?”
“The evaluator says they submitted, but it’s blank.”
Because we had:
Audit tables
Evaluation history
Submission timestamps
…we could always trace what happened. This saved us. Repeatedly.
If I were to summarize it: log everything. You’ll thank yourself later.
🧪 5. Testing with Real Academic Data > Any Dummy Set
We initially used mock data to simulate evaluations and results.
Reality check:
Real students submit in strange formats (scanned sideways, upside-down, zipped weirdly)
Real programs have exceptions that no one documented
Evaluators sometimes write comments in ALL CAPS with emojis
So we built:
Test suites using anonymized real data
Simulated concurrency tests (students + evaluators + result processing)
🧭 6. Know Your Personas Better
This wasn’t just an “admin and student” app. There were:
Students
Internal faculty
External evaluators
Program heads
Exam admins
Moderators
IT admins
Each needed a different view of the same data. Building role-based views early on made the rest of the system much easier to maintain.
✅ Summary: What We’d Keep and Improve
| What Worked Well | What We’d Improve |
| --- | --- |
| Drools-based grading logic | Early modeling of edge cases |
| Submission + evaluation separation | Better file handling under peak load |
| Strong audit trails | More intuitive evaluator interfaces |
| Locking + state management | Smarter conflict resolution in re-eval |
| Modular CGPA engine | Integrated test harness with real data |
🎓 Final Thoughts
Exams are stressful — not just for students, but also for the systems that power them.
What we built wasn’t just a backend — it was a trust framework for academic evaluation in the digital age.
If you're building something similar — whether for education, assessments, or high-stakes workflows — I hope this series gave you a real-world view into the architecture, trade-offs, and lessons that go beyond the API layer.