Part 1: Build an AWS Quiz Generator and exam practice app with the help of GenAI and Amazon Nova


Big shout out to the AWS Community Builder team for providing the credits needed to create this content, and for the amazing AI Engineering month of 2025.

A Generic Introduction

Hi everyone! I am Hein Htet Win, a professional Cloud and DevOps engineer with years of experience across industries such as FinTech, digital banking, ISPs, and cloud providers. I am passionate about testing and sharing the latest technical trends, so I write about Linux, DevOps, Cloud, GenAI, LLMs, RAG, and much more.

In today's blog, I will write about how Amazon Nova models can help us create and innovate applications in this AI world. Specifically, I will use the Nova Pro model to generate practice questions for AWS learners across different certification topics, such as AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and AWS Certified Solutions Architect Professional.

I have split this blog into a two-part series. In this first part, I cover the overall architecture, the use case, and the code I used, and test everything on my local machine only. In the second part, I will draft an AWS architecture diagram so the solution can be hosted on your own account, and live-test it with a real domain. So, without further ado, let's dive into the solution.


Introduction: The Power of Generative AI

Generative AI has surged in popularity recently, fueled by breakthroughs in computing power, vast datasets, and innovations in machine learning architectures like large language models (LLMs). Unlike traditional AI, which excels at analyzing data, GenAI creates—whether it’s drafting text, designing images, or composing music—mirroring human-like creativity at unprecedented speed. Its rise isn’t just technical; it’s cultural. Tools like ChatGPT and DeepSeek have democratized access, letting anyone brainstorm ideas, automate tedious tasks, or personalize experiences, from education to customer service.

Use case

Before Generative AI, to practice AWS questions in preparation for our certification exams, we relied solely on practice exam providers like TutorialDojo or WhizLabs. In the current era of technology, we can build our own tool that provides those practice questions with the help of GenAI and Large Language Models (LLMs). That is what we will try today, to see how practical and high-quality the content our models generate can be. I am not saying we can totally replace the professional providers, but as a POC, or just for fun, we can build something to play around with.

Understand the architecture

Our workflow and architecture for this quiz application are relatively simple: users interact with a web UI built with Streamlit, and the frontend talks to a backend API built with Python and FastAPI.

The backend API is responsible for interacting with the LLMs hosted in Amazon Bedrock and for formatting our prompts and responses, so that the responses returned from Bedrock are structured and consistent for every request. I will also walk you through the schema we defined and the validation logic in the code section below.

Below is the high-level flow diagram for our solution and what we are trying to build:


Pre-requisites

There are some pre-requisites in this tutorial if you want to follow along with me and build something cool:

  • An AWS account with admin/IAM access

  • Permission to enable Bedrock model access in one of the regions

  • Some general knowledge of Python Programming

  • Some AWS knowledge if you also want to deploy this solution on AWS in Part-2

Amazon Nova Pricing

Amazon Nova offers flexible and transparent pricing for its text and image generation models, but in this tutorial we will only look at the text generation models. Below is a detailed breakdown of on-demand and batch pricing for the different Amazon Nova models (pricing varies with the number of input and output tokens used in each request):

Prices are subject to change, so it's highly recommended to verify the latest details on the official AWS Bedrock pricing page.


Step-by-step walk through of our solution

In this section, I will walk you through the prerequisites we need to take care of before testing our code, and explain the code that interacts with Amazon Bedrock.

Step 1: Set Up Amazon Bedrock Access

The first thing we need to do is enable Amazon Bedrock and access to its foundation models. For this,

  • Go to the AWS console.

  • Navigate to Amazon Bedrock service.

  • If it's your first time, request access when prompted on the service page.

  • Under Foundation models → Model catalog, you can see which models are available to request, as well as the models you already have access to.

If you don't already have access to the Amazon Nova models, click the three dots under Actions and choose Modify access.

  • On the next page, click the Modify model access button.

  • Select the models you want to test that show Available to request under the Access status column. I already have access to most of the Amazon models, so I will request access to the Nova Premier model.

  • Click Next, review the models you requested, and if everything is correct, just Submit.

A short time after you submit your request, it should show “Access granted” as in the screenshot above.

Step 2: Let’s understand the repo structure and code flow

Our repository structure looks something like this:

.
├── app.py
├── format.py
├── main.py
├── README.md
└── requirements.txt

I will briefly explain what each file in this repo is responsible for:

  • app.py - Contains the frontend code built with Streamlit and controls the app's session state.

  • format.py - Contains our prompt template (explained in the next step) and the JSON schema for responses from our Nova model.

  • main.py - Contains a simple FastAPI backend with a route that generates AWS questions based on the certification name, difficulty level, and number of questions the user provides.

  • requirements.txt - Lists the dependency packages needed to run our application.

  • README.md - Contains the documentation on how to run and debug our application locally.

Step 3: Prompt Engineering and formatting the response from LLMs

Our solution generates practice questions for AWS learners, so let's talk about how. We will use an LLM hosted in Amazon Bedrock (in this context, Amazon Nova Pro) to generate the questions, so we need to prompt it in a way that reliably returns the desired output. Let's engineer this prompt to use the LLM effectively.

So what is Prompt Engineering?
Prompt engineering is the process of designing and refining the inputs (or “prompts”) given to an AI model, like a language model, to get the best possible response. It involves crafting clear and specific instructions to guide the AI in generating accurate, relevant, and useful outputs for a given task.

Our Prompt template for this AWS practice questions generator is as follows:

prompt_template = """ 
You are a quizbot that provides multiple-choice questions, answers, and explanations to users based on the AWS certification name provided. The goal is to help users understand the concepts by presenting questions of varying difficulty levels.

Guidelines:

Generate Questions Based on Certification name: Use the provided cert_name to generate {num_questions} questions related to it.

The certification name is {cert_name}.
The questions should be relevant to the cert_name and test the user's understanding of the material to prepare for that particular certification.

Multiple-Choice Options: Provide 4 options for each question, with one correct answer.

Difficulty Levels:

Level 1 (Easy): The question should be easy but implicit, requiring a bit more thought to answer. The options should also be easy but slightly more nuanced. The explanation should be more complex and go a bit deeper into the concept.

Level 2 (Medium): The question should require some understanding of the topic and be more implicit in nature. The options should provide close alternatives. The explanation should involve practical application or deeper insights into the topic.

Level 3 (Advanced): The question should be challenging, requiring a strong understanding of the topic. The options should be distinct but tricky. The explanation should go into detail, possibly covering edge cases or advanced concepts.


The difficulty level is {difficulty_level}.

Output Format:
You must use the tool "generate_quiz_questions" to return your response. 
The tool requires a list of questions, where each question is a dictionary containing:
- "question": The question text
- "options": A list of 4 options labeled as strings "A", "B", "C", and "D". (No need to show the labels in the options)
- "answer": The correct option letter ("A", "B", "C", or "D")
- "explanation": A brief explanation of why the answer is correct

All explanations should be clear and concise, helping the user understand the concept better, not more than three sentences
"""

This prompt tells the AI to create quiz questions, making sure they are relevant to the certification name the user provided and match the requested difficulty. It also formats the output consistently, which makes processing and presenting the questions to the user easier.

Now that we have created our prompt, we are ready to start using AWS Bedrock to create our quiz questions. We've instructed the AI to create questions for us and specified the required information for each one: the question, options, answer, and explanation. But hold on, in what format will it return these responses?

For this application to work correctly, the generated questions must always be returned as JSON with the same structure. With the prompt instructions alone, that structure is not guaranteed. So, we use a method called Tool Calling.

Tool Calling

With AWS Bedrock, we can define exactly how we want the JSON response to be structured, and enforce it strictly. This is crucial because the app requires a consistent format to retrieve the questions; otherwise, it could occasionally fail. Ensuring the correct format allows the application to function smoothly and avoids unexpected errors in question retrieval.

tools = [
        {
            "toolSpec": {
                "name": "generate_quiz_questions",
                "description": "Generate quiz questions based on the cert_name provided.",
                "inputSchema": {
                    "json": {  # This is the key AWS Bedrock expects
                        "type": "object",
                        "properties": {
                            "questions": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "question": {"type": "string"},
                                        "options": {
                                            "type": "array",
                                            "items": {"type": "string"},
                                            "minItems": 4,
                                            "maxItems": 4
                                        },
                                        "answer": {
                                            "type": "string",
                                            "enum": ["A", "B", "C", "D"]
                                        },
                                        "explanation": {"type": "string"}
                                    },
                                    "required": ["question", "options", "answer", "explanation"]
                                }
                            }
                        },
                        "required": ["questions"]
                    }
                }
            }
        }
    ]

We have defined a tool called “generate_quiz_questions” that generates quiz questions based on the certificate name provided. This tool takes an input in JSON format, where it expects an array of questions. Each question includes the question text, four multiple-choice options (labeled A, B, C, D), the correct answer, and a brief explanation. The tool ensures that the output follows a structured format, with all required fields, making it easy to generate relevant and consistent questions.
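
To make the constraints concrete, here is a small, dependency-free check that mirrors what the inputSchema above enforces. This is an illustrative sketch, not the app's actual validation (Bedrock itself enforces the schema on the tool call):

```python
# A minimal check mirroring the tool's inputSchema: each question needs the
# four required fields, exactly 4 options, and an answer in {"A","B","C","D"}.
def validate_quiz_payload(payload: dict) -> bool:
    questions = payload.get("questions")
    if not isinstance(questions, list):
        return False
    for q in questions:
        if not all(k in q for k in ("question", "options", "answer", "explanation")):
            return False
        if not (isinstance(q["options"], list) and len(q["options"]) == 4):
            return False
        if q["answer"] not in {"A", "B", "C", "D"}:
            return False
    return True

sample = {
    "questions": [
        {
            "question": "Which service provides object storage?",
            "options": ["Amazon S3", "Amazon EC2", "Amazon VPC", "AWS IAM"],
            "answer": "A",
            "explanation": "Amazon S3 is AWS's object storage service.",
        }
    ]
}
print(validate_quiz_payload(sample))  # True
```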

import boto3

# Bedrock Runtime client; use the region where you enabled model access
# (us-east-1 here is an example)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
modelId = "amazon.nova-pro-v1:0"  # Amazon Nova Pro

message_list = [{"role": "user", "content": [{"text": f"Generate {num_questions} quiz questions based on the provided cert_name."}]}]
response = bedrock.converse(
    modelId=modelId,
    messages=message_list,
    system=[
        {"text": prompt_template.format(cert_name=cert_name, difficulty_level=difficulty_level, num_questions=num_questions)},
    ],
    toolConfig={
        "tools": tools
    },
    inferenceConfig={
        "maxTokens": 4000,
        "temperature": 0.5
    },
)

We use the Bedrock Converse API to ask AWS Bedrock to create quiz questions for the given certification name. The method takes a list of messages, including the system prompt and the user's request, and runs them through the model. Let's break down some of the parameters:

modelId: This is the identifier for the model we are using. In our case, we are using Amazon Nova Pro, a powerful language model available on AWS Bedrock that can handle tasks like question generation with ease.

maxTokens: This parameter sets the maximum number of tokens (roughly, pieces of words) the model can generate in a single response. Here we set it to 4000 tokens, which allows the model to produce a substantial amount of text without being cut off.

temperature: This setting controls the randomness of the model’s responses. A value of 0.5 strikes a balance between creativity and coherence, making the model more likely to generate varied yet sensible outputs.
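
After the call returns, the structured quiz data arrives as a "toolUse" content block inside the assistant message. A sketch of extracting it (the sample dict below is a trimmed stand-in mirroring the shape the Converse API returns):

```python
# Pull the tool's structured input out of a Converse API response.
def extract_tool_input(response: dict, tool_name: str = "generate_quiz_questions") -> dict:
    for block in response["output"]["message"]["content"]:
        tool_use = block.get("toolUse")
        if tool_use and tool_use["name"] == tool_name:
            return tool_use["input"]
    raise ValueError(f"No toolUse block named {tool_name!r} in response")

# Trimmed sample response in the Converse response shape (real responses
# also carry usage metrics, stopReason, etc.).
sample_response = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {"toolUse": {"toolUseId": "abc123",
                             "name": "generate_quiz_questions",
                             "input": {"questions": []}}}
            ],
        }
    }
}
print(extract_tool_input(sample_response))  # {'questions': []}
```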

Step 4: Building the API Layer with Python and FastAPI

In this section, we will describe how we built the API layer for our quiz generator application using FastAPI. FastAPI is a modern, fast web framework for building APIs, and it is perfect for handling requests with high performance. This API layer serves as the interface between the user and the AWS Bedrock model, allowing us to generate quiz questions based on user inputs.

Endpoints

We have defined a couple of key endpoints for the quiz generator API:

  1. POST /generate-questions: This is the main endpoint of our API. It accepts a QuizRequest object that contains:

    • cert_name: The subject or material the quiz questions should be based on.

    • difficulty: The difficulty level of the questions.

    • num_questions: The number of quiz questions to generate.
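
In the FastAPI app, QuizRequest is presumably a Pydantic model so FastAPI can validate the request body automatically. Here is a dependency-free dataclass sketch of the same shape, with field names taken from the endpoint and types assumed:

```python
from dataclasses import dataclass

# Sketch of the request body for POST /generate-questions. The field names
# mirror the endpoint code; the exact types (e.g. whether difficulty is a
# level number or a label) are assumptions for illustration.
@dataclass
class QuizRequest:
    cert_name: str       # e.g. "AWS Certified Solutions Architect Associate"
    difficulty: int      # 1 = easy, 2 = medium, 3 = advanced
    num_questions: int   # how many questions to generate

req = QuizRequest(
    cert_name="AWS Certified DevOps Engineer Professional",
    difficulty=3,
    num_questions=5,
)
print(req.num_questions)  # 5
```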

@app.post("/generate-questions")
async def generate_questions(request: QuizRequest):
    try:
        # Call the format.get_quiz function to generate quiz data
        quiz_json = format.get_quiz(
            cert_name=request.cert_name,  # Pass the certificate name from the request
            difficulty=request.difficulty,  # Pass the difficulty level
            num_questions=request.num_questions  # Pass the number of questions
        )

        # Parse the JSON string returned by format.get_quiz()
        quiz_data = json.loads(quiz_json)

        # The data structure is {"questions": {"questions": [...]}}
        # Restructure it to match the expected response model
        if "questions" in quiz_data and "questions" in quiz_data["questions"]:
            return {"questions": quiz_data["questions"]["questions"]}
        else:
            # Handle case where the response structure is unexpected
            raise HTTPException(status_code=500, detail="Unexpected response structure from quiz generator")

    except Exception as e:
        # Handle any exceptions and return a 500 error with the exception message
        raise HTTPException(status_code=500, detail=f"Failed to generate quiz: {str(e)}")
  2. GET /: A simple health check endpoint that responds with a status message to confirm the service is up and running. This is particularly useful for monitoring and debugging.

# Define a health check endpoint to verify the API is running
@app.get("/")
async def health_check():
    return {"status": "healthy"}  # Return a simple JSON response indicating the API is healthy

Step 5: Testing and running in local

In this step, we will run our application locally, on my MacBook Pro (2021). First, we need to install the Python dependencies. I always like to keep my test projects separate, so I will use conda to create a virtual environment for Python.

conda create -n aws-quiz python=3.13

This command creates a virtual environment separated from my system-wide Python configuration. We can activate it with:

conda activate aws-quiz

After that, we can install our dependency packages with:

pip install -r requirements.txt

Then start the backend API with:

uvicorn main:app --host 0.0.0.0 --port 8000 --reload

This starts our backend API on localhost, port 8000, which is the address the frontend in app.py is configured to call.

Finally, start the frontend application that users will interact with:

streamlit run app.py

Step 6: Validating the PoC (Proof of concept)

When you go to http://localhost:8501 after successfully starting the frontend application, you should see the home page of our application like this:

After you type in the desired certification name, number of questions, and difficulty level, the app will generate your practice questions like this:

After you've submitted your answers, the app will return your score, along with a Show Explanations button that displays brief explanations of each question and why your answers are right or wrong.

If you click the button to show the explanations, the output should look something like this:

That's it! We built a GenAI application that can generate practice questions for the AWS certifications we choose, along with brief explanations of the answers and why we were correct or incorrect. In the next article of this series, I will show you how to deploy our application code to a live AWS environment and bind it to a custom domain, so everyone around the world can access and use our awesome tool.

Conclusion

In this article, I used my GitHub repository to store and version the code. Feel free to modify and re-use it in any way you like!

If you want to know how to deploy the solution in this blog post to a live environment on AWS and access it from a custom domain, please check out Part 2 of this series - CLICK HERE!

This article is part of a two-part series — Designing, Building, and Deploying a GenAI AWS Quiz Application.

Part 1 - Build an AWS Quiz Generator and exam practice app with the help of GenAI and Amazon Nova

Part 2 - Deploying and using an AWS Quiz Generator on AWS infrastructure

Thank you all for reading till the end!

