Boost Your Study Sessions by Building a Custom Quiz Generator with Ollama and Streamlit

What are we building?

In this tutorial, you will learn how to use Ollama to build a quiz generator that can improve your study sessions. This tutorial is for anyone who believes it will benefit them. Alright, let's begin.

What is Ollama?

Ollama is a platform that gives you access to thousands of open-source models, including vision language models and large language models. Some of the most capable models available include Gemma 3, Llama and DeepSeek. To install Ollama, go to the official website and download the current version for your operating system.
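
At the time of writing, the Ollama download page also offers a one-line install script for Linux, which you can run instead of the graphical installer:

curl -fsSL https://ollama.com/install.sh | sh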

Once Ollama is installed, check the version:

ollama --version

In this tutorial we will be using Llama 3. However, you can pick any capable model of your choice from the model page. You can pull the model with the command below.

ollama pull llama3

You can then test the model with the command below.

ollama run llama3

Now that you have tested your model and it is working, we can move on to our IDE to begin the project.

Setting Up Your Streamlit Project

We will be making use of Streamlit in order to build a user interface with minimal effort.

Create a folder for your project, and inside it create a virtual environment to install all the required packages. This is essential for isolating your dependencies.
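
A minimal sketch of the setup commands, assuming Python 3 on macOS or Linux (on Windows, activate the environment with venv\Scripts\activate instead):

python -m venv venv
source venv/bin/activate

After this, copy the requirements.txt file below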

streamlit==1.43.2
ollama==0.4.7
langchain==0.3.19
langchain-community==0.3.19
pydantic==2.10.6
reportlab==4.3.1
pdfminer.six==20240706
docx2txt==0.8.0

and then install the packages by running the following command in your terminal.

pip install -r requirements.txt

In your project directory, create a file called main.py and write the code below:

# main.py
import streamlit as st

def main():
    st.header("QuizGenerator app")
    with st.sidebar:
        st.write("My first Streamlit app")

if __name__=="__main__":
    main()

Then in your terminal run

streamlit run main.py

You should see a simple page with the app header and a message in the sidebar.

After that, clear the contents of the main.py file and let's begin.

Project Buildout

Our project will be divided into three main files: main.py, utils.py and quiz_utils.py.

  • main.py contains our main logic

  • utils.py contains helper functions

  • quiz_utils.py contains functions specifically for our quiz

As we go on with our project, I will note which files to update.

Uploading, Processing and Storing Files

To generate our quiz, we'll have to find a way to provide our study material as context to the AI model. This is where Streamlit and LangChain come in. Streamlit already has UI components for uploading files of different types, and LangChain comes with modules to load data from various sources, including websites, YouTube, PDFs, text documents and more.

Create a new file named utils.py and add the following code:

# utils.py
import sqlite3
from tempfile import NamedTemporaryFile

import streamlit as st
from langchain_community.document_loaders import (
    TextLoader, PyPDFLoader, YoutubeLoader, WebBaseLoader, Docx2txtLoader
)
from langchain_text_splitters import RecursiveCharacterTextSplitter as splitter

def save_sources(files):
    # Map each supported file extension to its LangChain loader
    loaders = {
        "txt": TextLoader,
        "pdf": PyPDFLoader,
        "docx": Docx2txtLoader,
        "doc": Docx2txtLoader
    }
    try:
        for file in files:
            file_type = file.name.split(".")[-1]
            # Write the uploaded file to a temporary file on disk so the loader can read it
            with NamedTemporaryFile(delete=False, suffix=f".{file_type}") as temp_file:
                temp_file.write(file.read())
            # Load the file depending on its type
            loader = loaders.get(file_type)
            document = loader(temp_file.name).load()
            # Combine the contents of all loaded pages into one string
            content = "".join([doc.page_content for doc in document])
            # Split the content into smaller chunks ("pages" of our quiz source)
            chunks = splitter(chunk_size=5500, chunk_overlap=30).split_text(content)
            for chunk in chunks:
                save_to_database(file.name, chunk)
        st.success("Files successfully loaded")
    except Exception as e:
        st.write(e)
        st.warning("Unsupported file format")

def save_to_database(title, content):
    """
    Saves document title and content to an SQLite database.
    Initializes the database and table if they don't exist.
    """
    # Connect to an SQLite database (or create it if it doesn't exist)
    conn = sqlite3.connect('content.db')
    # Create a cursor object using the cursor() method
    cursor = conn.cursor()
    # Create the documents table if it doesn't exist yet
    cursor.execute('''CREATE TABLE IF NOT EXISTS documents
                  (title text, content text)''')
    # Insert a row of data
    cursor.execute("INSERT INTO documents (title, content) VALUES (?, ?)", (title, content))
    # Save (commit) the changes
    conn.commit()
    # Close the connection
    conn.close()

# main.py
import streamlit as st
from utils import *

def main():
    st.header("QuizGenerator app")
    files = st.file_uploader("Upload files", accept_multiple_files=True, type=["txt", "pdf", "docx"])
    if st.button("Load Sources"):
        save_sources(files)

if __name__ == "__main__":
    main()

In the code above, we map file extensions to LangChain loaders for txt, PDF and docx files. By clicking on the "Load Sources" button, we read the contents of the uploaded files, split them into chunks, and store them. We use the built-in sqlite3 module for storage, which means we don't have to re-upload a file every time we want to create a quiz.
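
The imports in utils.py also bring in WebBaseLoader and YoutubeLoader, which we don't use in save_sources. If you also want to quiz yourself on a web page, a minimal sketch of an extra helper could look like the following (save_url_source is a hypothetical addition to utils.py, and WebBaseLoader needs beautifulsoup4 installed):

# utils.py (optional, hypothetical addition)
def save_url_source(url):
    """Load a web page with WebBaseLoader, chunk it and store it like an uploaded file."""
    document = WebBaseLoader(url).load()
    content = "".join([doc.page_content for doc in document])
    chunks = splitter(chunk_size=5500, chunk_overlap=30).split_text(content)
    for chunk in chunks:
        save_to_database(url, chunk)
    st.success("Web page successfully loaded")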

Retrieving our data

Now that we've uploaded, processed and stored our data, we need a way to retrieve it when necessary. In the same utils.py file, add the following:

# utils.py
import sqlite3

# Previous code for Uploading and Storing

def get_titles():
    """Fetch all document titles from the database."""
    conn = sqlite3.connect('content.db')
    cursor = conn.cursor()
    cursor.execute("SELECT DISTINCT title FROM documents")  # Each chunk repeats its title, so deduplicate
    titles = [row[0] for row in cursor.fetchall()]  # Extract titles from the query results
    conn.close()
    return titles

def get_content(title):
    """Fetch all content chunks ("pages") for a given title from the database."""
    conn = sqlite3.connect('content.db')
    cursor = conn.cursor()
    cursor.execute("SELECT content FROM documents WHERE title = ?", (title,))
    contents = [row[0] for row in cursor.fetchall()]  # One entry per stored chunk
    conn.close()
    return contents

# main.py
from utils import *

def main():
    # Previous code for uploading files

    titles = get_titles()
    selected_title = st.selectbox("Select a Document", titles, key="selected_title")
    contents = get_content(selected_title) if selected_title else []
    with st.sidebar.expander("Pages"):
        if contents:
            page_num = st.selectbox("Page Number", range(len(contents)), key="page_num")
            st.write(contents[page_num])

In the above, we create two functions: one to get our titles and the other to retrieve the stored content. The sidebar "Pages" selector lets us pick which chunk (page) of the document to use when generating our quiz. Now that we have that sorted out, we can move on to generating our quiz with Ollama.

Quiz Generation with Ollama

Before we can use Ollama to achieve our goal, we need to ensure our model generates its output in a structured manner (specifically JSON). We will therefore first use Pydantic to define the structure of our generated quiz.

# quiz_utils.py
from pydantic import BaseModel
from ollama import chat

class Quiz(BaseModel):
    question: str
    choices: list[str]
    correct_answer: str
    explanation: str

class Quizlist(BaseModel):
    quizzes: list[Quiz]

Then we can define our function to generate the quiz:

# quiz_utils.py

# previous code including pydantic class for quizzes

def generate_quiz(content_text,question_num):
    SYSTEM_PROMPT= f""" Using the provided lecture content, 
                        create a Master-level multiple-choice exam in strict JSON format 
                        that includes exactly {question_num} questions. 

                        Ensure the structure is:
                        [{{'question': '...', 'choices': ['...'], 'correct_answer': '...',
                         'explanation': '...'}}, ...]\n

                        #Further Instructions
                        -Avoid excessive whitespaces
                        -Always return output as JSON

                        #Content:\n
                        {content_text}
                    """

    response = chat(
        messages=[{'role': 'user', 'content': SYSTEM_PROMPT}],
        model='llama3',  # Replace with your own model
        format=Quizlist.model_json_schema(),
    )

    # Extract response content (which should be a JSON string)
    try:
        quiz_obj = Quizlist.model_validate_json(response.message.content)
        return quiz_obj
    except Exception as e:
        print("Error parsing response:", e)
        return None

Next, we need to parse the quiz:

# quiz_utils.py
# Previous code
def parse_quiz(quizlist):
    """
    Parses the quiz data from a Quizlist object into a list of dictionaries.
    Each dictionary represents a quiz question and contains the question,
    choices, correct answer, and explanation.
    """
    parsed_data = []
    for quiz in quizlist.quizzes:  # Access the 'quizzes' attribute
        parsed_data.append({
            "question": quiz.question,
            "choices": quiz.choices,
            "correct_answer": quiz.correct_answer,
            "explanation": quiz.explanation
        })
    return parsed_data
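
As a quick sanity check, you can run the two functions together from a Python shell (the content string below is made-up sample text, and this assumes you have already pulled llama3):

# quick check, run from the project directory
from quiz_utils import generate_quiz, parse_quiz

sample_text = "Photosynthesis converts light energy into chemical energy stored in glucose."
quiz = generate_quiz(sample_text, 3)  # returns a Quizlist, or None if parsing failed
if quiz:
    for item in parse_quiz(quiz):  # list of dicts with question, choices, answer, explanation
        print(item["question"], item["choices"], item["correct_answer"], sep="\n")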

Finally, we will create a function which takes our parsed quiz and saves it as a PDF.

# quiz_utils.py
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, ListFlowable

def save_pdf(parsed_quiz, title):
    """
    Generates a PDF file of the quiz from parsed quiz data.
    Uses the ReportLab library to create the PDF document, including questions,
    answer choices as bullet points, and answer explanations.
    """
    doc = SimpleDocTemplate(f"{title}.pdf", pagesize=letter)
    styles = getSampleStyleSheet()
    Story = []
    question_style = ParagraphStyle(
        name='QuestionStyle',
        parent=styles['Normal'],
        fontName='Helvetica-Bold',
        fontSize=12,
        leading=14,
        spaceAfter=6
    )
    answer_style = ParagraphStyle(
        name='AnswerStyle',
        parent=styles['Normal'],
        fontName='Helvetica',
        fontSize=10,
        leading=12,
        spaceAfter=6
    )
    explanation_style = ParagraphStyle(
        name='ExplanationStyle',
        parent=styles['Italic'],
        fontName='Times-Italic',
        fontSize=10,
        leading=12,
        spaceAfter=12
    )

    for i, quiz_item in enumerate(parsed_quiz):
        question_num = i + 1
        question_text = f"{question_num}. {quiz_item['question']}"
        Story.append(Paragraph(question_text, question_style))

        # Add the answer choices as a bulleted list
        choices = [Paragraph(choice, answer_style) for choice in quiz_item['choices']]
        Story.append(ListFlowable(choices, bulletType='bullet'))
        Story.append(Spacer(1, 0.2*inch)) # space after each question

    Story.append(Paragraph("Answers and Explanations", styles['Heading2']))
    for i, quiz_item in enumerate(parsed_quiz):
        question_num = i + 1
        answer_text = f"{question_num}. Correct Answer: {quiz_item['correct_answer']}"
        Story.append(Paragraph(answer_text, question_style))
        explanation_text = f"Explanation: {quiz_item['explanation']}"
        Story.append(Paragraph(explanation_text, explanation_style))
        Story.append(Spacer(1, 0.2*inch)) # space after each answer explanation

    doc.build(Story)
    return "{title}.pdf"

Putting it All Together

Now we have defined our functions for:

  • Uploading and Storing documents

  • Retrieving stored documents

  • Generating and parsing our quiz

  • and finally storing our quiz as pdf.

We can then combine all of this in our main app:

# main.py
import streamlit as st
from utils import *
from quiz_utils import *

def main():
    st.header("QuizGenerator app")

    # Upload and store new documents
    files = st.file_uploader("Upload files", accept_multiple_files=True, type=["txt", "pdf", "docx"])
    if st.button("Load Sources"):
        save_sources(files)

    # Retrieve stored documents
    titles = get_titles()
    selected_title = st.selectbox("Select a Document", titles, key="selected_title")
    contents = get_content(selected_title) if selected_title else []

    # Preview the pages (chunks) of the selected document in the sidebar
    page_num = 0
    with st.sidebar.expander("Pages"):
        if contents:
            page_num = st.selectbox("Page Number", range(len(contents)), key="page_num")
            st.write(contents[page_num])

    num_question = st.slider("How many questions would you like to generate?", 5, 25)
    st.write(f"You have chosen {num_question} questions.")

    if st.button("Generate Quiz") and contents:
        quiz = generate_quiz(contents[page_num], num_question)
        if quiz:
            st.session_state.parsed_quiz = parse_quiz(quiz)  # Store the parsed quiz across reruns

    if "parsed_quiz" in st.session_state:
        parsed_quiz = st.session_state.parsed_quiz  # Get the parsed quiz from session state
        display_quiz(parsed_quiz)  # Show the parsed quiz
        show_download_button(parsed_quiz)  # Show the download button after the quiz is shown

if __name__ == "__main__":
    main()
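
We call display_quiz and show_download_button above without defining them yet. Here is a minimal sketch of what they could look like (one possible implementation, assuming they live in quiz_utils.py and reuse save_pdf plus the selected_title value stored in session state):

# quiz_utils.py (one possible implementation of the display helpers)
import streamlit as st

def display_quiz(parsed_quiz):
    """Render each question with its choices and hide the answer behind an expander."""
    for i, item in enumerate(parsed_quiz, start=1):
        st.subheader(f"{i}. {item['question']}")
        st.radio("Choices", item["choices"], key=f"choice_{i}")
        with st.expander("Show answer"):
            st.write(item["correct_answer"])
            st.caption(item["explanation"])

def show_download_button(parsed_quiz):
    """Save the quiz as a PDF and offer it for download."""
    title = st.session_state.get("selected_title", "quiz")
    pdf_path = save_pdf(parsed_quiz, title)
    with open(pdf_path, "rb") as f:
        st.download_button("Download Quiz PDF", f, file_name=pdf_path)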

You can then run the app from your project directory in your terminal:

streamlit run main.py

Then relax and enjoy your quiz generator.

Conclusion

Now that we have learned how to use Ollama and, most importantly, how to structure its output with Pydantic, you can apply this knowledge to other projects like research assistant tools, automated study tools or content recommendation systems.

If you prefer an online tool that you can use without any setup, you can visit this free website. I hope you enjoy your quiz generator, and good luck studying.

Written by Fikunyinmi Adebola