Maths Revision Material via Amazon Nova: An Experiment

Marika Bergman
6 min read

Finding effective ways to help primary school children revise maths and uncover their knowledge gaps must be one of those areas where AI tools can massively help us. Around this idea, I recently embarked on a little experiment. The idea was born after I found out that the Oak National Academy has maths resources across the UK national curriculum, and that they offer an open API for accessing some of these resources.

As the API is currently in beta and only some of the resources are available, I started by checking what was there that I could work with. The Oak Academy website has video lessons for different maths units, and each lesson has an exit quiz. The exit quizzes could be interesting material to work with: you could use them to generate more similar quizzes that test the pupil, for example, at the end of the year across the whole year's curriculum, revealing the units or lessons they are not confident with and helping them revise.

The exit quizzes usually contain six questions each, and most of the questions have an image attached to them in this type of format:

I started by fetching the exit quiz materials for years 1 and 2 and saved them in Amazon S3 as PDF files. Next, I would configure the Amazon Nova models to use these materials as a starting point for creating similar content. The idea was to use one Nova model to extract information from each PDF and create content for a new quiz question based on it. Another Nova model would then be used to create an image based on the image description:
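As a rough sketch, the storage step could look like the following. The bucket name and key layout here are my own hypothetical choices for illustration, not anything prescribed by the Oak API:

```python
# Hypothetical sketch of storing fetched exit quiz PDFs in S3.
# Bucket name and key layout are illustrative assumptions.
def quiz_key(year: int, lesson_slug: str) -> str:
    """Build a predictable S3 object key for one lesson's exit quiz PDF."""
    return f"exit-quizzes/year-{year}/{lesson_slug}.pdf"

def store_exit_quiz(year: int, lesson_slug: str, pdf_bytes: bytes) -> str:
    """Upload one exit quiz PDF to S3 and return its key."""
    import boto3  # imported lazily so quiz_key stays usable without AWS deps
    key = quiz_key(year, lesson_slug)
    boto3.client("s3").put_object(
        Bucket="maths-revision-quizzes",  # hypothetical bucket
        Key=key,
        Body=pdf_bytes,
        ContentType="application/pdf",
    )
    return key
```

A predictable key scheme like this makes it easy to iterate over all quizzes for a given year in the later steps.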

Describing the quizzes and creating content with Amazon Nova Lite

In order to use the quizzes that I now had saved as PDF files, I would need to transform them into a format the model can utilise. For this task, I used Amazon Nova Lite, a low-cost multimodal model that can process image, video and text input.

I wanted the model to extract some information from the PDF: the lesson name, the question and the answer options. I also wanted it to determine the correct answer to the question, as well as create a detailed description of the image associated with the question. In addition, I wanted it to create a new, similar question-answer pair and a description for an image that could accompany that new question. The model returned the data in this format:

{
  "lesson_name": "Add and subtract 1 to and from a 2-digit number crossing the tens boundary",
  "original_question": "Which decade is missing on the number line? Tick 1 correct answer the fifties the sixties the forties",
  "original_answer_options": [
    "the fifties",
    "the sixties",
    "the forties"
  ],
  "original_correct_answer": "the sixties",
  "original_imageDescription": "The image shows a number line with numbers 49 and 50 marked. There is a gap between 50 and 60, indicating a missing decade. The options provided are 'the fifties', 'the sixties', and 'the forties'.",
  "new_question": "Which number is missing on the number line? Tick 1 correct answer 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60",
  "new_answer_options": [
    "45",
    "46",
    "47",
    "50"
  ],
  "new_correct_answer": "45",
  "new_imageDescription": "The image shows a number line with numbers 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, and 60 marked. There is a gap between 44 and 46, indicating a missing number. The options provided are '45', '46', '47', and '50'."
}
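This extraction step can be sketched with the Bedrock Converse API, which accepts a PDF as a document block alongside a text prompt. The model ID follows the Nova Lite naming, but the prompt wording below is my illustrative approximation, not the exact prompt that was iterated on:

```python
# Sketch of the extraction step: send the quiz PDF to Nova Lite via the
# Bedrock Converse API and parse the JSON it returns. The prompt text is
# an approximation of the real one.
import json

NOVA_LITE_ID = "amazon.nova-lite-v1:0"

EXTRACTION_PROMPT = (
    "From the attached exit quiz, extract the lesson name, one question, "
    "its answer options, the correct answer, and a detailed description of "
    "the question's image. Then create a new, similar question with answer "
    "options, a correct answer and an image description. Respond with JSON "
    "only, using the keys lesson_name, original_question, "
    "original_answer_options, original_correct_answer, "
    "original_imageDescription, new_question, new_answer_options, "
    "new_correct_answer and new_imageDescription."
)

def describe_quiz(pdf_bytes: bytes) -> dict:
    """Send one quiz PDF to Nova Lite and parse the JSON reply."""
    import boto3  # lazy import: only needed when actually calling Bedrock
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=NOVA_LITE_ID,
        messages=[{
            "role": "user",
            "content": [
                {"document": {"format": "pdf", "name": "exit-quiz",
                              "source": {"bytes": pdf_bytes}}},
                {"text": EXTRACTION_PROMPT},
            ],
        }],
    )
    text = response["output"]["message"]["content"][0]["text"]
    return json.loads(text)
```

Asking for JSON with a fixed set of keys keeps the downstream image-generation step simple, since the new image description can be pulled straight out of the parsed dictionary.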

The model was good at extracting the different questions from the document, determining the correct answers, and describing the relevant images. The newly created questions also seemed to have a similar style and to be aimed at a similar age group, at least based on the limited tests I was able to complete within a restricted timeframe. The prompt had to be modified a few times based on issues I came across, such as the model getting confused by blank spaces in some of the questions. Further tests would most likely reveal more such issues and help refine the prompt further.

Creating a new image with Amazon Nova Canvas

From the process with Amazon Nova Lite I now had a new question, an answer to it, and a description for a new image that could accompany it. The new image could now be created with Amazon Nova Canvas, an image generation model that can create images from text and image input. I first tried creating images based purely on the text prompt from the previous step, combined with a general prompt describing the purpose of the image and the style to follow. The images were of nice quality, but getting the style right was difficult. Even after trying several types of prompts, the model seemed unwilling to create the kind of images suitable for children's maths exercises and instead tended towards more realistic compositions. For example, if the prompt asked for a clear image of a certain number of apples next to each other that a child could easily count, the created image showed the apples in a more artistic and realistic arrangement where some apples were only partially visible. An even bigger issue was accuracy: if the prompt asked for an image with three apples, the produced images sometimes contained two or four apples instead, and the results didn't seem very reliable.
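The text-only attempt can be sketched as follows. Nova Canvas is invoked through `invoke_model` with a `TEXT_IMAGE` task; the style prompt here is a paraphrase of the kind of instruction I used, not the exact wording:

```python
# Sketch of text-only image generation with Nova Canvas. The style prompt
# is a paraphrase; the request shape follows the TEXT_IMAGE task schema.
import base64
import json

STYLE_PROMPT = (
    "Simple, flat illustration for a primary school maths quiz, plain "
    "white background, clearly separated and easily countable objects. "
)

def build_canvas_request(image_description: str, seed: int = 0) -> dict:
    """Build the invoke_model body for a plain TEXT_IMAGE task."""
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": STYLE_PROMPT + image_description},
        "imageGenerationConfig": {
            "numberOfImages": 1, "width": 1024, "height": 1024,
            "cfgScale": 8.0, "seed": seed,
        },
    }

def generate_image(image_description: str) -> bytes:
    """Invoke Nova Canvas and decode the first returned image."""
    import boto3  # lazy import: only needed when actually calling Bedrock
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.nova-canvas-v1:0",
        body=json.dumps(build_canvas_request(image_description)),
    )
    payload = json.loads(response["body"].read())
    return base64.b64decode(payload["images"][0])
```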

I then tried adding the original image as a conditioning image to the prompt - in this example to get the model to create a number line like in the first example:

I thought the conditioning image might help get the style right and help the model understand exactly what I need. Style-wise, this worked as expected, and the created image was very close to the style of the original. For example, when the prompt requested a number line and the conditioning image showed a number line, the created image also contained some kind of number line, whereas without the conditioning image the whole concept of a number line was often lost. However, the number line still didn't look right otherwise. I experimented further by changing the control strength. When the strength was closer to 1, I got images that were very close to the original image (and didn't follow the prompt requesting changes to match the new question and answer pair at all). Reducing the strength closer to zero made the results more interesting: they no longer followed the original image exactly but still displayed, for example, a number line as requested. But as is clearly visible, the numbers themselves were completely mixed up, and these images wouldn't work for their intended purpose:
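In the request body, the conditioning comes down to three extra fields in `textToImageParams`. The sketch below assumes the original quiz image is available as PNG bytes; the 0.4 value is just one example of a strength "closer to zero":

```python
# Sketch of a conditioned TEXT_IMAGE request: the original quiz image is
# passed as a base64-encoded conditioning image. Field names follow the
# Nova Canvas request schema; the strength value is an example.
import base64

def build_conditioned_request(image_description: str,
                              original_png: bytes,
                              control_strength: float = 0.4) -> dict:
    """TEXT_IMAGE request that also conditions on the original image."""
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": image_description,
            "conditionImage": base64.b64encode(original_png).decode("utf-8"),
            "controlMode": "CANNY_EDGE",  # follow the edges/layout of the original
            "controlStrength": control_strength,  # 0..1: higher hugs the original more
        },
        "imageGenerationConfig": {
            "numberOfImages": 1, "width": 1024, "height": 1024,
            "cfgScale": 8.0, "seed": 0,
        },
    }
```

Sweeping `control_strength` between runs is what produced the trade-off described above: near 1 the output is essentially the original image, near 0 it follows the text prompt more but drifts in layout.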

In summary, the workflow of using the existing quizzes as a foundation for creating more similar question-answer pairs worked quite well, apart from the images containing inaccurate numerals. If it were possible to create accurate images, this workflow could be used to create a large number of questions across the curriculum. As the questions were not simply copies of each other with different numbers, and the model showed some creativity, the revision quizzes could remain interesting for the children. I will revisit this idea at some point when the models are capable of higher numeral accuracy in image generation.
