RockS.T.A.R. AI Interviewer

Joshua Oliphant


Evergreen

Created: 2023-11-11 Updated: 2024-09-27

Tags: software engineering, interview preparation, S.T.A.R. method, OpenAI API, web development

A couple of weeks ago, a company I was interviewing with made it clear that they wanted my responses in the S.T.A.R. (Situation, Task, Action, Result) format. I had the idea that I could use ChatGPT to practice this, so after asking whether it was familiar with S.T.A.R., I uploaded my resume, copied in the job description, and prompted with:

I am a software engineer and I'm going to interview for a company that uses this interview method. I'm going to give you my resume and their job description and I want you to help me create some scenarios so that I can practice for the interview.

ChatGPT answered with a breakdown of five different situations I could think through, so then I said:

Okay, so you play the role of the interviewer and ask me questions, and I will give an answer using S.T.A.R., then you help me improve my answers.

This was solid gold. After this, we carried out a great conversation, where ChatGPT asked me questions relevant to my resume and the job description. I answered, and ChatGPT gave me great feedback, sometimes rewriting my answers more elegantly than I had. Such great practice gave me an idea: I've been trying to learn how to use the OpenAI APIs, and this seemed like a great tool to build.

I had already been learning Flask, so that was a no-brainer for the backend. But I've done hardly any front-end web development at all; the only app I've built with a frontend used Textual, for my Raindrop bookmark viewer.

To keep things super minimal so that I could be effective without having to learn too much extra, I decided it was finally time to learn how to use htmx. From the htmx main page:

htmx gives you access to AJAX, CSS Transitions, WebSockets, and Server-Sent Events directly in HTML, using attributes so that you can build modern user interfaces with the simplicity and power of hypertext

Basically, this allows me to create a responsive chat interface without bothering with JavaScript. On top of that, I had seen LangUI a while back and bookmarked it, and it gave me the HTML skeleton for my app, since it has a bunch of variations on common chat interface components. LangUI also uses Tailwind CSS, which I have been wanting to learn since a friend has been talking it up so much.

Then, OpenAI had their Dev Day and released the Assistants API, which is exactly what I needed to make this work. I had been using the Chat API, but getting it to work with the S.T.A.R. format was a bit of a pain. I had a 3-hour flight from Tucson to Seattle that Tuesday, so I read the Assistants documentation and banged out a super basic chat that I could interact with via the console. I only had my iPad with me, but I got it working with Replit.

Today, I was able to tie in the LangUI components and get the basic chat working. I still need to add the ability to upload a resume and job description, but I'm super happy with the progress so far. It really didn't take a lot of work to tie it together with htmx.

Let's review the basic structure and components I have so far.

I copied over the "Minimal" Prompt Container component from LangUI. The modifications I made to get it working with htmx were:

  • I gave the prompt messages div a unique ID of `id="messages"` so that I could target it with htmx.
  • I added `hx-post="/ask_star" hx-swap="afterend" hx-target="#messages"` to the input form, so that when the user submits a message, htmx posts it to the `/ask_star` endpoint and swaps the response in after the messages div.
  • I deleted the sample messages that were in the messages div.

I extracted one of the input and response messages from the example messages div and replaced the text with `{{ user_input }}` and `{{ gpt_output }}`, so that in the Flask app I can fill those in with the user input and the GPT response.
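To make the wiring concrete, here's a minimal, self-contained sketch. The markup is a hypothetical stand-in for the LangUI component (not the actual Tailwind-styled HTML), and it uses Flask's `render_template_string` so it runs as a single file; the echo endpoint is just a placeholder for the real `/ask_star` view shown next.

```python
# A minimal sketch of the htmx wiring -- stand-in markup, not the actual
# LangUI component.
from flask import Flask, render_template_string, request

app = Flask(__name__)

# Stand-in for the prompt container: the #messages div that htmx targets,
# plus the input form carrying the hx-* attributes described above.
PAGE = """
<script src="https://unpkg.com/htmx.org"></script>
<div id="messages"></div>
<form hx-post="/ask_star" hx-target="#messages" hx-swap="afterend">
  <input type="text" name="question" placeholder="Your answer...">
  <button type="submit">Send</button>
</form>
"""

# Stand-in for output.html: one user message and one assistant message,
# with the Jinja placeholders the Flask view fills in.
OUTPUT = """
<div class="user">{{ user_input }}</div>
<div class="assistant">{{ gpt_output }}</div>
"""

@app.route("/")
def index():
    return render_template_string(PAGE)

@app.route("/ask_star", methods=["POST"])
def ask_star():
    # Placeholder: just echoes the input. The real view, shown next,
    # calls the OpenAI Assistant instead.
    question = request.form.get("question")
    return render_template_string(OUTPUT, user_input=question, gpt_output="...")
```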

```python
from flask import Flask, request, render_template

app = Flask(__name__)
chatbot = Chatbot()  # the Chatbot class is shown further down

@app.route('/ask_star', methods=['POST'])
def ask_star():
    question = request.form.get('question')
    app.logger.info(f"question: {question}")

    # Use the chatbot to get the response
    chatbot.create_message(question)
    run_created = chatbot.create_run()
    chatbot.wait_for_run_completion(run_created)
    messages = chatbot.list_messages()

    # Extract the assistant's answer from the messages
    answer = next((m.content for m in messages if m.role == 'assistant'), None)
    app.logger.debug(f"answer: {answer[0].text.value}")
    return render_template('output.html',
                           user_input=question,
                           gpt_output=answer[0].text.value)
```

The `/ask_star` endpoint is where the magic happens. It takes the user input from the form, sends it to the OpenAI Assistant, and extracts the response from the assistant's messages. It then renders the `output.html` template, which is just the extracted input and response messages.

Using the Assistants API was straightforward. I'm using the thread object to keep track of the conversation: I create a message with the user input, then create a run, and wait for the run to complete. Once the run is complete, I list the messages and extract the response from the assistant.

```python
import os
import time

import openai


class Chatbot:

    def __init__(self):
        openai.api_key = os.getenv("OPENAI_API_KEY")
        self.assistant = openai.beta.assistants.create(
            name="S.T.A.R. interviewing assistant",
            instructions="""You are a chatbot that interviews candidates
            for a job. Your interview style is to use S.T.A.R. (Situation,
            Task, Action, Result) to guide the candidate through a
            conversation. You should provide helpful feedback to the
            candidate.""",
            model="gpt-4-1106-preview")
        self.thread = openai.beta.threads.create()

    def create_message(self, user_input):
        message = openai.beta.threads.messages.create(
            thread_id=self.thread.id,
            role="user",
            content=f"{user_input}")
        return message

    def create_run(self):
        run_created = openai.beta.threads.runs.create(
            thread_id=self.thread.id,
            assistant_id=self.assistant.id,
        )
        return run_created

    def wait_for_run_completion(self, run_created):
        # Poll until the run finishes; a short sleep keeps the loop
        # from hammering the API.
        while True:
            run = openai.beta.threads.runs.retrieve(
                thread_id=self.thread.id,
                run_id=run_created.id
            )
            if run.status == "completed":
                break
            time.sleep(1)
        return run

    def list_messages(self):
        messages = openai.beta.threads.messages.list(
            thread_id=self.thread.id
        )
        return messages
```
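For reference, the console prototype I hacked together on the flight boiled down to a loop like the one below. This is a rough reconstruction using the Chatbot class above, not the exact code I wrote on Replit.

```python
# A rough reconstruction of the console prototype, driving the Chatbot
# class above from stdin.
if __name__ == "__main__":
    chatbot = Chatbot()
    while True:
        user_input = input("You: ")
        chatbot.create_message(user_input)
        run = chatbot.create_run()
        chatbot.wait_for_run_completion(run)
        # Messages come back newest-first, so the first assistant
        # message is the latest reply.
        messages = chatbot.list_messages()
        answer = next(m for m in messages if m.role == "assistant")
        print(f"Interviewer: {answer.content[0].text.value}")
```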

I'm sure the code could use some cleaning up, but it's been super fun to hack on. I'm going to keep working on this and will post updates as I go. Some ideas I have for future work:

  • Add the ability to upload a resume and job description
  • Export conversation
  • Persist threads
  • Add chat history view to review or continue prior interviews
  • Audio conversations