Build a C# Console Chatbot with Semantic Kernel & Azure OpenAI

Hey lovely readers,

This guide shows you how to connect Microsoft Semantic Kernel, keep your API key safe with User Secrets, and stream answers from GPT‑4o‑Mini inside a .NET console app.

1. Why use Semantic Kernel?

Semantic Kernel (SK) is a lightweight library that helps your code talk to language models. It works with Azure OpenAI, Azure AI Foundry models like Mixtral or Phi‑3, and even local models. The key benefits are:

  • Easy to swap models: you register each model as a service and switch when needed.
  • Built-in chat history: a ChatHistory object remembers what was said and lets you choose how much context to send.
  • Plugins: you can expose your own C# methods so the model can call them.
  • Model agnostic: one code base can run on GPT‑4o today and Llama 3 tomorrow.
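
To make "easy to swap models" concrete, here is a hedged sketch of registering two deployments side by side via the serviceId parameter of AddAzureOpenAIChatCompletion; the deployment names, endpoint, and key below are placeholders for your own resource, not values from this post:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Register two chat models on one kernel; serviceId lets you pick one later.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o-mini",
        endpoint: "https://myfoundryresource.openai.azure.com/",
        apiKey: "<your-key>",
        serviceId: "mini")
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",
        endpoint: "https://myfoundryresource.openai.azure.com/",
        apiKey: "<your-key>",
        serviceId: "full")
    .Build();

// Resolve whichever model you want by its serviceId.
var miniChat = kernel.GetRequiredService<IChatCompletionService>("mini");
```

The rest of your code only talks to IChatCompletionService, so switching models is a one-line change.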

In this post we keep things small: no plugins, no tools. We just stream answers from GPT‑4o‑Mini to the console, so you can see how quick and easy it is to set up a project with Semantic Kernel.

2. What you need

| Tool | Version I used | Note |
| --- | --- | --- |
| .NET SDK | 8.0 or 9.0 | Everything above 7 works; you can also try the .NET 10 preview |
| Azure subscription | (any) | Needed to create an Azure OpenAI resource and an Azure AI Foundry project |
| GPT‑4o‑Mini deployment | Global Standard | Real-time chat requires this type, not Batch Standard |

2.1. Get started with a console app

# Make a new console app
dotnet new console -n SKConsole
cd SKConsole

# Add NuGet packages
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.AzureOpenAI
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.UserSecrets

2.2 Save your API key with User Secrets

dotnet user-secrets init                      # adds <UserSecretsId> to the .csproj
dotnet user-secrets set OPENAI_API_KEY "<your-key>"

dotnet run can now read the key at runtime, and the secret stays out of Git.
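
If you're curious where the secret actually lands: on Windows it lives at %APPDATA%\Microsoft\UserSecrets\<UserSecretsId>\secrets.json, and on Linux and macOS at ~/.microsoft/usersecrets/<UserSecretsId>/secrets.json. The file is plain JSON:

```json
{
  "OPENAI_API_KEY": "<your-key>"
}
```

It is stored unencrypted, which is fine for local development but not a production secret store.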

3. The whole program

// Program.cs  (top level statements)

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.Extensions.Configuration;

// 1) Load User Secrets and other config
var configuration = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

// 2) Build the kernel and add our GPT‑4o‑Mini deployment
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o-mini", // name shown in the Azure AI Foundry portal
        endpoint: "https://myfoundryresource.openai.azure.com/", // endpoint in the Azure AI Foundry resource overview
        apiKey: configuration["OPENAI_API_KEY"]!) // API key in the Azure AI Foundry resource
    .Build();

var chat = kernel.GetRequiredService<IChatCompletionService>();

const string systemPrompt = "You are a concise assistant."; // alter this to change the personality of your chatbot
Console.WriteLine("Ask me anything (press Enter on empty line to quit)");

while (true)
{
    string? user = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(user)) break; // if Enter is pressed without input, exit the program

    // Build a short chat history: system + last user message
    var turn = new ChatHistory();
    turn.AddSystemMessage(systemPrompt);
    turn.AddUserMessage(user);

    await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(turn))
        Console.Write(chunk.Content);

    Console.WriteLine();
}

What the important lines do

| Line | Purpose |
| --- | --- |
| AddUserSecrets<Program>() | Loads secrets.json into IConfiguration so you can read the key. |
| AddAzureOpenAIChatCompletion | Registers the chat model as a service. |
| deploymentName | Must match the name you see in the Azure AI Foundry portal. |
| endpoint | Use the openai.azure.com host with this builder. |
| Chat loop | Sends the system prompt and the latest user question. The model replies in streaming mode. |
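
One thing worth noticing: the loop builds a fresh ChatHistory per question, so the bot forgets earlier turns. If you want it to remember, one possible variation (reusing the chat service from the program above) keeps a single history for the whole session:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// One history for the whole session: the model sees earlier turns as context.
var history = new ChatHistory();
history.AddSystemMessage("You are a concise assistant.");

while (true)
{
    string? user = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(user)) break;

    history.AddUserMessage(user);

    var reply = new System.Text.StringBuilder();
    await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(history))
    {
        Console.Write(chunk.Content);
        reply.Append(chunk.Content);
    }
    Console.WriteLine();

    // Store the assistant's answer so the next turn can refer back to it.
    history.AddAssistantMessage(reply.ToString());
}
```

Keep in mind that the full history is sent on every turn, so long sessions cost more tokens; the reducers mentioned in the next steps help with that.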

4. Frequent errors and quick fixes

| Error | Why it happens | How to fix |
| --- | --- | --- |
| 404 Resource not found | Wrong deployment name or wrong endpoint | Use the openai.azure.com URL and the exact deployment name. |
| 400 OperationNotSupported | You used a Batch deployment with the chat API | Deploy the model as Global Standard. |
| API key is null | Key not loaded before use | Make sure AddUserSecrets comes before you read the key. |

5. Next steps

  1. Switch models: Use AddAzureAIInferenceChatCompletion and the Foundry endpoint to chat with Phi‑3, Llama 3, and more.
  2. Add memory: Store the full chat and use reducers to keep token costs low.
  3. Try plugins and function calls: Mark C# methods with [KernelFunction] so the model can run them.
  4. Build agents: Combine many skills to reach goals automatically.
  5. Use local models: Connect SK to Ollama or LMStudio with a simple HTTP wrapper.
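
For step 3, a plugin can be as small as one attributed method. A hedged sketch (the plugin and method names here are made up for illustration):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// A plugin is just a class; [KernelFunction] marks methods the model may call,
// and the Description tells the model what the function is for.
public class TimePlugin
{
    [KernelFunction, Description("Returns the current local time as HH:mm.")]
    public string GetCurrentTime() => DateTime.Now.ToString("HH:mm");
}
```

You would register it on the builder before Build(), e.g. builder.Plugins.AddFromType<TimePlugin>(), and enable automatic function calling in the prompt execution settings so the model can decide when to invoke it.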

That's a wrap!

If you liked this guide, please leave a comment or reach out on social media. I plan to write more posts about memory, plugins, agents, local models, and other cool parts of Semantic Kernel. Stay tuned and happy coding! 🍀


Written by

Louëlla Creemers

Heya! I'm Lou. I love coding and trying out new technology. I've been interested in a lot of different tech fields for 5 years now. I like to blog about things I'm currently learning or about topics I hear or speak about on other social media platforms. I'm also a Microsoft MVP. Want to hear more from me? Follow me or check me out on Twitter.