AI Agents for Beginners

  • What are AI agents

  • Parts of an AI agent

  • Code sample and demonstration

Large Language Model:-

Identifies the task requested by the user, creates a plan to complete that task, and performs the actions in the plan.

Memory:-

Short-term memory is the conversation between the user and the agent; long-term memory is the collection of data that allows the agent to improve over time at completing tasks.

Tools:-

Different services, accessed via APIs, that perform actions or provide data to help determine what action to take. These are functions the agent can run to gather or send information, and combining all of these pieces is what makes an agent.

An agent uses the LLM to recognize the task the user wants to complete, identifies which available tools are needed to complete that task, and uses memory to gather the information and data needed to complete it.
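These pieces can be sketched as a simple loop. This is a conceptual illustration only, not AutoGen code: the llm_plan stub stands in for a real model call, and get_weather is a made-up tool.

```python
# Conceptual agent loop: an "LLM" plans, a tool acts, memory records the result.
# llm_plan is a stand-in for a real model call; it just matches keywords.

def llm_plan(task: str, available_tools: dict):
    for name in available_tools:
        if name in task:
            return name
    return None

def get_weather() -> str:
    # A tool: a function the agent can call, e.g. wrapping a weather API.
    return "Sunny, 25°C in Paris"

tools = {"weather": get_weather}
memory = []  # short-term memory: the running conversation

def run_agent(task: str) -> str:
    memory.append(("user", task))
    tool_name = llm_plan(task, tools)   # 1. LLM identifies the task and a plan
    if tool_name:
        result = tools[tool_name]()     # 2. perform the action with a tool
    else:
        result = "I need more information."
    memory.append(("agent", result))    # 3. record the outcome in memory
    return result

print(run_agent("check the weather for my trip"))
```

A real agent replaces the keyword match with an LLM call and persists memory across sessions, but the loop of plan, act, and remember is the same.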

AutoGen Basic Sample

In this code sample, you will use the AutoGen AI Framework to create a basic agent.

The goal of this sample is to show you the steps that we will later use in the additional code samples when implementing the different agentic patterns.

Import the Needed Python Packages

import os
from dotenv import load_dotenv

from autogen_agentchat.agents import AssistantAgent
from autogen_core.models import UserMessage
from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential
from autogen_core import CancellationToken

from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console

Create the Client

In this sample, we will use GitHub Models for access to the LLM.

The model is defined as gpt-4o-mini. Try changing the model to another model available on the GitHub Models marketplace to see the different results.

As a quick test, we will just run a simple prompt: "What is the capital of France?"

load_dotenv()
client = AzureAIChatCompletionClient(
    model="gpt-4o-mini",
    endpoint="https://models.inference.ai.azure.com",
    # To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
    # Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
    credential=AzureKeyCredential(os.getenv("GITHUB_TOKEN")),
    model_info={
        "json_output": True,
        "function_calling": True,
        "vision": True,
        "family": "unknown",
    },
)

result = await client.create([UserMessage(content="What is the capital of France?", source="user")])
print(result)
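The client reads your PAT from the GITHUB_TOKEN environment variable, which load_dotenv() loads from a local .env file in your project directory. A minimal .env might look like this (the token value is a placeholder):

```
GITHUB_TOKEN=ghp_your_personal_access_token_here
```

Keep this file out of version control (for example, by adding .env to .gitignore), since the token grants access to your account.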

Defining the Agent

Now that we have set up the client and confirmed that it is working, let us create an AssistantAgent. Each agent can be assigned a:

  • name - A shorthand name that is useful for referencing it in multi-agent flows.

  • model_client - The client you created in the earlier step.

  • tools - Available tools that the agent can use to complete a task.

  • system_message - The metaprompt that defines the task, behavior, and tone of the LLM.

You can change the system message to see how the LLM responds. We will cover tools in Lesson #4.

agent = AssistantAgent(
    name="assistant",
    model_client=client,
    tools=[],
    system_message="You are a travel agent that plans great vacations",
)
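Although tools=[] is empty here, AutoGen lets you pass plain Python functions in the tools list, and the agent can call them while completing a task (covered in Lesson #4). A hypothetical sketch of what such a tool could look like; the function name, destinations, and prices are made up for illustration:

```python
import asyncio

# A hypothetical tool the agent could call to look up flight prices.
# Type hints and the docstring help the LLM decide when and how to use it.
async def get_flight_price(destination: str) -> str:
    """Return an estimated round-trip flight price for a destination."""
    prices = {"Barcelona": "$450", "Bali": "$900"}  # placeholder data
    price = prices.get(destination, "unknown")
    return f"A round-trip flight to {destination} costs about {price}."

# It would then be registered with the agent as: tools=[get_flight_price]
print(asyncio.run(get_flight_price("Bali")))
```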

Run the Agent

The function below will run the agent. We use the on_messages method to update the agent's state with the new message.

In this case, we update the state with a new message from the user which is "Plan me a great sunny vacation".

You can change the message content to see how the LLM responds differently.

from IPython.display import display, HTML


async def assistant_run():
    # Define the query
    user_query = "Plan me a great sunny vacation"

    # Start building HTML output
    html_output = "<div style='margin-bottom:10px'>"
    html_output += "<div style='font-weight:bold'>User:</div>"
    html_output += f"<div style='margin-left:20px'>{user_query}</div>"
    html_output += "</div>"

    # Execute the agent response
    response = await agent.on_messages(
        [TextMessage(content=user_query, source="user")],
        cancellation_token=CancellationToken(),
    )

    # Add agent response to HTML
    html_output += "<div style='margin-bottom:20px'>"
    html_output += "<div style='font-weight:bold'>Assistant:</div>"
    html_output += f"<div style='margin-left:20px; white-space:pre-wrap'>{response.chat_message.content}</div>"
    html_output += "</div>"

    # Display formatted HTML
    display(HTML(html_output))

# Run the function
await assistant_run()
Written by

PRERANA SURYAKANT KARANDE

Engineer at Tata Communications Limited. Enthusiastic about Python libraries and modules, currently gaining hands-on experience with Docker, Jenkins, CI/CD, and Ansible. Also started learning about AWS cloud and DevOps.