πŸ€– Your Own AI Agent Using Gemini

Satyam Chouksey
3 min read

You know the schema of a table, but writing a whole backend from that schema is tedious: you end up writing the same thing every time, with only the column names and types changing.

In this post, we build an agent that does this work for AdonisJS.


What is the problem statement?

First of all, you need to be clear about what the problem is and write it down, then think about how you can solve it, and then work through it step by step.


Step 1: Think about the solution and its code

I created a main function where I configured the Gemini model.

import os

from langchain_google_genai import ChatGoogleGenerativeAI

# GOOGLE_API_KEY should be set in the environment; the fallback here is only a placeholder.
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY", "testApiKey")
llm = ChatGoogleGenerativeAI(model="models/gemini-1.5-flash", temperature=0)
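
Before wiring up any generators, it can help to sanity-check the configuration with a single call. This is just a quick verification step I would suggest, not part of the original flow:

# Quick sanity check: send a trivial prompt and print the model's reply.
print(llm.invoke("Reply with the single word OK.").content)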

Step 2: Create Functions

Now we create one generator function for each artifact we need:

  • Migrations

  • Models

  • Controller

  • Validator

  • Routes

Each of these functions returns a prompt that we send to the model to generate code. Here is the controller generator as an example:

def generate_controller_content(names: dict) -> str:
    prompt = f"""
    Generate the complete TypeScript code content for an AdonisJS 5 HTTP Controller file.
    The controller class name must be `{names['controller']}`.
    Import the `{names['model']}` model from `App/Models/{names['model']}`.
    Import the `{names['validator']}` validator from `App/Validators/{names['validator']}`.
    Import `HttpContextContract` from `@ioc:Adonis/Core/HttpContext`.
    Implement standard CRUD methods: `index`, `store`, `show`, `update`, `destroy`.
    Use the imported Model and Validator appropriately within the methods. Handle potential errors (like record not found). Use async/await.
    Generate only the raw TypeScript code for the file. Do not include markdown fences like ```typescript or ```.
    The output must start directly with imports or comments.
    """
    return generate_code(prompt)
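
The migration, model, and validator generators follow the same pattern, except that they also receive the column definitions. As a minimal sketch (my own illustration, not the exact prompt from the project), the model generator could look like this:

def generate_model_content(names: dict, columns_str: str) -> str:
    # Sketch of a model generator; the exact prompt wording is up to you.
    prompt = f"""
    Generate the complete TypeScript code content for an AdonisJS 5 Lucid Model file.
    The model class name must be `{names['model']}` and the table name is `{names['table']}`.
    Import `BaseModel` and `column` from `@ioc:Adonis/Lucid/Orm`.
    Declare a column property for each of these columns: {columns_str}.
    Also declare `id`, `createdAt`, and `updatedAt` columns.
    Generate only the raw TypeScript code for the file. Do not include markdown fences.
    The output must start directly with imports or comments.
    """
    return generate_code(prompt)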

Next, we create a helper that sends a prompt to the model and cleans up the response.

def generate_code(prompt: str) -> str:
    """Helper to run LLM prediction with error handling and cleanup."""
    if not llm:
        return f"// LLM not initialized. Prompt was:\n// {prompt}"
    try:
        response = llm.invoke(prompt)
        generated_content = ""
        if hasattr(response, 'content'):
            generated_content = response.content
        else:
            print(f"Warning: Unexpected LLM response structure: {response}")
            generated_content = str(response)  # Or handle error

        # --- Clean the output ---
        cleaned_content = clean_code_output(generated_content)
        return cleaned_content

    except Exception as e:
        print(f"Error during LLM generation: {e}")
        return f"// Error generating code. Prompt was:\n// {prompt}\n// Error: {e}"

Step 3: Ask for input

Ask the developer for the table name and its schema.

user_input = input("Enter table name and columns: ")
# e.g. product, name:string, price:number, description:text

Now we process the input:

table_input_name, columns_str = parse_input(user_input)
names = get_adonis_names(table_input_name)

# `time` must be imported at the top of the script for the migration timestamp below.
generators = {
    "Migration": (generate_migration_content, f"database/migrations/{int(time.time() * 1000)}_create_{names['table']}_table.ts"),
    "Model": (generate_model_content, f"app/Models/{names['model']}.ts"),
    "Validator": (generate_validator_content, f"app/Validators/{names['validator']}.ts"),
    "Controller": (generate_controller_content, f"app/Controllers/Http/{names['controller']}.ts"),
}
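
parse_input and get_adonis_names are small helpers that are not shown above. A minimal sketch, assuming the comma-separated input format from the example (the naming rules here are deliberately naive; real AdonisJS pluralisation conventions can be more involved):

def parse_input(user_input: str) -> tuple[str, str]:
    """Split 'product, name:string, price:number' into ('product', 'name:string, price:number')."""
    table_name, _, columns = user_input.partition(",")
    return table_name.strip(), columns.strip()

def get_adonis_names(table_input_name: str) -> dict:
    """Derive simple AdonisJS-style names from the table name (simplified sketch)."""
    singular = table_input_name.rstrip("s") or table_input_name  # naive singularisation
    pascal = "".join(part.capitalize() for part in singular.split("_"))
    return {
        "table": table_input_name if table_input_name.endswith("s") else table_input_name + "s",
        "model": pascal,                        # e.g. Product
        "controller": f"{pascal}sController",   # e.g. ProductsController
        "validator": f"{pascal}Validator",      # e.g. ProductValidator
    }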

After getting the generated code and target path for each file, we write that code into our project.

for name, (gen_func, path) in generators.items():
    print(f"\nGenerating {name}...")
    # Pass names and columns appropriately
    if name in ["Migration", "Model", "Validator"]:
        content = gen_func(names, columns_str)
    else:  # Controller only needs names
        content = gen_func(names)

    if content and not content.startswith("// Error"):
        result = write_file(path, content)
        print(result)
    else:
        print(f"Failed to generate {name} content or generation resulted in error.")
        # Optionally print the error content for debugging
        if content and content.startswith("// Error"):
            print(content)

def write_file(path: str, content: str) -> str:
    """Writes content to a file, creating directories if needed."""
    try:
        # Ensure content is not empty after cleaning before writing
        if not content:
            return f"Warning: Generated content for {path} was empty after cleaning. File not written."
        path_obj = Path(path)
        path_obj.parent.mkdir(parents=True, exist_ok=True)
        path_obj.write_text(content, encoding='utf-8')
        return f"Successfully written to {path}"
    except Exception as e:
        return f"Error writing to {path}: {e}"

πŸ’‘ Final thoughts

This is how I created my first AI agent, which helps me automate backend code generation from a schema.

If you have any thoughts on this, let me know in the comments.
