How to Build AI Agents in Your Rails App (2025 Guide)


AI agents are no longer theoretical experiments. Thanks to the rapid evolution of Large Language Models (LLMs) like OpenAI’s GPT-4o, Claude 3.5, Google Gemini, and open-source models like LLaMA 3, Rubyists now have powerful ways to integrate autonomous AI logic directly inside their Rails applications.
As someone building with Ruby on Rails daily, I wanted to share how I’ve explored building AI agents in Rails using the latest gems in the ecosystem. Everything I share here is based on publicly available tooling as of April 14, 2025. No fluff. Just what actually works.
Let’s walk through the tools, approaches, and ideas that help bring AI agents to life in Ruby and Rails.
🔧 The Gems Powering AI Agents in Ruby on Rails
Here are the three most actively developed and relevant gems I’ve explored when building LLM-powered agents in Rails:
1. raix
raix is a Ruby gem that helps you orchestrate multi-step LLM chains and tools. Think of it as a LangChain-inspired agent framework made for Ruby. It supports OpenAI models, tool usage, function calling, memory, and streaming output.
With raix, you can do things like:
Build a multi-step reasoning chain for an AI assistant
Add tools like web search, ActiveRecord access, or file reading
Support memory using Redis or a vector database
The interface is designed to be easy for Ruby developers, and it fits nicely into Rails apps using background jobs or service objects.
2. activeagent
activeagent adds AgentController functionality for server-side LLM task execution. It leans into ActiveRecord conventions and Rails-style service layers.
Key features include:
Agent lifecycle management (start, pause, resume)
Tool usage tied to Rails models or controllers
Evented architecture via ActiveSupport::Notifications
This gem makes it easier to integrate LLM-powered agents as Rails services—think autonomous workflows or assistants that can read/write to your app’s models.
3. sublayer
Sublayer is a model-agnostic AI agent framework for Ruby. It provides core abstractions like:
Generator: focuses on content generation from prompts
Task: for procedural multi-step logic
Action: for calling Ruby methods or external tools
Agent: the orchestrator that uses the above to reason and act
What makes Sublayer stand out is its support for any LLM provider, including:
OpenAI (GPT-4o)
Claude (Anthropic Claude 3.5)
Gemini (experimental support for Gemini 1.5 Pro)
Local models (via HTTP APIs)
The framework lets you wire up your own providers and configure which model to use per use-case.
👉 Repo: github.com/sublayerapp/sublayer
🤖 What Is an AI Agent in Rails?
In simple terms, an AI agent is an LLM-driven service that can perceive input, reason, and act autonomously. In a Rails context, that might mean:
Reading user input from a model or controller
Using a prompt or instruction to query an LLM
Acting based on the LLM output (e.g., creating a record, sending a message)
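The perceive–reason–act loop described above can be sketched in plain Ruby. This is a minimal illustration, not any gem's API: the block passed to the constructor stands in for whatever LLM client you actually use.

```ruby
# Minimal perceive–reason–act loop. The LLM call is stubbed with a block;
# in a real app you would swap in an OpenAI or Anthropic client call.
class SimpleAgent
  def initialize(&llm)
    @llm = llm # stand-in for a real LLM client
  end

  # Perceive input, ask the model to reason, then act on its answer.
  def run(input)
    prompt = "User said: #{input}. Reply with ACTION:<name> or ANSWER:<text>."
    reply = @llm.call(prompt)            # reason
    if reply.start_with?("ACTION:")      # act
      { action: reply.delete_prefix("ACTION:").strip }
    else
      { answer: reply.delete_prefix("ANSWER:").strip }
    end
  end
end

agent = SimpleAgent.new { |prompt| "ANSWER:Hello from the stubbed model" }
result = agent.run("Hi")
```

The gems below wrap this same loop with tool dispatch, memory, and streaming on top.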
You could build:
Customer support bots that can access your DB
AI pair programmers for your internal tools
Sales assistants that automate emails based on CRM data
Think of AI Agents as intelligent assistants that live inside your app — they can:
Execute tasks like sending emails, querying databases, or triggering workflows.
Hold contextual memory across conversations.
Make decisions and take action without constant human oversight.
These agents are especially powerful when paired with LLMs like GPT-4 Turbo or Claude, giving them natural language understanding and reasoning capabilities.
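The "contextual memory" idea can be sketched with an in-process store; this is an illustrative plain-Ruby version, with Redis playing the same role across processes in production:

```ruby
# In-memory conversation store keyed by conversation id.
# Swap the Hash for Redis (or a vector DB) in production.
class ConversationMemory
  def initialize
    @store = Hash.new { |h, k| h[k] = [] }
  end

  def remember(conversation_id, role, text)
    @store[conversation_id] << { role: role, text: text }
  end

  # Return the last few turns to prepend to the next prompt.
  def context(conversation_id, limit: 5)
    @store[conversation_id].last(limit)
  end
end

memory = ConversationMemory.new
memory.remember(42, :user, "Where is my order?")
memory.remember(42, :assistant, "It ships tomorrow.")
```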
With tools like raix, activeagent, and sublayer, this is now doable with familiar Ruby code.
🛠️ Setting Up Your First AI Agent in Rails
Here’s a high-level walkthrough using the gems:
Step 1: Add the gems

```ruby
# Gemfile
gem 'sublayer'
```

Run:

```shell
bundle install
```

Set up your .env with the required keys:

```
OPENAI_API_KEY=your_openai_key
GEMINI_API_KEY=your_google_key
ANTHROPIC_API_KEY=your_claude_key
```
Step 2: Configure Your AI Provider
With Sublayer:

```ruby
Sublayer.configure do |config|
  config.ai_provider = Sublayer::Providers::OpenAI
  config.ai_model = "gpt-4o"
end
```
You can switch to Claude or Gemini easily by changing the provider and model strings.
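One way to pick the provider and model per use-case is a small lookup table in front of your configuration. This is an illustrative pattern, not part of Sublayer's API, and the model names are examples:

```ruby
# Per-use-case model routing; the task names and model strings are
# illustrative — adjust to whatever your app and providers support.
MODEL_FOR_TASK = {
  email_drafting: { provider: :openai,    model: "gpt-4o" },
  summarization:  { provider: :anthropic, model: "claude-3-5-sonnet" },
  default:        { provider: :openai,    model: "gpt-4o" }
}.freeze

def model_for(task)
  MODEL_FOR_TASK.fetch(task, MODEL_FOR_TASK[:default])
end
```

A lookup like this keeps latency/cost trade-offs in one place instead of scattered across services.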
📦 Example Use Case: Email Drafting Agent
Let’s say we want an agent that helps generate personalized emails from a client’s recent activity.
With raix:

```ruby
chain = Raix::Chain.new do
  tool :fetch_user_activity
  tool :generate_email_text, model: "gpt-4o"
end

output = chain.run({ user_id: 123 })
```
With sublayer, you'd do:

```ruby
class EmailGenerator < Sublayer::Generators::Base
  def generate(context)
    prompt("Write a follow-up email based on: #{context[:activity]}")
  end
end
```
This is all structured, testable Ruby code.
🧠 Memory, Tools, and Decision-Making
You can add memory (using Redis or vector DBs) to your agent so it remembers past context. Tools can be anything from database access, file I/O, or third-party APIs.
For example, a tool might look like:

```ruby
class FetchUserActivity < Raix::Tool
  def call(user_id:)
    User.find(user_id).recent_actions
  end
end
```
You then chain this into your agent logic using raix or sublayer.
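Under the hood, tool chaining boils down to a dispatch table: the agent looks a tool up by name and calls it with keyword arguments. A framework-free sketch of that idea (the registry class here is illustrative, not from either gem):

```ruby
# Framework-free tool registry mirroring what raix/sublayer do for you:
# map a tool name to a callable, then dispatch by name with kwargs.
class ToolRegistry
  def initialize
    @tools = {}
  end

  def register(name, callable)
    @tools[name.to_sym] = callable
  end

  def dispatch(name, **args)
    tool = @tools.fetch(name.to_sym) { raise ArgumentError, "unknown tool #{name}" }
    tool.call(**args)
  end
end

tools = ToolRegistry.new
# A lambda stands in for a real ActiveRecord query here.
tools.register(:fetch_user_activity, ->(user_id:) { ["logged_in", "viewed_pricing"] })
activity = tools.dispatch(:fetch_user_activity, user_id: 123)
```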
GPUs or APIs? What’s Best in 2025
As of 2025, Rails devs are choosing between:
API-based models (e.g., OpenAI, Anthropic): Fast to integrate, scalable, and reliable.
Self-hosted models on GPU (e.g., LLaMA 3): Lower cost long-term and more privacy control, but require DevOps investment.
With new gems like torch-rb and libraries like llama-cpp-ruby, even Rubyists can now fine-tune or run LLMs on local infrastructure. Personally, I use API-based models for prototyping, and GPU-hosted LLMs in production for high-privacy environments.
⚙️ Best Practices
Rate-limit and throttle your LLM calls to stay within budget
Use background jobs (e.g., Sidekiq) for long-running agents
Log all prompts and responses for auditing
Use eval chains or RSpec to test agent output for reliability
Experiment with models (Claude vs GPT-4o vs Gemini) based on latency, cost, and quality
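The rate-limiting advice can be as simple as a token bucket in front of your LLM client. A sketch (not thread-safe as written; tune capacity and refill rate to your budget):

```ruby
# Simple token-bucket limiter for LLM calls. The injectable clock makes
# the refill logic easy to test deterministically.
class TokenBucket
  def initialize(capacity:, refill_per_sec:,
                 clock: -> { Process.clock_gettime(Process::CLOCK_MONOTONIC) })
    @capacity = capacity
    @tokens   = capacity.to_f
    @refill   = refill_per_sec
    @clock    = clock
    @last     = clock.call
  end

  # Returns true and spends a token if a call is allowed right now.
  def allow?
    now = @clock.call
    @tokens = [@capacity, @tokens + (now - @last) * @refill].min
    @last = now
    return false if @tokens < 1
    @tokens -= 1
    true
  end
end

bucket = TokenBucket.new(capacity: 2, refill_per_sec: 0.5)
```

In a Rails app you would check `bucket.allow?` inside the Sidekiq job before making the API call, and reschedule the job when it returns false.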
🚀 Final Thoughts
Building AI agents in Rails is no longer just a dream. With gems like raix, activeagent, and sublayer, we now have robust building blocks to define structured, intelligent behavior using LLMs inside our Ruby on Rails apps.
The Ruby ecosystem might be late to the AI game compared to Python, but it’s catching up fast—and clean DSLs make the experience delightful.
Whether you’re building an assistant, a smart CRM tool, or just prototyping an idea, these libraries give you a powerful foundation to start experimenting with real AI inside your Rails stack.
TL;DR: Tools to Get Started
| Tool | Description | Link |
| --- | --- | --- |
| Raix | LLM agent framework for Ruby | Raix on RubyGems |
| ActiveAgent | Declarative AI agents, Rails-style DSL | ActiveAgent on RubyGems |
| Sublayer | LLM-native Rails engine for AI workflows | Sublayer GitHub |
Want help integrating AI agents into your Rails product? I’m happy to connect, pair, or even geek out over prompts. Drop me a line.
If you're planning to integrate AI agents using GPT, Claude, Gemini, or local LLMs like LLaMA, Ruby and Rails are now ready to roll with modern tools.
Want help or ideas? Just reach out. Let’s build smart things with Ruby. 🔥
Written by Chetan Mittal

I stumbled upon the Ruby on Rails beta in 2005 and have been using it ever since. I have also trained many Rails developers all over the globe. Currently, I provide consulting and advise companies on how to upgrade, secure, optimize, monitor, modernize, and scale their Rails apps.