So I Built My Own Social Media AI Crew Because I Didn't Want to Pay for Jasper.ai


This all really started with taking free social media marketing webinars from SCORE over the last six months or so. I had just started freelancing as a tutor after getting laid off, and had picked up several clients while looking for a new job, but I wanted to learn how to market my services nevertheless. I found the marketing strategies very helpful, and I learned a lot about which platforms I want to be on.
But what I also found was that almost all of them were overwhelming to implement and very time consuming, as you either had to do a lot of it manually using spreadsheets or fork out the money for expensive AI tools.
Around this time, I had also learned another programming language, Python. With all that “free time,” I decided to dive head first into building agentic AI crews by taking free courses on DeepLearning.ai.
I built my first one, a vacation crew. And then I built another that wrote blog posts based on research results from Exa and trends from xAI’s Grok, but I didn’t really use it, because I like writing my own.
What I don’t like to do is post on social media.
It’s one of the things that I least like about this digital age, but you need other people to read your stuff or buy whatever it is that you are selling, so you kind of need to do it.
But for me, it’s right up there with cleaning the toilet. 🚽
I know that there might be some folks who just “love” all things social media marketing. But I truly don’t.
On the bright side, I finally found a perfect use case for building yet another AI crew, one where I could put all the tips and strategies from those SCORE webinars to work.
The only platform that does this, that I’m aware of, is Jasper.ai. And though I did try their free 7-day trial after one of those webinars, I absolutely refuse to spend that much money per month on anything.
So I decided to build my own crew to market on the socials in a way that doesn’t sound like it was written by a bot, where all I have to do is edit a little before posting. Plus I can add as many knowledge assets as I want, since I had blasted past Jasper’s limit, which is only 5 assets at the lowest tier.
Plus my crew is dirt cheap compared to using Jasper.ai’s lowest tier at $60 per month — I literally spend a few pennies per API call.
Given that I hope to ramp up to writing 4 articles and creating about the same number of YouTube videos every month, it’ll come out to less than a tall latte with alternative milk from Starbucks!
I also wanted my AI crew to use a social media strategy, so they don't just create posts for the sake of creating them, and I don't have to manage it. BUT they also need to use details about the brand AND have specialist-specific knowledge regarding the various platforms I will be using—like how many hashtags to use per platform, for example.
You wouldn't post the same way on X as you would on Instagram, just like you wouldn't for LinkedIn or even when creating content for YouTube — which is a beast all of its own.
So I dug around CrewAI’s documentation and figured out how best to do it.
What I discovered is that I needed a pretty small crew with Pydantic structured outputs, as I need the results to be reproducible. Plus there’s a bit of simple classification they’ll need to do, and a couple of the agents are required to output their results as JSON.
So it will be Option 2 for my project, which according to the documentation, is a Low Complexity, High Precision crew:
Simple workflows that require exact, structured outputs ✅
Need for reproducible results ✅
Limited steps, but high accuracy requirements ✅
Often involves data processing or transformation ✅
Recommended Approach: Flows with direct LLM calls or simple Crews with structured outputs
Example Use Cases:
Data extraction and transformation
Form filling and validation
Structured content generation (JSON, XML)
Simple classification tasks
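Since structured outputs are doing the heavy lifting here, it’s worth showing what that looks like. Below is a minimal sketch using a hypothetical SocialPost schema — the field names are my own illustration, not my crew’s actual models. In CrewAI you hand a Pydantic model like this to a task via the output_pydantic parameter, and the agent has to return JSON matching it:

```python
from pydantic import BaseModel, Field

class SocialPost(BaseModel):
    """A hypothetical schema for one structured social media post."""
    platform: str = Field(description="Target platform, e.g. 'x' or 'linkedin'")
    post_text: str = Field(description="Draft copy for the post")
    hashtags: list[str] = Field(default_factory=list)

# In CrewAI, you would then attach the schema to a task, e.g.:
#
#   analysis_task = Task(
#       description="Draft a post promoting the article...",
#       expected_output="A structured social media post",
#       output_pydantic=SocialPost,
#   )

# Pydantic validates the payload, so malformed agent output fails early:
post = SocialPost(platform="x", post_text="New article is up!", hashtags=["#python"])
print(post.model_dump())
```

The nice side effect is that a bad field name or missing value raises a validation error immediately instead of silently producing a mangled post.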
As I need my crew to classify the content of my articles, I wanted to give them some guidelines on how to do this, so that they do it consistently and I don’t have to cram everything into an overly verbose task description.
Plus it saves on bandwidth and burning through those pesky LLM API credits, by giving them a single source of truth about how to do their task, so they don’t have to go out to the interwebs and try to figure out exactly what I want and what the heck I am referring to EVERY. SINGLE. TIME.
Here’s where Knowledge comes in. Because the analysis and content creation tasks I want them to perform are primarily creative and analytical, they need structure and guidance on how best to do them. I also need this information for tracking the social media posts they output (KPIs) in my content calendar and for their future iterations.
Knowledge in CrewAI is a powerful system that allows my AI agents to access and utilize external information sources during their tasks. Think of it as giving your agents a reference library or job manuals that they can consult while working.
Key benefits of using Knowledge are that it:
Enhances agents with domain-specific information,
Supports decisions with real-world data,
Maintains context across conversations, and
Grounds their responses in factual information
The Code
So this is the part of the article where I talk code.
I decided to add the knowledge directory at the project level. Please note that by default it will be stored in the same storage location as memory, which is platform dependent; see the documentation for where exactly, because it’s different for every operating system.
import os
from pathlib import Path
# Store knowledge in project directory (Jupyter-safe)
project_root = Path.cwd()
# But change the line above to the following when you add it to Streamlit app:
# project_root = Path(__file__).parent
knowledge_dir = project_root / "knowledge_storage"
os.environ["CREWAI_STORAGE_DIR"] = str(knowledge_dir)
# Now all knowledge will be stored in your project directory
And since I am first testing this out in my Jupyter notebook, I have to use the Jupyter-safe way of storing them, which is why I’m using project_root = Path.cwd(). Otherwise, I would use project_root = Path(__file__).parent.
Now that I have the directory where I want to store it at, I can now save my text files directly into all of my agents’ wee, little AI brains.🧠🧠🧠
I ran this code to see if the LLM was working and if it knew “where” my knowledge directory was located:
import os
from openai import OpenAI
from dotenv import load_dotenv

def test_llm_connection():
    """Test if the LLM client can be initialized and make a simple request."""
    # Load environment variables
    load_dotenv()

    # Print the current working directory
    print(f"Current working directory: {os.getcwd()}")

    # Print the contents of the knowledge directory
    knowledge_dir = os.path.join(os.getcwd(), "knowledge")
    if os.path.exists(knowledge_dir):
        print("Files in knowledge directory:")
        for file in os.listdir(knowledge_dir):
            print(f"  - {os.path.join(knowledge_dir, file)}")
    else:
        print(f"Knowledge directory not found at {knowledge_dir}")

    # Print the contents of the reference directory
    ref_dir = os.path.join(os.getcwd(), "reference")
    if os.path.exists(ref_dir):
        print("Files in reference directory:")
        for file in os.listdir(ref_dir):
            print(f"  - {os.path.join(ref_dir, file)}")
    else:
        print(f"Reference directory not found at {ref_dir}")

    # Try to initialize the OpenAI client with xAI configuration
    try:
        client = OpenAI(
            api_key=os.getenv("XAI_API_KEY"),
            base_url="https://api.x.ai/v1",
        )
        # Make a simple request
        completion = client.chat.completions.create(
            model="grok-3-mini",
            messages=[{"role": "user", "content": "Hello, Grok!"}],
            max_tokens=10
        )
        print("XAI Client Test Results:")
        print(f"Response: {completion.choices[0].message.content}")
        print("LLM connection successful!")
        return True
    except Exception as e:
        print(f"XAI client error: {e}")
        # Try with regular OpenAI as fallback
        try:
            client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
            completion = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": "Hello!"}],
                max_tokens=10
            )
            print("OpenAI Client Test Results:")
            print(f"Response: {completion.choices[0].message.content}")
            print("LLM connection successful with OpenAI fallback!")
            return True
        except Exception as e2:
            print(f"OpenAI client error: {e2}")
            return False

if __name__ == "__main__":
    test_llm_connection()
And when I ran it, it listed where everything was and showed that my LLM was able to connect to the mothership:
Current working directory: /Users/user_name/Documents/AI Social Media Agency
Files in knowledge directory:
- /Users/user_name/Documents/AI Social Media Agency/knowledge/content_guidelines.txt
- /Users/user_name/Documents/AI Social Media Agency/knowledge/acmes_brand_kit.txt
XAI Client Test Results:
Response:
LLM connection successful!
Now it’s time to bring in the Knowledge!
You will need an import for whatever kind of source you are bringing in, as the knowledge source class differs depending on the type of file you decide to use; in my case, I am using .txt files.
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
Then, define the sources.
content_guidelines_source = TextFileKnowledgeSource(
file_paths=["content_guidelines.txt"]
)
brand_kit_source = TextFileKnowledgeSource(
file_paths=["acmes_brand_kit.txt"]
)
Finally, I assign the TextFileKnowledgeSource at the agent or crew level. You will use the knowledge_sources parameter for both.
Here it is at the crew level, where all the agents have access to that same knowledge that they should all make use of.
# All the agents will have access to the SAME knowledge source
content_creation_crew = Crew(
agents=[
content_analyst_agent,
some_other_agent,
...
],
tasks=[
content_analysis_task,
some_other_task,
...
],
    knowledge_sources=[brand_kit_source]  # ⬅️ knowledge for the whole crew to use
)
Here’s my setup for just one of my agents:
content_creator_agent = Agent(
config=agents_config['content_creator_agent'],
# llm=llm_gpt,
    knowledge_sources=[content_guidelines_source]  # ⬅️ knowledge for just this agent to use
)
And just like that, they know how to analyze one of my blog articles or transcripts the way I want them to, so they can parse the content the “right” way for posts, while also using the brand kit I gave them and knowing what they should be analyzing for before outputting anything.
Now it’s time to test and observe, by feeding them one of my articles and seeing what they do with it.
The results…
...are so much better than I had hoped. 🙌
I can now give them updates to their Knowledge at either the agent or crew level, and if I need to modify their behavior — I can at any time!
Since I’ve given them some Knowledge, they hardly hallucinate at all, and setting the LLM temperature to zero also helps with getting consistent output every time.
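For completeness, here’s roughly what that temperature setting looks like in CrewAI. This is a sketch, not my exact production config: the model name is just an example, and agents_config and content_guidelines_source are the same objects from earlier in this article.

```python
from crewai import LLM, Agent

# temperature=0 makes sampling as deterministic as the provider allows,
# which helps keep the structured outputs reproducible from run to run
llm_gpt = LLM(model="gpt-4o-mini", temperature=0)

content_creator_agent = Agent(
    config=agents_config['content_creator_agent'],
    llm=llm_gpt,
    knowledge_sources=[content_guidelines_source],
)
```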
The crew is almost ready for battle… er, production.
But first I need to convert this social media crew from my Jupyter notebook into a “real” program, so if you want to begin watching the series, check out my YouTube video on how I set up the project.
The only way to really see if they are successful is to use what they output and track the KPIs over time, but that’s for another article.
So stay tuned.
Written by
Shani Rivers
I'm a data enthusiast with web development and graphic design experience. I know how to build a website from scratch, but I sometimes work with entrepreneurs and small businesses with their Wix or Squarespace sites, so basically I help them to establish their online presence. When I'm not doing that, I'm studying and blogging about data engineering, data science or web development. Oh, and I'm wife and a mom to a wee tot.