Building a CLI to Automate Etsy Listings with AI and Fetch Hacks

Luiz Tanure


My girlfriend is a tattoo artist who also paints and creates all kinds of art objects. She had around 140 product photos and needed help getting her products onto Etsy, but there were too many products, and the listing forms are complex.

So I decided to help. This was the perfect opportunity to leverage AI tools - Claude Code as my pair programmer, OpenAI's vision models for product analysis, and MCP tools for context management.

What started as "maybe I can automate this" turned into a full AI-powered CLI that orchestrates multiple AI services: Claude Code helped me architect the solution, GPT-4 Vision analyzed product images, and custom MCP tools ensured consistent Etsy-compliant data generation.

Two days of AI-assisted development. Here's how I solved it.

The plan (that didn't work)

My plan was to use the Etsy API. I applied for an API key, but they are slow to answer.

So I first generated some JSON with the product data, but the content was not good: it varied too much between products.

I tried a few approaches to group images of the same product, but I wasn't happy with either the coding solutions or the AI solutions, so I abandoned that for this phase.

AI-powered structured data generation

This is where AI really shined. Instead of fighting inconsistent text generation, I turned to the AI SDK with structured output - letting the AI do what it does best while constraining it to exactly what I needed.

I changed my approach and used the Node AI SDK plus Zod schemas to enforce a predetermined JSON format. This was classic problem-solving: the AI was giving me the right content in the wrong format, so I gave it structure.

import { z } from "zod";
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";

const ProductSchema = z.object({
  title: z.string().max(140),
  description: z.string().max(1000),
  tags: z.array(z.string()).max(13),
  category: z.string(),
  materials: z.array(z.string()),
  style: z.string(),
  price: z.number(),
});

const result = await generateObject({
  model: openai("gpt-4"),
  schema: ProductSchema,
  prompt: `Generate Etsy product data for this image: ${imageData}`,
});
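Even with the schema in place, I still clamp values defensively before they go anywhere near a live listing. This is a hypothetical helper, not part of the AI SDK; the limits it enforces (140-character titles, at most 13 tags of up to 20 characters each) are Etsy's real listing constraints, mirrored from the schema above:

```javascript
// Hypothetical post-validation: clamp AI output to Etsy's listing limits
// before it is used, in case anything slips past the schema.
function clampProduct(product) {
  return {
    ...product,
    // Etsy titles are capped at 140 characters
    title: product.title.slice(0, 140),
    // Etsy allows at most 13 tags, each up to 20 characters
    tags: product.tags.slice(0, 13).map((t) => t.slice(0, 20)),
    // Guard against NaN or negative prices from the model
    price: Number.isFinite(product.price) && product.price > 0 ? product.price : 0,
  };
}
```

Cheap insurance: it never changes good output, and it quietly fixes bad output instead of crashing the batch.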

But I still had a problem with the Etsy-specific values. I tried sending them all to the OpenAI API directly, but the model kept losing context, so I used MCP tools to provide the possible values to the API on demand.

This is where my problem-solving approach with AI really paid off. Instead of cramming everything into prompts and hoping the AI would remember Etsy's specific categories and materials, I built MCP tools that could provide that context on-demand. The AI could now ask for valid options when it needed them.

// MCP tool to get valid Etsy categories
const getCategoriesFunction = {
  name: "getValidCategories",
  description: "Get valid Etsy category options",
  parameters: { type: "object", properties: {} },
  execute: () => etsy_categories,
};

// Now the AI could ask for valid options
const prompt = `
Generate product data for this image.
Use the getValidCategories tool to ensure correct category selection.
`;
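Even when the model fetches the valid options through the tool, I don't fully trust its choice. A sketch of the local check I mean (the function name `pickValidCategory` and the fallback value are my illustration, not the actual implementation):

```javascript
// Illustrative sketch: verify the AI-chosen category against the list the
// MCP tool provided, falling back to a safe default on a miss.
function pickValidCategory(aiChoice, validCategories, fallback = "Art & Collectibles") {
  const match = validCategories.find(
    (c) => c.toLowerCase() === aiChoice.trim().toLowerCase()
  );
  return match ?? fallback;
}
```

The same pattern works for materials and styles: the tool is the source of truth, and anything the model returns gets normalized back to it.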

After several iterations of refining the tone, deciding what to describe, and checking best practices for titles, tags, and descriptions, I got a good version of the text. With the MCP tools supplying values like style and materials, I could generate a solid JSON with all the data.

This iterative approach with AI is crucial - you don't get perfect results on the first try. Each iteration taught me something new about how to prompt the AI, how to structure the data, and what constraints were actually needed. The AI became more useful the more I understood how to work with it.

Puppeteer was a trap

My plan was to use Puppeteer to fill in the data, but after a few tries I discovered that Etsy is really good at blocking Puppeteer automation. And making Puppeteer fill the form the right way was too much work: adjusting the sequence, IDs, selectors...

I considered a Claude-controlled browser to fill in the data, but decided on a simpler approach.

This is where good problem-solving beats brute force automation. Instead of fighting Etsy's anti-bot measures, I stepped back and thought: what's actually happening when someone creates a product? I created a fake draft product, inspected the network to see the request, and found that it was all created in a single request. Gotcha!

Sometimes the best solution is the simplest one - and AI helped me generate clean, working fetch scripts instead of battling complex browser automation.

// Generated fetch script (pasted into the DevTools console while logged in to Etsy)
fetch("/api/v3/application/shops/12345/listings", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Requested-With": "XMLHttpRequest",
  },
  body: JSON.stringify({
    title: "Handmade Abstract Canvas Art",
    description: "...",
    tags: ["abstract", "canvas", "art"],
    // ... all the AI-generated data
  })
})
  .then((r) => r.json())
  .then((d) => console.log("Created:", d.listing_id));

I decided to just generate fetch scripts that I could copy-paste into the DevTools console to create products, with no need to worry about login, CORS, and so on.

It worked like a charm from the first version (just some minor adjustments to the script to avoid some nulls and malformed strings).
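Those "minor adjustments" boil down to generating the script from the data rather than by string concatenation. A sketch of what I mean, with a hypothetical `buildListingScript` helper: `JSON.stringify` handles quote and newline escaping, and null fields are dropped before they reach the endpoint:

```javascript
// Hypothetical generator for the copy-paste script. Dropping null/undefined
// fields and serializing with JSON.stringify avoids the malformed-string
// and null problems mentioned above.
function buildListingScript(shopId, product) {
  const clean = Object.fromEntries(
    Object.entries(product).filter(([, v]) => v !== null && v !== undefined)
  );
  const body = JSON.stringify(clean, null, 2);
  return `fetch("/api/v3/application/shops/${shopId}/listings", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Requested-With": "XMLHttpRequest",
  },
  body: JSON.stringify(${body}),
})
  .then((r) => r.json())
  .then((d) => console.log("Created:", d.listing_id));`;
}
```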

Adding Photoroom integration

One thing I decided to add was some image generation too.

Two weeks ago I had created a Photoroom CLI tool to remove backgrounds from images, and I decided to integrate it into the flow: generating live-shoot images, images without backgrounds, and a clear, standard view for all photos across products.

For each image:

  1. Resize to low-quality for AI analysis (save tokens)

  2. Resize to 2000x2000 for Photoroom processing

  3. Generate product data with AI

  4. Create image variants with Photoroom

  5. Save everything to organized folders
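The per-image steps above can be sketched as one async function. Every helper name here is hypothetical; injecting the steps as functions also made the flow easy to test without hitting the real APIs:

```javascript
// Sketch of the per-image pipeline. The step implementations (resize,
// AI analysis, Photoroom, saving) are passed in, so this is just the flow.
async function processImage(imagePath, steps) {
  const small = await steps.resizeForAnalysis(imagePath);  // 1. low-res copy for the AI
  const large = await steps.resizeForPhotoroom(imagePath); // 2. 2000x2000 for Photoroom
  const data = await steps.generateProductData(small);     // 3. structured product data
  const variants = await steps.createImageVariants(large); // 4. Photoroom image variants
  return steps.saveProduct({ data, variants });            // 5. write the output folder
}
```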

The final AI-orchestrated pipeline

This is where all the AI pieces came together beautifully. What started as a manual problem became a fully automated AI pipeline:

Drop a bunch of images in a folder and run a terminal script.

My AI-powered app reads all the images, and for each one:

  • Resizes the image to low quality before sending it to OpenAI, to save tokens

  • Resizes it to 2000x2000 px for Photoroom, which generates one snapshot, one live photo, and one image without a background

  • Sends the smallest image to OpenAI and generates the product data, validated against the Zod schema

  • Sends the image to Photoroom and generates the final versions

  • Saves everything (images, the insert script, the JSON data, and a text version) to an output folder, one folder per product, named after the SKU. The SKU is generated from the product data using an MCP tool. All with logs.
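The real SKU generation lives behind an MCP tool, but the idea is roughly this (a hypothetical sketch, with `makeSku` and its format being my illustration): a category prefix plus a running counter, matching folder names like `SKU-ART-001`:

```javascript
// Hypothetical SKU generator: short category prefix + zero-padded counter.
function makeSku(product, counter) {
  const prefix = product.category
    .replace(/[^A-Za-z]/g, "") // letters only
    .slice(0, 3)
    .toUpperCase();
  const num = String(counter).padStart(3, "0");
  return `SKU-${prefix}-${num}`;
}
```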

node generate-products.js
# Outputs:
# ./output/SKU-ART-001/
#   ├── images/
#   ├── product-data.json
#   ├── etsy-script.js
#   ├── description.txt
#   └── logs.txt

After generating everything, around 80 products in total, I had nice, clean, consistent product descriptions. Happy dev, happy girlfriend. We're just reviewing the products now; they'll probably be published tomorrow.

Ahh... one last idea I added.

I decided to use the generated images to create some image-to-video clips, but I wasn't happy with any of the APIs. So I added one extra piece of info to the generated text file: a small video prompt matching the product and its defined style. After some tries I got really good videos, but I couldn't generate them for all the products because my Veo 3 tokens expired.

What worked

  • 2 days replaced what would have been a month+ of manual work

  • Around 80 products from 140 images

  • Consistent descriptions and tone

  • Professional image variants for each product

  • Copy-paste automation that actually works

  • Happy girlfriend (most important metric)

What I'd do differently next time

  • Some pre-classification of the images: sometimes the AI got the size or materials wrong

  • Store the data in a database instead of files, though files were fine for an exploration project

What I learned about AI-assisted problem solving

  • AI works best with constraints: Free-form AI output is inconsistent. Structured output with Zod schemas gives you the power of AI with the reliability of typed data.

  • MCP tools are game-changers: Instead of bloating prompts with context, give AI tools to fetch what it needs. This scales much better and keeps prompts focused.

  • Claude Code as a pair programmer: Having an AI that understands your codebase and can suggest architectural decisions speeds up development significantly.

  • Iterative AI refinement: The first AI output is never perfect. Plan for multiple iterations to tune prompts, constraints, and data structures.

  • Simple solutions beat complex automation: Sometimes the best approach is to step back, understand the problem differently, and let AI generate the simple solution.

  • Real problems make the best AI projects: When you're solving actual pain points, you iterate faster and build more useful tools.

Tech stack

Node.js, AI SDK, Zod, Photoroom API, ImageMagick, Claude Code, MCP tools, bash.


I was planning to go even further and make a single script that fetches the data for each product, so I could run it once and fill in one listing at a time automagically, but that sounded like too much. The copy-paste was not a problem for a one-time (maybe) job, and I can always evolve it later.

This project reinforced something important about working with AI: the goal isn't to build the most technically impressive solution. It's to solve the problem efficiently. AI helped me build exactly what was needed, nothing more.

Sometimes, the simple AI-generated solution wins.

Products go live tomorrow.


Originally published at letanure.dev

