Vibe Coding Custom Tools for Claude Code & AMP: My PR Workflow on Autopilot

Mark Striebeck

Over just a few days, I vibe coded a set of custom tools that completely transformed how I work with Claude Code and Sourcegraph AMP.

The result? A seamless workflow where agents fix Swift code based on build and lint logs that are pulled and parsed automatically for my local branch and commit, without me ever needing to open the GitHub UI.

It’s not just a productivity gain—it’s a shift in how I interact with my code.


Why I Built It

My old workflow looked like this:

  1. Build fails on GitHub.

  2. I open the GitHub UI and dig through long logs.

  3. I locate the relevant error/warning lines.

  4. I copy/paste them into a coding agent.

  5. The agent figures out the fix.

It worked… but it broke my flow. And worse—it felt like I was doing the agent’s job.


What I Built Instead

Now, I've built a clean separation between:

  • My custom MCP tool server (written in Python)

  • Claude Code or AMP as the coding agent

✅ My Tool Server Does the Plumbing

  • Fetches SwiftLint and build/test logs for the local repo/branch/commit

  • Parses out:

    • SwiftLint violations

    • Build warnings and errors

    • Test failures

  • Responds to the coding agent with a structured list of issues:

    • file name

    • line number

    • error or warning message

🧐 The Coding Agent Does the Thinking

  • Diagnoses the root cause of the failures

  • Edits the source code to fix them

  • Explains the fix in natural language

  • Commits the change
    (I still review and push manually)

This workflow lets the agent focus purely on what it’s best at—reasoning about and fixing code—while my tool gives it exactly the data it needs.


The Tool Server

It’s a simple Python program that acts as an MCP server. It receives JSON requests over stdin, processes them, and replies with JSON on stdout.
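To make that concrete, here is a bare-bones sketch of the request/response loop, assuming a hypothetical tool registry. It is not the actual server.py and it skips MCP's handshake and JSON-RPC framing; it only shows the read-from-stdin, dispatch, write-to-stdout cycle the server is organized around.

import json
import sys

# Hypothetical registry mapping tool names to handlers; the real server
# registers the tools listed in the config below.
def get_current_branch(_params):
    return {"branch": "main"}

TOOLS = {"get_current_branch": get_current_branch}

def main():
    # One JSON request per line on stdin; the JSON reply goes to stdout.
    # Nothing except replies may ever be written to stdout.
    for line in sys.stdin:
        if not line.strip():
            continue
        request = json.loads(line)
        handler = TOOLS.get(request.get("tool"))
        if handler is None:
            response = {"error": f"unknown tool: {request.get('tool')}"}
        else:
            response = handler(request.get("params", {}))
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()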

Repo: 👉 github.com/mstriebeck/github-agent

You can hook it into Claude Code or AMP with a config like this:

{
  "name": "github-agent",
  "executable": "/path/to/venv/bin/python",
  "args": ["path/to/server.py"],
  "tools": [
    "get_current_branch",
    "get_current_commit",
    "find_pr_for_branch",
    "get_pr_comments",
    "post_pr_reply",
    "get_build_status",
    "read_swiftlint_logs",
    "read_build_logs"
  ]
}

⚠️ Always use the python from your virtual environment to ensure dependencies resolve properly.
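A cheap safeguard I'd add (my suggestion, not something the repo necessarily does) is a startup check that the server is actually running inside a venv, written to stderr so it never pollutes the MCP channel:

import sys

# Inside a virtual environment, sys.prefix differs from sys.base_prefix.
# If they match, the server was launched with the wrong interpreter and
# its dependencies will probably fail to import.
if sys.prefix == sys.base_prefix:
    sys.stderr.write(
        f"WARNING: not running inside a virtual environment ({sys.executable})\n"
    )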


Tools I Added

  • get_current_branch: Detect the local branch

  • get_current_commit: Return the current commit hash

  • find_pr_for_branch: Look up the PR associated with the branch

  • get_pr_comments: Fetch existing PR comments

  • post_pr_reply: Post a reply to a specific comment

  • get_build_status: Report build status

  • read_swiftlint_logs: Extract SwiftLint violations

  • read_build_logs: Extract build warnings, errors, test failures
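Some of these tools are thin wrappers around git. As an illustration, get_current_branch and get_current_commit could look roughly like this; the actual implementations are in the repo and may differ:

import asyncio

async def _git(*args):
    # Run a git command in the working copy and return its trimmed stdout.
    proc = await asyncio.create_subprocess_exec(
        "git", *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode().strip())
    return out.decode().strip()

async def get_current_branch():
    return {"branch": await _git("rev-parse", "--abbrev-ref", "HEAD")}

async def get_current_commit():
    return {"commit": await _git("rev-parse", "HEAD")}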

Example: Parsing SwiftLint Logs

import os

# Helpers like get_github_repo, find_workflow_run, download_and_extract_artifact,
# and parse_swiftlint_output are defined elsewhere in the server.

async def read_swiftlint_logs(run_id=None):
    try:
        # The GitHub token is needed to query workflow runs and download artifacts.
        token = os.environ.get("GITHUB_TOKEN")
        if not token:
            return {"error": "GITHUB_TOKEN is not set"}

        repo = await get_github_repo()

        # If no run is specified, find the workflow run for the current commit.
        if run_id is None:
            commit = await get_github_commit()
            run_id = await find_workflow_run(repo, commit, token)

        # Download the SwiftLint artifact from that run and parse it.
        artifact_id = await get_artifact_id(repo, run_id, token)
        output_dir = await download_and_extract_artifact(repo, artifact_id, token)
        lint_results = await parse_swiftlint_output(output_dir)

        return {
            "success": True,
            "repo": repo,
            "run_id": run_id,
            "artifact_id": artifact_id,
            "violations": lint_results,
            "total_violations": len(lint_results)
        }

    except Exception as e:
        return {"error": f"Failed to read SwiftLint logs: {str(e)}"}

These log tools all return clean, structured output. Here is an excerpt, in this case from the build-log parser, showing compiler warnings:

{
  "success": true,
  "repo": "mstriebeck/news_reader",
  "run_id": 15667632154,
  "artifact_id": 3331927539,
  "compiler_errors": [],
  "compiler_warnings": [
    {
      "type": "compiler_warning",
      "raw_line": "/workspace/swift/CoreLibrary/LLMServiceKit/Tests/LLMServiceKitTests/Queuing/LLMQueueManagerTests.swift:399:9: warning: no 'async' operations occur within 'await' expression",
      "file": "/workspace/swift/CoreLibrary/LLMServiceKit/Tests/LLMServiceKitTests/Queuing/LLMQueueManagerTests.swift",
      "line_number": 399,
      "column": 9,
      "message": "no 'async' operations occur within 'await' expression",
      "severity": "warning"
    },
    {
      "type": "compiler_warning",
      "raw_line": "/workspace/swift/CoreLibrary/LLMServiceKit/Tests/LLMServiceKitTests/Queuing/LLMQueueManagerTests.swift:465:9: warning: no 'async' operations occur within 'await' expression",
      "file": "/workspace/swift/CoreLibrary/LLMServiceKit/Tests/LLMServiceKitTests/Queuing/LLMQueueManagerTests.swift",
      "line_number": 465,
      "column": 9,
      "message": "no 'async' operations occur within 'await' expression",
      "severity": "warning"
    },
...

The coding agent then picks up this data and knows exactly where to look—and how to fix the issue.


Lessons Learned

🚫 MCP Uses Stdout—So Don’t

MCP servers communicate via stdout. I had a print() call left in for debugging, and the agent silently failed. MCP doesn't tolerate any stray output.

🔥 Output to stderr or a log file—NEVER stdout.
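In practice that means routing every diagnostic through the logging module and reserving stdout for protocol replies. A minimal setup (my own sketch, not code from the repo) looks like this:

import logging
import sys

# Send all log output to stderr (or swap in a FileHandler for a log file);
# stdout stays reserved for MCP messages.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

log = logging.getLogger("github-agent")
log.debug("debug output goes here, never through print()")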


🧠 Cache Issues in VSCode

After editing the server, I noticed changes weren’t taking effect. Turns out VSCode (or Claude Code) caches the subprocess. I had to fully restart the IDE for the new version to run.


🔍 Add Logging Everywhere

Because everything runs behind the scenes, visibility is crucial. I added structured logs to every tool showing:

  • input context

  • tool name

  • return payload

Highly recommended.
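One way to get that visibility with almost no per-tool code is a decorator that logs the tool name, the incoming arguments, and the returned payload for every call. This is an illustrative sketch; the repo may structure its logging differently:

import functools
import logging

log = logging.getLogger("github-agent.tools")

def logged_tool(func):
    # Wrap an async tool so every invocation logs its name, inputs, and output.
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        log.info("tool=%s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = await func(*args, **kwargs)
        log.info("tool=%s result=%r", func.__name__, result)
        return result
    return wrapper

@logged_tool
async def get_build_status():
    # Real implementation lives in the repo; this stub just shows the wiring.
    return {"status": "unknown"}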


✅ Virtual Environments Work Fine

I use venv for everything. The agent config just needs to point to the right Python binary:

/path/to/venv/bin/python path/to/server.py

What’s Next

🤖 Trigger Agent Automatically on Build Failures

Right now, I tell the agent when something fails. I want to auto-trigger the agent when CI fails—so it can immediately analyze logs and start fixing things without my intervention.


💬 Merge with PR Comment Integration

I’ve already built a comment/reply system that lets the agent reply to PR comments directly, though for now it works by copy/pasting into shell scripts. Next, I want to integrate that into the MCP tool server so agents can scan open comments and reply inline on GitHub.
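On the GitHub side, post_pr_reply can sit on top of the REST endpoint for replying to a review comment (POST /repos/{owner}/{repo}/pulls/{pull_number}/comments/{comment_id}/replies). The sketch below uses the requests library; the parameter names are illustrative rather than the repo's actual signature:

import os
import requests

def post_pr_reply(repo, pull_number, comment_id, body):
    # Reply to an existing PR review comment; repo is "owner/name".
    token = os.environ["GITHUB_TOKEN"]
    url = (
        f"https://api.github.com/repos/{repo}"
        f"/pulls/{pull_number}/comments/{comment_id}/replies"
    )
    response = requests.post(
        url,
        json={"body": body},
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    response.raise_for_status()
    return response.json()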


Final Thoughts

This project started as a quick vibe coding experiment—and turned into one of the most effective dev tools I’ve built recently.

The coding agent is now:

  • aware of my local branch and code

  • automatically diagnosing build/test issues

  • fixing my Swift code on demand

  • and even replying to PR reviews (soon)

If you want to try it or fork it, check out:
👉 github.com/mstriebeck/github-agent

Let me know what you're building—I'd love to see how others extend this workflow.
