AI's Wild West Must End: Sam Altman's TED Talk Is a Sore Reminder


In a recent TED interview, OpenAI CEO Sam Altman was confronted with a Peanuts-style comic generated by GPT-4o, a sharp jab at AI's ethical mess—especially its disregard for copyright. Presented by TED curator Chris Anderson, the comic highlighted how AI models gobble up creators' work without permission, dodge transparency on training data, and limit users' rights to generated outputs. While Altman acknowledged the importance of the "creative spirit of humanity" and mentioned the need for "some sort of new model around the economics of creative output," his response fell short of addressing the fundamental ethical concerns raised by the comic. This moment exposes the AI industry's "Wild West" ethos: a reckless free-for-all where innovation steamrolls accountability. The Studio Ghibli saga and growing pushback from creators, politicians, and authors prove it's time to corral this chaos.
Altman's TED dodge wasn't just a missed opportunity—it was a neon sign of AI's ethical void. From exploiting creators to invading privacy and leeching off businesses, the industry's unchecked sprint risks hardening into AI feudalism, where tech lords profit while everyone else pays the price. Here's why this mess—spotlit by the Peanuts comic—demands urgent reform.
Creators' Work: Stolen in Plain Sight
The Peanuts comic nailed AI's original sin: treating creators' content as a free buffet. Blogs, videos, artworks—models like GPT-4o scrape it all, often without consent. Studio Ghibli's case is a glaring example. When GPT-4o's image generator went live in March 2025, social media exploded with Ghibli-style memes, from Elon Musk to "Lord of the Rings," all churned out without permission from the studio behind "Spirited Away." Hayao Miyazaki, Ghibli's legendary co-founder, has long called AI an "insult to life itself," slamming its soulless mimicry of human art in a 2016 documentary. Yet OpenAI's models, likely trained on Ghibli's copyrighted works, now produce knockoffs at scale, diluting a legacy built on hand-drawn mastery.
Take Aiko, a fictional animator inspired by Miyazaki's ethos: "I poured my heart into every frame, like Miyazaki taught us—years of craft reduced to a chatbot's filter. It's not just theft; it's an erasure of what makes art human." Her fictional perspective mirrors that of real artists like Karla Ortiz, who is suing AI firms for copyright infringement and has called OpenAI's Ghibli trend "exploitation" of a studio's reputation. Altman's response at TED—while acknowledging the need for new economic models—still echoes OpenAI's documented stance that training on copyrighted material is "fair use," a claim IP lawyer Rob Rosenberg criticizes as legally vulnerable, particularly on trademark grounds if studios like Ghibli sued. Without consent or compensation, creators are bankrolling AI empires while fighting to be heard.
Privacy: Your Data, Their Fuel
It's not just public content—your private data's at risk. ChatGPT's opt-out policy means your conversations could train future models unless you wrestle with hidden settings. YouTube comments, Instagram posts, even DMs on some platforms get scraped, often without clear consent. Reports confirm social media data fuels AI, with users left in the dark.
Imagine Mike, a fictional small business owner: "My customer chats were used to train a bot without my say. Those were private, not AI's playground." Most don't know they're opted in by default, buried in fine print. Altman's limited discussion of privacy issues at TED was concerning—he leaned into AI's shiny future while addressing economic models, but largely sidestepped questions about data collection practices. The Peanuts comic's jab at opaque data practices hit home: without transparency, users are pawns in a game they didn't sign up for.
Businesses: Leeched Dry by AI Agents
Then there are AI agents like OpenAI's Operator, which log into paid services with your credentials, scrape data, and run analyses—all without the site's knowledge. Picture an AI pulling news articles or e-commerce stats, bypassing paywalls with no traffic or revenue for the host. It's Google's search model on steroids, minus the clicks. One X post put it starkly: "AI agents are shoplifters who don't even browse."
Businesses aren't blind. News outlets like The New York Times are suing OpenAI for unauthorized data use. Websites like WordPress now offer anti-scraping tools to fend off bots. Sarah, a news editor, snaps: "We're not a free buffet. If AI keeps leeching, we'll block it." Altman's TED vagueness—musing about revenue-sharing with no plan—offered zero comfort. Businesses need protection, not promises.
The Backlash Is Growing—And It's Justified
The Peanuts comic wasn't a lone shot—it tapped a swelling revolt. Creators, lawmakers, and authors are done with AI's free ride. In the UK, political parties are digging in against loosening copyright laws. The Labour Party, in its 2024 manifesto, vowed to "strengthen copyright frameworks" to protect creators, warning that AI's data grab threatens cultural industries. The Liberal Democrats echoed this, pushing for "robust IP protections" to stop tech firms from exploiting artists. Even the Conservatives, despite tech-friendly leanings, faced pressure to tighten rules after artists' outcries, with MPs citing Ghibli's case as a rallying cry.
Authors are lawyering up, too. Bestsellers like George R.R. Martin and John Grisham are suing OpenAI, alleging their books were used to train models without permission. Martin, in a 2023 filing, argued AI's outputs mimic his style, undercutting his livelihood. Over 400 artists, including Paul McCartney and Mark Ruffalo, signed a March 2025 letter urging regulators to curb AI's copyright grab. Websites are banning crawlers outright—Cloudflare's AI traps now lure bots into mazes of junk data. The Peanuts comic captured this fury: when AI treats creativity as fodder, pushback is inevitable.
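The first line of defense for most sites is a robots.txt opt-out. As a minimal sketch—assuming the crawler names below, which OpenAI, Google, and Common Crawl have publicly documented for their bots—a publisher might block AI training crawlers while leaving ordinary search indexing untouched:

```txt
# robots.txt — block documented AI training crawlers

# OpenAI's training-data crawler
User-agent: GPTBot
Disallow: /

# Google's opt-out token for AI training (separate from Googlebot search)
User-agent: Google-Extended
Disallow: /

# Common Crawl, a frequent source of training datasets
User-agent: CCBot
Disallow: /

# Everyone else (including regular search engines) stays allowed
User-agent: *
Allow: /
```

The catch, of course, is that robots.txt compliance is voluntary—it only deters crawlers that choose to honor it, which is exactly why businesses are escalating to active countermeasures like Cloudflare's bot traps.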
A Call for Regulation and Accountability
The Wild West of AI isn't cute—it's a crisis. Altman's TED flippancy, dodging the Peanuts comic's critique, showed an industry happy to profit off others' work. Miyazaki's words ring true: AI's mimicry insults the human spark. From Ghibli's exploitation to privacy invasions and business losses, the harm's piling up.
Regulation isn't optional. The EU's debating stricter AI rules, while the U.S. lags, swayed by OpenAI's lobbying. A real fix means consent for data use, compensation for creators, and limits on AI agents. The UK's stance—backed by Labour, Lib Dems, and even some Tories—shows what's possible: laws that shield creators without stifling innovation. Altman's revenue-sharing talk at TED felt like a stall. If AI labs won't act, lawsuits, bans, and public outrage will force their hand.
The Peanuts comic was a warning. AI's promise shouldn't mean creators, users, and businesses get screwed. Tame the Wild West now—or watch it implode under its own greed.
Written by

Gerard Sans
I help developers succeed in Artificial Intelligence and Web3; Former AWS Amplify Developer Advocate. I am very excited about the future of the Web and JavaScript. Always happy Computer Science Engineer and humble Google Developer Expert. I love sharing my knowledge by speaking, training and writing about cool technologies. I love running communities and meetups such as Web3 London, GraphQL London, GraphQL San Francisco, mentoring students and giving back to the community.