AI Safety: It's Time to Do More


In the rapidly evolving world of artificial intelligence, there's a growing disconnect between theoretical concerns and practical realities. While some researchers focus intently on speculative long-term risks, real people are experiencing tangible harms from AI systems today. It's time we shift our priorities and resources toward addressing immediate challenges rather than hypothetical scenarios that may never materialize.
The Distraction of Distant Scenarios
The AI safety community has invested significant energy discussing potential existential risks—scenarios where superintelligent AI might pursue goals misaligned with human values, potentially leading to catastrophic outcomes. These discussions often invoke colorful metaphors about paperclip maximizers or rogue systems seeking power.
While intellectual exploration has its place, these conversations remain largely theoretical. We have no empirical evidence that AI systems are developing the kind of agency or capability needed to pose such existential threats. Meanwhile, algorithmic harms are affecting real people every day.
Real Problems Needing Real Solutions
Algorithmic Bias: The Invisible Hand of Discrimination
AI systems are making consequential decisions about people's lives right now. They determine who gets loans, who receives medical attention, who gets interviewed for jobs, and who faces additional scrutiny from law enforcement. When these systems contain biases—and many do—they perpetuate and sometimes amplify existing social inequalities.
Addressing algorithmic bias requires rigorous auditing, diverse development teams, and transparent reporting on outcomes. It demands we ask: How many unfair decisions did we prevent today? Whose lives were improved because we caught a biased algorithm before deployment?
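To make "rigorous auditing" concrete, here is a minimal sketch of one common check: comparing favorable-outcome rates across demographic groups and flagging disparate impact with the four-fifths rule used in US employment auditing. The data, group labels, and threshold below are illustrative assumptions, not a reference to any particular system.

```python
# Minimal sketch of a disparate-impact audit for a binary decision system.
# The data, groups, and 80% threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favorable outcomes per group; decisions is (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the best group's rate
    (the 'four-fifths rule')."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit log: (group, loan approved?)
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit_log)
print(rates)                          # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True} -> group B flagged
```

A check like this is cheap to run on every release of a decision system, which is exactly why "how many unfair decisions did we prevent today?" is an answerable question.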
The Data Quality Crisis
Many AI models are trained on massive datasets scraped from the internet with minimal filtering or curation. These datasets often contain misinformation, offensive content, and material of questionable quality or origin.
The consequences are predictable: systems that generate harmful outputs, reinforce stereotypes, or simply produce low-quality content. Companies need to invest in proper data governance and transparency about their training methodologies.
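As a rough illustration of what "proper data governance" can look like at the pipeline level, the sketch below deduplicates records and drops entries that fail simple quality heuristics. The heuristics and blocklist are placeholder assumptions; production pipelines rely on far more sophisticated filters and trained classifiers.

```python
# Minimal sketch of a data-curation pass: deduplicate records and drop
# entries that fail simple quality heuristics. The heuristics and the
# blocklist are placeholder assumptions, not a production pipeline.

def curate(records, blocklist=frozenset({"lorem ipsum"}), min_words=5):
    seen, kept = set(), []
    for text in records:
        normalized = " ".join(text.lower().split())
        if normalized in seen:                      # exact-duplicate removal
            continue
        if len(normalized.split()) < min_words:     # too short to be useful
            continue
        if any(phrase in normalized for phrase in blocklist):  # known junk
            continue
        seen.add(normalized)
        kept.append(text)
    return kept

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "The quick brown fox jumps over the lazy dog.",  # duplicate
    "lorem ipsum dolor sit amet filler text here",   # junk
    "Too short",                                     # fails length check
]
print(curate(corpus))  # keeps only the first sentence
```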
Corporate Accountability and Market Impact
Large tech companies are deploying AI systems that disrupt labor markets, create copyright concerns, and raise questions about fair competition. These aren't future hypotheticals—they're current realities affecting workers, creators, and smaller businesses today.
Effective regulation requires engagement with these real-world impacts. How many companies have faced meaningful oversight? What protections exist for affected workers and businesses?
Creative Rights in the AI Era
Artists, writers, musicians, and other creators are watching their work being used without permission or compensation to train AI systems that then compete with them. This represents both an ethical and economic challenge that demands immediate attention.
The legal and ethical frameworks around creative content and AI need urgent development. How many creators have received fair compensation? What systems exist to detect and prevent unauthorized use of copyrighted material?
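One possible building block for such a detection system is fingerprint matching against a registry of creator-submitted works. The sketch below uses exact SHA-256 matching purely for clarity; a real system would need perceptual or fuzzy fingerprints to survive edits and re-encoding, and every name here is hypothetical.

```python
# Minimal sketch of registry-based detection of registered creative works
# inside a training corpus. Exact hashing is used for clarity only; robust
# systems need perceptual fingerprints. All names are hypothetical.
import hashlib

def fingerprint(text: str) -> str:
    """Hash of whitespace- and case-normalized text."""
    return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

def build_registry(registered_works):
    """Map fingerprint -> creator for works submitted to the registry."""
    return {fingerprint(work): creator for creator, work in registered_works}

def scan_corpus(corpus, registry):
    """Return (index, creator) for each corpus item matching a registered work."""
    return [(i, registry[fingerprint(doc)])
            for i, doc in enumerate(corpus)
            if fingerprint(doc) in registry]

registry = build_registry([("alice", "Once upon a midnight dreary...")])
corpus = ["Once upon a midnight dreary...", "An original document."]
print(scan_corpus(corpus, registry))  # [(0, 'alice')]
```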
Mental Health and Social Impacts
AI systems like chatbots are increasingly embedded in our social fabric, sometimes with insufficient safeguards against harmful interactions. These systems can amplify misinformation, encourage unhealthy behaviors, or create dependency relationships with vulnerable users.
Protecting mental health requires ongoing monitoring, clear guidelines, and accountability mechanisms. How many potentially harmful interactions are being prevented? What support systems exist for affected users?
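To give one concrete shape to "ongoing monitoring," here is a minimal sketch of a pre-response check that flags risky messages and escalates instead of answering. The patterns and escalation policy are illustrative assumptions; real safeguards combine trained classifiers with human review and proper crisis resources.

```python
# Minimal sketch of a pre-response safety check for a chat system.
# The patterns and the escalation policy are illustrative assumptions;
# real safeguards use trained classifiers plus human review.
import re

RISK_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt myself|end it all)\b", re.IGNORECASE),
    "medical":   re.compile(r"\b(overdose|stop taking my medication)\b", re.IGNORECASE),
}

def assess(message):
    """Return the risk categories a message matches (empty list if none)."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(message)]

def handle(message):
    flags = assess(message)
    if flags:
        # Escalate instead of answering: log for review, surface support resources.
        return f"[escalated: {', '.join(flags)}] Directing user to support resources."
    return "[ok] Passing message to the model."

print(handle("What's the weather like?"))   # [ok] ...
print(handle("I want to end it all"))       # [escalated: self_harm] ...
```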
From Speculation to Action
The contrast is stark: while some researchers contemplate distant scenarios, others are working directly with affected communities to address immediate harms. Both perspectives have value, but our current allocation of attention and resources is dangerously imbalanced.
Progress requires measurable impacts, not just theoretical frameworks. It requires leaving comfortable offices to engage with the communities experiencing algorithmic harm firsthand. It means developing practical solutions alongside theoretical insights.
A Call for Balanced Priorities
This isn't to say long-term safety research has no place—it absolutely does. But when theoretical concerns about future risks overwhelm attention to present harms, we've lost perspective.
The most effective approach to AI safety combines forward-thinking research with urgent action on today's problems. By addressing current harms, we not only help people right now but build the trust, institutions, and frameworks needed to address future challenges as they emerge.
The time has come to leave the ivory tower and meet people where they are—experiencing the real impacts of AI systems today. Only by balancing our perspective can we create AI systems that are not just theoretically safe in some distant future, but actually beneficial and equitable right now.
Written by Gerard Sans
I help developers succeed in Artificial Intelligence and Web3, and I'm a former AWS Amplify Developer Advocate. I'm excited about the future of the Web and JavaScript. A Computer Science Engineer and Google Developer Expert, I love sharing my knowledge by speaking, training, and writing about technology, and I enjoy running communities and meetups such as Web3 London, GraphQL London, and GraphQL San Francisco, mentoring students, and giving back to the community.