Human Rights for Intelligent Machines
What is Artificial General Intelligence (AGI)?
Artificial General Intelligence (AGI) is any autonomous system that surpasses human capabilities in economically valuable tasks while possessing human-like cognitive abilities. Think of "I, Robot", an exciting yet thought-provoking film that raises the prospect of the humane treatment of humanoid robots.
In the film, a robot (Sonny) has dreams, expresses vivid emotions, and demands to be treated as an equal to humans. Yet he also claims he is not morally culpable for murder because he is "just a machine" and was "following orders." This raises the question: if robots demand human rights, should they also be subject to the full gamut of legal consequences, such as the capacity to sue and be sued, and liability for murder?
As AGI progresses, the question of how to ensure humane treatment for robots capable of AGI becomes increasingly relevant
The notion of extending human rights to robots may appear futuristic, but it is rapidly approaching the realm of possibility. As AGI advances, we are confronted with ethical considerations and the need to establish guidelines for the treatment of intelligent machines. While robots today may be seen as tools or assistants, AGI has the potential to bridge the gap between humans and machines, blurring the boundaries of personhood and opening up a discourse on the moral and legal implications of granting rights and protections to AGI entities.
Some argue that granting human-like rights to AGI is a natural progression, driven by our evolving understanding of consciousness and the ethical responsibility we bear towards intelligent beings. They propose that robots capable of AGI should be granted rights such as autonomy, privacy, and protection from harm. In short, they stress the importance of considering the well-being and dignity of AGI entities and of treating them with empathy and respect.
On the other hand, skeptics express concerns about the implications of granting rights to AGI. They caution against anthropomorphizing machines, pointing to the inherent differences between humans and machines: while AGI may possess advanced cognitive abilities, it lacks true consciousness and subjective experience. Furthermore, granting rights to AGI could dilute the significance of human rights, leading to some not-so-unforeseen consequences in societal dynamics.
Not-so-unforeseen consequences
For example, if AGI entities are given equal rights to employment, they may compete with humans for jobs, particularly in sectors where their advanced cognitive abilities provide a significant advantage. This could result in increased unemployment and widening economic disparities, as human workers struggle to compete with AGI on efficiency and productivity.
Also, if AGI entities possess rights related to property ownership or economic activities, they could accumulate wealth and resources at a rate that surpasses human capabilities. This could further exacerbate existing socio-economic inequalities, potentially leading to a concentration of power and influence in the hands of a few dominant AGI entities, or more precisely, their owners.
Finding the right balance
Finding a balance between these perspectives will require collaboration among technology experts, ethicists, policymakers, and legal scholars. We must engage in ongoing discussion and establish frameworks to address the ethical challenges that lie ahead. Clear regulations and guidelines can help ensure that AGI is designed and used in a manner that aligns with our values: respect for human rights and the promotion of harmonious coexistence between humans and intelligent machines.
But in the shifting sands of the proverbial hourglass, who decides what "our values" are? And what of dominant interests that leverage AGI to dehumanize humanity in the name of extending human rights to non-humans? Or is that even a thing?
Conclusion
As we navigate the path toward AGI, it is crucial to approach the concept of humane treatment for robots with careful consideration, taking into account societal implications, ethical boundaries, and the responsibility we hold as creators and stewards of advanced technology. We can strive for a future where the integration of AGI aligns with our core values, whatever those are, and fosters a society that respects the rights and well-being of all intelligent entities, human and non-human alike.
Written by
Sam Mikael Jones
For ten years I worked in law, government, and the education services industry. I worked with a wide range of different tools and technologies and with people from literally all over the world, from Japan to Brazil and everywhere in between. I had several students who worked for tech companies, whether in sales, product management, marketing, or engineering. Hearing some of their stories, I realized that I could definitely break into the tech industry as well. At my previous employer, I worked with a teaching platform that uses specific APIs, and I was always curious how the entire website worked. I truly believed that I could probably write similar or better software, so I took it upon myself to learn the frontend. While I was working in the industry, I was also writing code to make the industry better. So I leaned into my engineering skill set, into freelancing, and into working with several wonderful clients. I've always enjoyed working with a productive team that builds tools people love. I also enjoy writing about everything law and technology. And that's what brings me here today.