Can robots have rights?
Should robots have rights?
These deceptively simple-sounding questions about “roborights” are shaping up to be among the Big Issues debated by lawmakers, scientists, engineers, and philosophers in 2019 – and far into the future.
I read David J. Gunkel’s Robot Rights to get a handle on where academia currently stands on AI ethics in general, and robot rights in particular – the professional opinion, so to speak, as opposed to the lay opinions and sensationalist journalistic copy we are constantly bombarded with on the Internet and through the mass media.
The book did not disappoint. Although a high “reading age” is required to get a good grasp of the content, the ideas are presented in a considered, intelligent, logical fashion, making the book accessible to laypeople (like myself) while remaining intellectually rigorous enough to interest the professional AI community.
If you want to know what the leading minds in the intellectual, academic artificial intelligence community have to say about the future of robot rights, this book is the primer you are looking for.
The book is logically structured into five sections, each highlighting one of the main (opposing) schools of thought regarding robot rights – complete with summaries and critiques of the most influential papers and philosophers behind each of the core approaches.
I won’t go into detail on the actual ideas discussed (the book itself does a better job of it than I could in a few hundred words here). However, the biggest takeaway is simply this: until we clearly define both “robots” and “rights”, in a way that all stakeholders (legal, ethical, business, and consumer) can agree on and understand, we will only have more questions, not more answers, about the practicality and ethics of robot rights.