What is life? What is intelligence? What is consciousness? Are these possible only in naturally evolved, biological creatures like us, or might a substrate constructed of metal, plastic, and silicon allow them? And if such a substrate could, and an artificial intelligence did come into being, would its intellect grow so far beyond ours that our minds would seem like those of bugs in comparison? If so, what are the consequences?
These are the kinds of questions Max Tegmark, a professor of physics at MIT, addresses in Life 3.0. He doesn't definitively answer any of them, of course. No one knows whether, let alone how, an Artificial General Intelligence (AGI) could be made. And certainly no one can predict with accuracy what changes such an AGI would bring to us lowly humans and to our civilization. It could mean the beginning of a grand new era for humanity, or it could mean its abrupt end. Tegmark explores several of these alternatives in the book.
Tegmark argues that because the impact of AGI could be so profound, it is vitally important to ensure that it benefits our species rather than harms it. He, along with a group of AI developers, scientists, and philosophers, proposes twenty-three principles to guide AI development, which can be found on the website of the Future of Life Institute (https://futureoflife.org/ai-principles/). Since these were established by consensus, they are fairly generic, pie-in-the-sky statements, but they seem to be a good start.
Personally, I'm not overly concerned about a robot apocalypse. I have little doubt that AI will continue to improve, and that it will affect us in significant ways—materially, culturally, and even psychologically. I also see no reason why a super-intelligent AGI could not exist…some day. But I see no reason for such a thing to be malevolent or even harmful. By definition, an AGI isn't human, so, unlike us, it should be a rational agent, immune to the kinds of psychological flaws that have been at the root of much of human misery. AGI may increase the rate of societal change and stress our capacity to adjust, but we're an adaptable species. I think we can cope.

No doubt there will be those who attempt to abuse AGI for personal gain or to pursue a favored ideology. That is a concern with any new technology. It's a human failing, not an inherent problem with AGI, and for every despot with an enslaved AGI, there will be those of nobler temperament with AGIs of their own to counter him. It seems to me that the very existence of AGI would help ensure that conflicts are foreseen and mitigated before they become an existential threat, simply because there are more people who want to avoid an apocalypse than there are who wish to bring one about. At least, I hope this is true.
I found this an interesting read. I recommend it.