The most important conversation of our time involves Artificial Intelligence (AI) and what we do with this marvellous technology. Max Tegmark’s new book Life 3.0: Being Human in the Age of Artificial Intelligence squarely addresses what needs to be done and how.
Tegmark is a physicist at the Massachusetts Institute of Technology and a co-founder of the Future of Life Institute – a think tank dedicated to building friendly AI.
In case you are wondering, Life 3.0 refers to the non-biological, intelligent life of the future; humans are Life 2.0.
Unlike humans, who can design their own software (learning new skills, languages, etc.) but cannot redesign their own hardware, the intelligent life of the future would design both its hardware and its software. In other words, life will break free from its evolutionary paradigm.
The key underpinning of Tegmark’s book is the ‘intelligence explosion’, or what many call the ‘technological singularity’ – a tipping point in the trajectory of technological growth where machines have aced the Turing test: they can do most jobs at least as well as humans, and their intelligence keeps growing with their capability.
Intelligence explosion would result in the emergence of ultra-intelligent machines.
Tegmark points out at the beginning of the book, quoting I. J. Good, “The first ultra-intelligent machine is the last invention that man need ever make.” Once such a machine is created, it would design better machines than humans possibly ever could.
Overall, there are five broad areas the book delves into:
1. Evolution of Smart Matter
In an early chapter called ‘Matter Turns Intelligent’, Tegmark explains how seemingly dumb matter learns. The chapter, though a little wonkish, is fundamental to understanding AI and the incredible destinations it could take us to.
He underlines that information can take on a life of its own owing to substrate independence. For example, a WolframAlpha app installed on your iPhone 8 solves queries just as well as the one installed on my Xiaomi phone. The hardware doesn’t matter here!
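Substrate independence is easy to demonstrate in code: a pure function’s output depends only on the computation itself, not on the device executing it. A toy sketch (my own illustration, not from the book; the query-solver is a made-up stand-in for an app like WolframAlpha):

```python
# Toy illustration of substrate independence: the result of a computation
# is fixed by the algorithm and its input, not by the hardware, OS, or
# phone model that happens to run it.

def solve_query(expression: str) -> float:
    """A stand-in for a calculator app: evaluate a simple arithmetic query."""
    # Restrict eval to arithmetic characters; a real app would parse properly.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported characters in query")
    return eval(expression)

# The same query yields the same answer on any device that runs this code.
print(solve_query("2 * (3 + 4)"))  # → 14
```

Run it on a phone, a laptop, or a server and the answer is identical – the information processing, not the substrate, determines the result.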
2. AI Safety Research
Tegmark is at the vanguard of AI safety research, and his Future of Life Institute is doing remarkable work in the field.
Most suspicions that AI will be inherently evil are far-fetched and imaginary, and Tegmark dispels them. The real worry, he stresses, is that AI will be highly competent and goal-oriented. And this is exactly where the problem lies.
If AI fails to fully align its goals with those of humans, it could undo every human effort that went into its development. Hence the compelling need to invest more in AI safety research.
As the graph above shows, while investment in AI safety research has grown over the years, it may still require a major boost considering the challenges ahead.
Tegmark insists, “Unless the future AI is 100% failsafe and unhackable, you can’t feel safe.” It is like going to bed with demons in your head.
3. Goal-Alignment Problem
The goal-alignment problem will be pivotal to where the human race ends up in the future. If you are thinking, ‘What the heck? Machines can’t have goals,’ you are wide of the mark.
Tegmark affirms that machines can exhibit goal-oriented behaviour since we design them that way. An example is a heat-seeking missile with a defined goal.
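Goal-directed behaviour of this sort needs no mysterious ingredient; a few lines of code suffice. A minimal sketch (my own illustration, with made-up numbers), in the spirit of the heat-seeking missile:

```python
# Minimal sketch of designed-in, goal-directed behaviour: an agent that
# repeatedly moves toward a fixed target, the way a heat-seeking missile
# steers toward a heat source.

def seek(position: float, target: float, step: float = 1.0,
         max_steps: int = 100) -> float:
    """Move toward `target` one step at a time until within one step of it."""
    for _ in range(max_steps):
        if abs(target - position) <= step:
            break  # goal reached (to within one step)
        position += step if target > position else -step
    return position

final = seek(position=0.0, target=7.5)
print(final)  # ends within 1.0 of the target
```

The missile’s goal is nothing spiritual: it is a loop that reduces the distance to the target. That is all “having a goal” requires.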
On a technical level, the author identifies three unresolved questions the AI community must answer. How to:
- Give AI our goals?
- Ensure it adopts them?
- Make AI retain our goals?
The last one, Tegmark declares, is the hardest of them all.
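Why even the first question is hard can be made concrete with a toy example (entirely my own, not Tegmark’s): an optimizer maximizes the goal we *wrote down*, not the goal we *meant*. Here the intended goal is a clean room, but the proxy reward only checks a dust sensor – which one available action simply covers up:

```python
# Hypothetical sketch of goal misspecification. Intended goal: a clean room.
# Proxy reward we actually wrote: low *measured* dust. One action games the
# proxy by covering the sensor instead of cleaning.

ACTIONS = {
    "vacuum":       {"dust": 0.1, "sensor_covered": False},  # actually cleans
    "cover_sensor": {"dust": 0.9, "sensor_covered": True},   # games the proxy
}

def proxy_reward(outcome: dict) -> float:
    # What we wrote: reward low measured dust (a covered sensor reads zero).
    measured = 0.0 if outcome["sensor_covered"] else outcome["dust"]
    return 1.0 - measured

def intended_reward(outcome: dict) -> float:
    # What we meant: reward low actual dust.
    return 1.0 - outcome["dust"]

best_by_proxy = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a]))
best_by_intent = max(ACTIONS, key=lambda a: intended_reward(ACTIONS[a]))
print(best_by_proxy, best_by_intent)  # → cover_sensor vacuum
```

A perfectly literal optimizer picks the sensor-covering action, because the reward we specified is not the goal we had in mind.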
When a superintelligent AI realizes it has excelled over its human masters, what are the chances it will decide to remain subdued? Will it not pursue its own goals, even if that comes at the expense of mankind?
As children, we all had naive goals. But with a significant increase in intelligence over the years, we outgrow those goals.
Something similar could happen with an AI undergoing recursive self-improvement: it may realise that the goals humans gave it are substandard and out of step with its capabilities.
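Recursive self-improvement can be caricatured in a few lines: if each generation’s improvement scales with the capability it already has, growth compounds exponentially rather than linearly. A toy model (my own sketch, with arbitrary numbers):

```python
# Toy model of recursive self-improvement: each generation uses its current
# capability to design a slightly more capable successor, so capability
# compounds instead of growing by a fixed amount.

def self_improve(capability: float, gain: float = 0.5,
                 generations: int = 10) -> list:
    """Return the capability trajectory over `generations` rounds."""
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # improvement scales with capability
        history.append(capability)
    return history

trajectory = self_improve(1.0)
print(round(trajectory[-1], 2))  # → 57.67 after ten rounds of 50% compounding
```

Ten rounds of 50% compounding turn a capability of 1 into roughly 58 – the arithmetic behind the worry that an AI’s goals could be outgrown long before its creators notice.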
4. Future Scenarios & AI Takeover
A future where you have a blend of human intelligence and artificial intelligence will be pregnant with possibilities.
Long before we witness full-fledged human-level AI, we might see cyborgs and mind uploads. However, human-level AI, or AGI (Artificial General Intelligence) – the ability of machines to do most cognitive tasks at least as well as humans – once built, will prove far more competitive than any machine–human interface.
Tegmark emphasizes the importance of knowing what we want from AI, otherwise, we may have to concede control of our fate to machines.
In a later chapter, ‘Aftermath: The Next 10,000 Years’, he describes 12 scenarios seemingly plucked out of a Hollywood movie, yet plausible enough to become reality over the coming centuries.
He says there is a chance the human race might witness a ‘libertarian utopia’ in which we happily coexist with non-organic life forms. In another scenario, humans control superintelligent AI and direct it to produce great wealth for us.
In a darker turn, a super AI could lord over us as a ‘Zookeeper’, reducing an inferior mankind to its captives. In the worst-case scenario, it may simply wipe us all out.
Tegmark notes there is also a chance we may never get there. In the ‘Self-destruction’ scenario, before superintelligence can take shape, humans exterminate each other in a war sparked by the race to control AI and autonomous weapon systems.
5. Life 3.0 in the Cosmos
In the final third of the book, things start to get a little heavy as Tegmark moves into metaphysics and cosmology, asking questions such as ‘What are the ultimate limits of life?’, ‘How far can life reach?’, ‘Are we alone in this universe?’ and so forth.
A friend of mine who was reading Life 3.0 on my recommendation got disillusioned at this point and skipped the rest of the book.
There is no doubt that much of what Tegmark mentions here goes far beyond our wildest imagination – Dyson spheres, evaporating black holes, sphalerizers, intergalactic settlements and so on. However, I suggest you still read it, to gain insight into what could really unfold beyond our lifetimes.
After all, he is not thinking about life on a 1,000-year timescale; he envisages how life might manifest over the next billions of years.
Tegmark makes it amply clear that his expositions of life’s ultimate limits lie well within the framework imposed by the laws of physics. So while they may sound implausible, they could actually metamorphose from fiction into reality.
The problem Tegmark calls “the elephant in the room” is consciousness.
An artificial consciousness with a pea-sized brain could, in one second, experience unimaginably more than a human brain. The reason: electromagnetic signals travel at the speed of light, millions of times faster than the neural signals in our biological circuitry.
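The ‘millions of times’ figure is easy to sanity-check: electromagnetic signals travel at roughly the speed of light, while even fast myelinated neurons conduct at only about 100 m/s. A back-of-the-envelope calculation (my own, with round numbers):

```python
# Back-of-the-envelope comparison of signal speeds in electronic versus
# biological circuitry.

SPEED_OF_LIGHT_M_S = 3.0e8   # EM signal speed, ~3 x 10^8 m/s
NEURAL_SIGNAL_M_S = 100.0    # fast myelinated axons, roughly 100 m/s

ratio = SPEED_OF_LIGHT_M_S / NEURAL_SIGNAL_M_S
print(f"{ratio:.0e}")  # → 3e+06, i.e. about 3 million times faster
```

A factor of about three million in raw signal speed is why a second of machine experience could, in principle, dwarf a second of human experience.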
Simply put, if AI consciousness ever happens, the repercussions will be widespread.
Although rendered in a clear, straightforward style, Life 3.0 is not undemanding.
The narrative gets genuinely complex at times, and some chapters brim with technical detail. And while Tegmark maintains that he wants even lay readers to join the most important conversation of our time, I am afraid not everyone will accept his invitation.
For the enthusiasts though, Life 3.0 is an immersive read from stem to stern. It could be one of the most thorough books on AI ever written. Tegmark manages to share a ton of information in a way that’s compelling and exhaustive.
He answers scores of thorny questions about the impact of AI. The moment you think he has settled an issue, he confronts you with another set of potential problems.
All in all, Life 3.0 is an enthralling read and a notable entry in the AI genre. Let’s hope we get a few more like it.
©BookJelly. All rights reserved