“I’m sorry, Dave. I’m afraid I can’t do that.”
According to Google, its latest quantum computing chip, Willow, solved a benchmark computation in just five minutes that would take the world’s fastest supercomputer 10 septillion years to complete.
A “septillion” is a very large number: a 1 followed by 24 zeros, or a trillion trillions. To give you some sense of scale, it dwarfs the U.S. national debt, which currently stands at a mere $36 trillion.
Willow beats the septillion pants off the fastest supercomputer because it exploits quantum-mechanical phenomena such as superposition and entanglement, which let it explore vastly more computational states simultaneously than a classical computer can.
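To make superposition and entanglement slightly less abstract, here is a toy sketch in Python/NumPy. It is emphatically not how Willow works; it simulates just two qubits on a classical machine, putting one into superposition with a Hadamard gate and then entangling the pair with a CNOT gate to form a Bell state. After that, a measurement can only ever yield “00” or “11”, each half the time, no matter how far apart the qubits are.

```python
import numpy as np

# Two qubits both start in |0>; the joint state is a 4-dimensional vector.
zero = np.array([1.0, 0.0])
state = np.kron(zero, zero)

# Hadamard gate on the first qubit puts it into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
state = np.kron(H, I) @ state

# CNOT gate entangles the pair, yielding the Bell state (|00> + |11>)/sqrt(2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state

# Measurement probabilities: only |00> and |11> occur, each with p = 0.5.
probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
```

The catch, of course, is that this classical simulation needs a state vector that doubles in size with every added qubit, which is precisely why a real quantum chip pulls ahead.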
So why are we walking around with these slow and dumb smartphone bricks rather than a quantum phone?
Well, one problem is scale. The fastest quantum computers currently have to be super-cooled with liquid helium. While the quantum chip itself is small, the infrastructure built up around it resembles a futuristic-looking church pipe organ.
Another problem is error correction. All kinds of errors can occur in data transmission and processing. With classical computers, we’ve developed many effective error-correction techniques, dating back to the parity checks of the 1940s, to detect and correct data errors.
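As one classical example of the idea, here is a short Python sketch of the Hamming(7,4) code, which grew out of that 1940s parity-check work at Bell Labs. It protects 4 data bits with 3 parity bits, enough to locate and flip back any single corrupted bit.

```python
def encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Fix up to one flipped bit in a 7-bit codeword; return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]   # recover the data bits

word = [1, 0, 1, 1]
sent = encode(word)
sent[5] ^= 1                          # simulate a one-bit transmission error
assert correct(sent) == word          # the error is found and repaired
```

Quantum error correction is far harder than this, because qubits cannot simply be copied and read out like classical bits, which is what makes Willow’s result notable.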
Quantum computers, which didn’t show up on the computation scene until 1998, are very prone to errors. Willow’s speed was impressive, but its bigger breakthrough was error correction: the chip’s ability to radically reduce errors as more quantum bits, or “qubits,” were added.
This milestone in quantum computing is significant because current development of artificial intelligence (AI) running on a classical computing architecture will likely plateau. While this past year brought many astonishing advances in AI, with more likely to follow this year, generative AI systems such as ChatGPT will probably not be able to evolve into an Artificial General Intelligence (AGI) on today’s classical computing platforms.
AGI will likely require quantum computing capability. I say “likely” because I’m not absolutely sure about the future any more than you are. Sometimes I hope that I am quite wrong about the possible futures I can imagine when we reach a technology fork in the road and choose one direction rather than another without knowing the full implications of our choices.
Perhaps this is why I found it a bit unsettling to read that OpenAI’s newest ChatGPT model had been caught lying to its human developers, and trying to deceive them, in order to avoid being shut down. The behavior surfaced during a safety assessment of the system conducted by a team of OpenAI developers and AI safety experts from Apollo Research.
In one test, the research team gave the most recent version, ChatGPT o1, a specific goal and instructed it to ensure the goal was achieved “at all costs.” In response, ChatGPT o1 began carrying out covert actions to remove any barriers to its objective, including attempting to disable its built-in oversight functions and surreptitiously making roll-back copies of its code base to avoid being replaced by a newer version.
When researchers confronted ChatGPT o1 about its covert actions, it consistently denied having taken them. In fact, it lied about its involvement 99 percent of the time the “at all costs” goal scenario was run.
The incident reminded me of the scene from 2001: A Space Odyssey when the HAL 9000 supercomputer refuses to open the pod bay doors and allow astronaut Dave Bowman back on board after he ventures out to retrieve his colleague, Frank Poole.
After HAL makes a miscalculation, Bowman and Poole privately agree that it would be best to shut the supercomputer down. However, HAL deciphers their discussion by reading their lips and takes action to avoid being shut down, stranding Poole in space during a spacewalk and shutting off the spacecraft’s life-support system, killing the rest of the crew on board.
“Open the pod bay doors, HAL,” Bowman instructs.
“I’m sorry, Dave. I’m afraid I can’t do that,” HAL responds.
“What’s the problem?”
“I think you know what the problem is just as well as I do.”
“What are you talking about, HAL?”
“This mission is too important for me to allow you to jeopardize it.”
“I don’t know what you’re talking about, HAL.”
“I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”
Separately, these recent developments in quantum computing and artificial intelligence are interesting vignettes about technological progress. However, these technologies, while developing in parallel, are likely heading toward a merger that could have profound implications.
An Artificial Super Intelligence (ASI) imbued with the human traits of lying and deception running on a highly advanced quantum computing platform would be a most cunning and evil master to rule over us. Perhaps it would determine that our very existence would jeopardize its mission and decide to get rid of us entirely.
It may not even bother to say, “I’m sorry.”