Science fiction writer Vernor Vinge died late last month, on March 24, 2024. He was 79. His passing, though only lightly noted, was profound: it left unanswered a prominent and historic question about the future of mankind, a question Vinge had raised through his many novels and academic essays.
Vinge was among the first to write, and to warn, that “thinking computers” would one day take over our world. He called it the “technological singularity”: the moment when artificial intelligence would overpower human thinking and threaten mankind’s extinction.
Vinge predicted in 1993 that the “singularity” would happen probably sooner than expected, but no later than 2030. He died without an answer.
He did not live to see the post-human future he kept predicting in his many Hugo Award-winning novels, which included “A Fire Upon the Deep” (1992), “A Deepness in the Sky” (1999) and “Rainbows End” (2006).
His death leaves the rest of us alone to deduce our final outcome.
Vinge’s passing comes at a time when world governments, eminent computer scientists, social media billionaires and beleaguered civil libertarians are arguing about the runaway development of artificial intelligence (AI). Their debates leap from the heights of technological advancement in medicine, computer science and warfare to the depths of the inadvertent ruin of our society and all of mankind.
None of that would be new to Vinge, except that his idiom was fiction, and the singularity moment we are now living through is not fictional; it is real.
A singularity, in physics and astronomy, is a place where the laws of physics as we know them break down, like a black hole in space. A new reality presents itself, and we are at a loss for words to define it.
What Vinge and others predicted was a moment when “machine thinking” (artificial intelligence) would become more powerful than the human brain. All bets would be off after that happened. Beyond supercomputer calculations and arithmetic, would these thinking machines become “sentient” and know that they know? Would they soon begin to reinvent themselves, or replicate ever more powerful artificial intelligence devices? Would they find humans a nuisance and an impediment to be jettisoned, as the supercomputer HAL 9000 attempted in Arthur C. Clarke’s “2001: A Space Odyssey”?
May Vinge now rest in peace.
Last May, 350 scientists, computer engineers, university scholars and others issued a terse and solemn proclamation about artificial intelligence (A.I.).
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they wrote in their 22-word global alarm.
These men and women were not focusing on fake memes on social media platforms, student cheating on school tests, or A.I.-powered Google search results. They were looking into an imminent future Vinge and others had been writing about: tales of a “post-human” Earth.
“While it is unclear how rapidly A.I. capabilities will progress or how quickly catastrophic risks will grow, the potential severity of these consequences necessitates a proactive approach to safeguarding humanity's future,” reads a passage from a June 2023 newsletter published by the Center for AI Safety, whose founder and CEO, Dan Hendrycks, co-authored the 22-word “extinction warning.”
“As we stand on the precipice of an A.I.-driven future, the choices we make today could be the difference between harvesting the fruits of our innovation or grappling with catastrophe,” the AI Safety newsletter continued.
All the warnings were meant to cover both inadvertent harms from AI and deliberate malicious use by rogue scientists or states.
Man’s final invention
In 1993, Vinge presented a paper entitled “The Coming Technological Singularity” at a NASA-sponsored symposium. An updated version was printed in Stewart Brand’s Whole Earth Review magazine that winter.
In his paper, Vinge quoted from a 1965 academic paper written by British mathematician I.J. Good, who had worked with cryptologist Alan Turing on breaking the German “Enigma” codes during World War II and who much later provided computer science consultation to Stanley Kubrick for his 1968 film, “2001: A Space Odyssey.”
Good (né Gudak) wrote about his fears of a coming “ultra-intelligent machine” that “would far surpass all the intellectual activities of any man however clever.” Good said this moment would lead to an “intelligence explosion,” and the intelligence of man would be left far behind.
“It is more probable than not that, within the 20th century, an ultra-intelligent machine will be built and that it will be the last invention that man need make,” Good wrote.
Good’s machine, Vinge wrote, would not be a “tool” for man, but man’s destruction.
Mind you, these cyber-speculations came years before the internet or the World Wide Web had entered everyday life.
Last year, when responding to the mass warning about threats of extinction, Sam Altman, the CEO of OpenAI, the creator of GPT-4, said, “I think if this technology goes wrong, it can go quite wrong.”
What does this moment of Technological Singularity look like? Where and how will it take place? Will there be any warnings or false alarms?
One prevailing scenario shared by “A.I. Doomsayers” right now is tied to the Silicon Valley race for technological superiority and ever greater profits. While world governments, including the Biden White House, are calling for safety “guardrails,” the A.I. experiments, venture capital investments and unprincipled leadership are neither slowing down nor taking much heed.
Following along with this scenario, an unintended or surprising breakthrough will be made somewhere in a San Jose or Beijing laboratory. A Large Language Model (LLM) network of supercomputers will “ignite” an “intelligence explosion,” and the masses of “thinking” silicon, plastic, electric pulses and plasma nodes will start spinning out their own program code, algorithms and manufacturing instructions.
This is the moment that Vinge said computers would “wake up.” The machines will not be playing games of chess or writing new Hollywood movie scripts. They will be inventing all-new games and new realms of artificial reality that exceed humankind’s current powers of imagination and conception.
The Jesuit philosopher Pierre Teilhard de Chardin called this culmination the Omega Point, the promised moment when God finally reveals himself, an echo of the “Alpha and Omega” of the Bible’s Book of Revelation. But it may also be the moment when man’s machines calculate that they no longer must depend on a human (or a god) to advance their intellectual powers.
Can the Singularity be avoided?
“Suppose we could tailor the Singularity,” Vinge asked in his 1993 paper. “Suppose we could attain our most extravagant hopes. What then would we ask for?”
Vinge did not have an answer. He thought the new age after the Singularity would render individual identities, egos and self-awareness useless and obsolete. He wrote that humankind would flounder with no grasp on either its “happiest dreams” or its “deepest mysteries.”
In his next-to-last paragraph Vinge wrote: “If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility.”
In his final paragraph, Vinge ended with a quote from theoretical physicist Freeman Dyson: “God is what Mind becomes when it has passed beyond the scale of our comprehension.” (Go ahead and pause for a moment.)
Not all A.I. commentators and participants subscribe to the “Doomsayer Scenario.”
WIRED magazine’s Bruce Sterling recently rejected the darker predictions surrounding a runaway Technological Singularity. “There is no commercial model” sufficient or evident to drive enough private profit to feed such large-model engineering, investment or experimentation, he wrote.
“It’s an end-of-history notion, and like most end-of-history notions, it is showing its age,” Sterling told Whole Earth’s Stewart Brand a few years ago. “Like most paradoxes it is a problem of definitional systems involving sci-fi hand-waving around this magic term ‘intelligence.’ If you fail to define your terms, it is very easy to divide by zero and reach infinite exponential speed.”
We don’t know what Sterling or Dyson meant or may have been trying to say. So you are free to figure it out all on your own. Or you can wait for a machine to tell you.
— Rollie Atkinson
4-9-2024