I was too young to be at the dawn of the Atomic Age when, in 1942, a team of scientists achieved the first controlled nuclear chain reaction, splitting atoms in two and unleashing the new superpower of nuclear fission and a seemingly limitless source of energy. (Actually, I had not even been born.)
But I was an up-close eyewitness 30 years later when teams of scientists spliced the first DNA molecules from living genes and inserted them into other living organisms. And now, still alive, I am witnessing mankind’s most treacherous twisting of nature’s laws: the coding of machines with artificial intelligence (A.I.) that could split apart the master keys to our universe, transmute our basic language and change the definition of what it means to be human.
Once again, we face an Aladdin-like moment with our hands rubbing a magical bottle. Once again we are releasing a Genie that promises unimaginable life-giving gifts — but comes paired with the equal powers of self-annihilation. We already know this Genie can never be put back into the bottle. We are left to fear that we have doomed ourselves to a future that our past has not prepared us for.
With all three of these bottles and Genies, there have been worldwide warnings from scientists, researchers, engineers and policy makers. Each time we rubbed the bottle and released new godlike powers there have been calls for moratoriums, government regulation and higher spiritual guidance. So far, in all cases, the lure of profits, prowess and politics has overruled the concerns of morality, higher wisdoms and the preservation of our species.
Today’s controversy over A.I. is still in its earliest stages. Some of A.I.’s leading innovators have written a series of open letters and newspaper commentaries full of self-confessions and urgent calls. “I think if this technology goes wrong, it can go quite wrong,” said Sam Altman, the CEO of OpenAI, the creator of GPT-4.
A recent open letter by A.I.’s most prominent executives, researchers and computer engineers has signaled an ultimate warning. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” read the one-sentence warning by 350 A.I. pioneers, including two men considered to be the “godfathers” of the intelligence-splitting computer-based algorithms and “large language models.”
Hit the brakes
There have been multiple hearings in Congress and an urgent White House meeting with President Biden. But, so far, there have been no controls or guardrails erected around the rapid advancement of A.I.
It all sounds eerily reminiscent of the urgent warning by the world’s leading microbiologists that I witnessed in the mid-1970s. The public alarms over gene splicing marked one of the first times the scientific community itself invited public scrutiny of its work. The scientists, though not unanimously, did not trust themselves alone to open the Genie Bottle that held the tools of recombinant DNA. They foresaw a moment when humans would cross into a new universe where man-made living organisms could replace, mutate or eliminate natural, “god-made” organisms.
Most likely the first atomic scientists did not sound their own urgent public warnings because the world was at war and it was feared that Germany might be the first to build a nuclear bomb. Famously, the United States’ atom-splitting work was so secret that Harry Truman was not told of the bomb’s existence until he became president in April 1945, just months before his decision to drop it on the Japanese cities of Hiroshima and Nagasaki.
The radioactive mushroom clouds and the images of burning flesh at Hiroshima shocked the world’s population and its leaders into rapid action to put boundaries around nuclear fission. In 1957, the United Nations created the International Atomic Energy Agency (IAEA), which continues to this day to contain the spread and use of nuclear power and devices. (The IAEA is currently conducting field inspections at Ukraine’s Zaporizhzhia nuclear power plant, which is occupied by Russian forces.)
In 1953, U.S. President Dwight Eisenhower launched the Atoms for Peace program, and a series of weapons treaties during the U.S.–Soviet Cold War has so far kept mankind from blowing itself up, keeping all those sci-fi stories about an endless nuclear winter in the fiction category.
Beginning in the early and mid-1970s, biological scientists including Paul Berg, Herbert Boyer and Stanley Cohen, among several others, successfully produced hybrid strands of DNA spliced together from different organisms, which Berg called “recombinant DNA.” Almost immediately after his successful laboratory work, Dr. Berg began to circulate questions and warnings about this new gene-splicing technology.
In 1975, following a smaller summit at the same location, Dr. Berg and his associate Dr. Maxine Singer convened a conference on recombinant DNA at the Asilomar conference grounds on California’s Monterey Peninsula. Among the 140 invitees were lawyers and practicing journalists. The journalists were required to embargo all their news coverage until the conference ended three days later.
The key recommendation from the conference was a call for international guidelines on the experimental research and its applications. U.S. Senate hearings were held, chaired by Senator Ted Kennedy. Risk-assessment experiments were conducted at Fort Detrick, Maryland, in maximum-containment laboratories first used for the nation’s biological and chemical warfare experiments. The congressional hearings and the Fort Detrick laboratory attracted protesters, some carrying placards that asked, “Who Should Play God?” The mayor of Cambridge, Massachusetts, signed a moratorium on Harvard and M.I.T. genetic engineering work, invoking fears that the scientists might create a new Frankenstein monster.
At Fort Detrick, the risk assessment work was delayed for several months when local attorney Ferdinand Mack won a temporary restraining order claiming the experiments represented an “unacceptable gamble” and “potential threat” to his infant son’s well-being. A federal judge ultimately dismissed Mack’s claims.
By 1976, the National Institutes of Health, under the direction of Dr. Donald S. Fredrickson, had issued guidelines for recombinant DNA research that remain in effect to this day, following many amendments as the science and biotechnology have continued to advance. The NIH rules still prohibit certain human experiments and limit many other procedures. At the same time, a very lucrative and productive field of biotechnology and genetic engineering has prospered. In 1980, Cohen and Boyer received the first U.S. patent on the gene-splicing process itself, a technique that Boyer’s company, Genentech, was among the first to commercialize.
At the conclusion of the Asilomar conference, Dr. Fredrickson drew a “moral” from the historic moment. “Faced with real questions of theoretical risks, the scientists paused and then decided to proceed with caution,” he wrote. “It will occur again in some areas of scientific research, and the initial response must be the same.”
How do you spell G-O-D?
The fears over artificial intelligence go beyond young students having machines do their homework for them or having computers write poetry, TV scripts or various pieces of art. The chief fear is not about a massive loss of jobs. There have been many inventions and advancements throughout our industrial and digital ages that have eliminated old jobs and created new ones. This is just another one of those times.
The main extinction fear being raised by hundreds of A.I. executives is that their invented A.I. computer tools will soon take control over all our language. A.I. will set new definitions of reality (already confused) and will control our money, political debates, legal discourse and even the future coding and algorithms of all computers.
We know an atomic bomb when we see one and we have learned to label our foods with GMO stickers. But we will not be able to trust our internet portals, social media platforms or global search engines to tell us the truth the way poets, philosophers, statesmen and clergy have done for us. A.I. is already dominating our political debates, fueling our culture wars and controlling our robots.
Science fiction novels have warned us for decades about the coming days when machines would take over our universe. We all remember HAL 9000, the villainous computer who (pick a pronoun) tried to kill his human crew in Arthur C. Clarke’s “2001: A Space Odyssey.”
Other sci-fi narratives have told us about how our brains would be implanted with microchips and convert us to cyborgs. In “The Matrix” movies, people were connected to a central computer network that controlled their attention spans and focus, their likes and dislikes, their consumer purchases, who they picked as “friends” and colored their political preferences. (Sound familiar?)
We have been living in worlds heavily influenced by A.I. since at least the spread of social media platforms like MySpace and Facebook. We have involuntarily handed over the vast majority of our access to the Internet to Google, Microsoft and Apple. Just like the mega-corporations that own and control the wires and Wi-Fi feeds that connect to our homes and businesses, A.I.-run Internet companies control our news feeds and the other online sounds and images we “experience.” Google, TikTok, Facebook and the others invade our privacy, convert us into sets of data points and place us each in our own safe and predictive viral cocoon.
To enter most websites we must click a box that says, “I am not a robot.” And, then a robot lets us in. As humans, we “Google” more times a day than we do anything else, including eating, talking, hugging, playing or being mindful.
Most alarming about the recent warning that we now “risk extinction” is the accompanying admission that we are still in the early and primitive days of the A.I. age and universe. A recent survey found that a majority of A.I. leaders and researchers admitted they were “doomsayers.”
“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” said Dan Hendrycks, the director of the Center for AI Safety. Hendrycks and others point to fields where A.I. “is improving so rapidly that it has already surpassed human-level performance.”
Some fear A.I. could “rapidly eat the whole of human culture,” even replacing the world’s holy books with new cults. “By 2028, the U.S. presidential race might no longer be run by humans,” one group of A.I. leaders has warned. That warning may be too late. Already, Trump’s and other political campaigns are deploying “deepfake” videos and social media posts to attack opponents and alter reality.
So far, in all the early tests of A.I.’s ultimate powers, none of the scenarios are proving positive for the survival of democracy, free will, fair markets or true intelligence.
For millenniums before the splitting of atoms or the recombination of DNA molecules, humans have been constantly redefined by their own inventions and new tools. First there was fire, then agriculture, then language and moveable type. Humans evolved and stood more erect, grew paunchy, then lazy, and now find themselves lacking in sentient thought, outmaneuvered by their own tool. We refrain from asking, “Who is the hammer and who is the nail?”
Can we still call ourselves human if we allow artificial intelligence to create an artificial reality (a metaverse) for us to play in? Perhaps. But if we don’t know when the game is over, or how to turn it off, or how to turn the switch back to “Real Reality,” is that when we learn what today’s A.I. doomsayers mean by “extinction”?
— Rollie Atkinson
6-6-2023
From a Strong Towns email.
It is the embedded quote that I think is relevant. There is no ‘designer’ outside the evolved collective consciousness of our species (sic). That ‘collective’ isn’t.
https://ortto.app/-/m/view-online?k=C3N0cm9uZ3Rvd25zAGD1xHS1tbukR0QAYGTpCQUOWEzivMjH7i0ChGEW17Ak9F5w88Lr7A
“
Asia: My colleague Edward Erfurt shared this article with the team, and I audibly yelped when I got this bingo moment:
In order to make transportation safer, Ederer says, engineers and policymakers can’t expect much progress from individuals changing their behavior. Instead, the transportation system should be altered to reduce the overall risk people face when using it.
“We don’t ask everyone to filter their water at their home,” Ederer says. “We build it into the system.”
It’s so rare to see this conclusion outside of the Strong Towns orbit. It’s one we hammer at every relevant opportunity and the thesis of every Crash Analysis Studio, including the one that happened earlier today. Modifying design, rather than targeting individual behavior through visual campaigns, is infinitely more effective.
“
AI has the potential for destroying mankind as we know it. The first danger will be the dumbing down of our youth, who find it easier to have an AI program do their homework for them. Taken to an extreme, students will no longer have to learn their subjects, but will just let AI do it for them. Soon we end up with a generation that doesn’t know how to think or create. New ideas become non-existent. No more breakthrough inventions to solve the world’s problems.
The second danger will be, as heard on 60 Minutes, that the AI programs will “learn” from the internet, including all the conspiracy theories. These will get passed back to AI users as “fact,” with no discrimination as to the damage they can do. An example would be an AI bot that starts promoting the idea that the 2020 elections were stolen... it isn’t a big leap from there to see our fragile democracy destroyed.
Just say no to rampant use of AI... and leave it in the hands of researchers who are working to improve our lives.
Frank Mayhew