Fans and foes of emerging generative artificial intelligence platforms like ChatGPT, DALL-E, Google’s Bard and others have strong feelings about the kinds of futurescapes these new tools are likely to foster.
And, according to data gathered in a new Deseret News/Hinckley Institute of Politics survey, Utahns have strong feelings of their own about the advancement of artificial intelligence and what should, or should not, be done to regulate further developments.
Geoffrey Hinton, a British-Canadian scientist and researcher widely considered the “Godfather of AI,” recently quit his job working on Google’s artificial intelligence program so he could speak more openly about his concerns over the new technology. Hinton has said he’s had a change of heart about the potential outcomes of fast-advancing AI after a career focused on developing digital neural networks, designs that mimic how the human brain processes information and have helped catapult artificial intelligence tools to their current capabilities.
“The problem is, once these things get more intelligent than us it’s not clear we’re going to be able to control it,” Hinton said. “There are very few examples of more intelligent things controlled by less intelligent things.”
In a March interview with CBS News, Hinton was asked if AI has the potential to wipe out humanity.
“It’s not inconceivable,” Hinton said. “That’s all I’ll say.”
In an essay published earlier this month titled “Why AI Will Save the World,” Silicon Valley venture capital guru Marc Andreessen argues that the fear of technology rising up to destroy humanity is coded into our culture and the chances of an AI-based program “coming alive” to kill us all is on par with a toaster launching into a murderous rampage.
“My view is that the idea that AI will decide to literally kill humanity is a profound category error,” Andreessen wrote in the June 6 posting. “AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math — code — computers, built by people, owned by people, used by people, controlled by people.
“The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.”
In a statewide poll of registered Utah voters conducted May 22-June 1, 69% of respondents said they were somewhat or very concerned about the increased use of artificial intelligence programming, while 28% said they were not very or not at all concerned about the advancements.
Parsing responses by political affiliation, Republicans and Democrats showed almost identical levels of concern, or lack thereof, over AI advancement, but women respondents logged a higher level of unease with the new tools, at 76%, than men, at 63%.
The polling was conducted by Dan Jones and Associates of 798 registered Utah voters and has a margin of error of plus or minus 3.46 percentage points.
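For readers curious how that figure arises, the reported margin of error is consistent with the standard formula for a simple random sample at a 95% confidence level. The short Python sketch below is illustrative only; it assumes a worst-case 50% response split and ignores any weighting or design effects the pollster may have applied, so the pollster’s exact method may differ slightly.

    import math

    # Margin of error for a simple random sample, in percentage points.
    # n = sample size; p = assumed proportion (0.5 is the worst case,
    # which maximizes the margin); z = 1.96 is the critical value for
    # a 95% confidence level.
    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n) * 100

    # For the Dan Jones and Associates sample of 798 registered voters:
    print(round(margin_of_error(798), 2))  # ~3.47, in line with the reported 3.46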
The concerns over AI reflected by Utahns are being widely felt by political leaders, as well, and efforts to figure out a regulatory response to AI advancements are well underway in the U.S. and around the world.
Last month, the U.S. Senate convened a committee hearing that leaders characterized as the first step in a process that would lead to new oversight mechanisms for artificial intelligence programs and platforms.
Sen. Richard Blumenthal, D-Conn., who chairs the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, called a panel of witnesses that included Sam Altman, the co-founder and CEO of OpenAI, the company that developed ChatGPT, DALL-E and other AI tools.
“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Blumenthal said.
Those past mistakes include, according to Blumenthal, a failure by federal lawmakers to institute more stringent regulations on the conduct of social media operators.
“Congress has a choice now,” Blumenthal said. “We had the same choice when we faced social media, we failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them.
“Congress failed to meet the moment on social media, now we have the obligation to do it on AI before the threats and the risks become real.”
Since Altman co-founded OpenAI in 2015 with backing from tech billionaire Elon Musk, the effort has evolved from a nonprofit research lab with a safety-focused mission into a business, per The Associated Press. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.
Altman readily agreed with committee members that new regulatory frameworks were in order as AI tools in development by his company and others continue to improve by leaps and bounds. He also warned that AI, as it continues to advance, has the potential to cause widespread harm.
“My worst fears are that we, the field of technology industry, cause significant harm to the world,” Altman said. “I think that can happen in a lot of different ways. I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that.
“We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”
Utahns appear to be of mixed sentiment when it comes to upping the ante on government regulation of AI tools. While a plurality of poll participants, 43%, said they’d like to see regulation increased, 19% said a decrease in AI regulation was in order and 26% said the status quo should be maintained.
Republican and Democratic respondents were about on par in supporting an increase in government regulation of AI, but more Republicans than Democrats, 22% to 12%, would like to see regulation decreased.
When it comes to which level of government should engage in regulatory oversight of artificial intelligence advancements, a challenge reflected in the current hodgepodge of regulatory efforts by both state and federal lawmakers, a majority of poll participants, 53%, said the federal government should be in charge. And while 22% of respondents said state government should oversee AI, 17% said government should not be involved in regulating tech companies working on artificial intelligence at all.
Hinton and Altman both signed on to a single-sentence open letter issued by the nonprofit Center for AI Safety last month that’s earned the support of a wide-ranging group of distinguished scientists, academics and tech developers.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.
But Andreessen believes a light-touch regulatory approach is the best way forward, noting that some global players will likely flout any supranational efforts to build protections through regulation.
Instead, Andreessen said the best path forward is one in which both big AI players and new startups in the sector are allowed to “build AI as fast and aggressively as they can.” And he sees public-private partnerships as the best tool both to prepare for the inevitable misuse of advanced artificial intelligence technology and to put those advancements to their best and highest uses.
“To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities,” Andreessen wrote. “This shouldn’t be limited to AI-enabled risks but also more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.”