Experts on national defense, the economy, education and the technology behind powerful AI platforms like ChatGPT told Fox News Digital what Americans should be most concerned — and in some cases, excited — about as the technology continues to transform the world.
ChatGPT is an artificial intelligence, or AI, program that can hold conversations on topics from history to religion, generate computer code and even write poetry. The platform was developed by OpenAI, which counts big-name backers such as Microsoft; Tesla CEO Elon Musk was among the company's co-founders.
Heritage Foundation Director of Tech Policy and former Facebook executive Kara Frederick outlined the national security concerns surrounding AI in an interview. She first pointed out that two of America's chief foreign adversaries, China and Russia, agree that AI is at the forefront of the tech wars.
"President Xi said that by 2030 he wanted China to be the preeminent country in AI research and development. Putin has said that whoever controls AI controls the future," Frederick said.
That should be a clear sign to Americans that if chief competitors of the U.S. on the world stage are invested in AI, then this country should be too, Frederick said.
She also argued that individuals "should absolutely be concerned" about AI and mass data collection because the Chinese Communist Party uses "stolen data" with "open source data" to build intelligence "dossiers."
"They do it to their own people," Frederick said of the CCP.
"They basically aggregate a bunch of behavioral and biometric data, and then they try to figure out if this person is a threat to the regime" or if they need to be targeted "for a visit by a police authority," she said.
Frederick continued: "They’ve got iris scans, they've got voice samples on things like 360 captures of the way individuals walk."
On education, conservative Media Research Center Free Speech America Vice President Dan Schneider said that AI threatens to subvert the meaning of education itself.
"The heart of education is to teach people how to think," Schneider said. "But AI programs like ChatGPT threaten to replace human cognition."
"ChatGPT is a complete replacement of [thinking], so that instead of learning how to think, we are just told what to know. This will in fact destroy the whole idea of education, it just turns people into robots that can spew out data points, but not understand the significance of those data points."
On the economy, Harvard Business School Assistant Professor Edward McFowland III was more hopeful. He said that AI, like other disruptive technologies, will likely be more of an "augmentation" tool for workers than a replacement for them.
"I think of it as more of an augmentation story than a replacement story," he said. He added that AI will likely make "certain skills less valuable" because they can be done by technology.
The CEO of the company behind ChatGPT, however, was more forceful in his prediction of AI’s effect on the economy.
"It is going to eliminate a lot of current jobs, that’s true," OpenAI CEO Sam Altman said during an interview with ABC News.
"We can make much better ones. The reason to develop AI at all, in terms of impact on our lives and improving our lives and upside, this will be the greatest technology humanity has yet developed," Altman added.
But both McFowland and Altman agreed that AI-powered chatbots are far from being the first disruptive technology in human history.
"Education is going to have to change," the OpenAI CEO said. "But it’s happened many other times with technology. When we got the calculator, the way we taught math and what we tested students on totally changed."
McFowland also compared AI to the calculator, explaining that the tool allowed engineers to shift their focus from pure number crunching to solving higher-level problems, like "getting a rocket into space."
Fiddler.AI CEO and AI tech expert Krishna Gade dove into the technology behind AI services like ChatGPT.
"Lack of explainability, data privacy issues and built in bias are the three main problems," Gade said.
Companies that use ChatGPT to make decisions, Gade explained, have no way of actually seeing the underlying model or its training data for themselves, which means they cannot explain how it reaches its conclusions.
In other words, it's a "big black box," Gade said.
That opacity also puts individual privacy at risk, Gade argued, because companies do not know how customer data is handled, or even where the model's data comes from.
On AI bias, Gade gave examples of associations and stereotypes that AI can fall for because it relies so heavily on the internet. One example, he said, is that AI might conclude that women are almost always associated with "fashion or beauty," or that certain religions are tied to violence.
MySpace Founder Brad Greenspan predicted a world of specialized AI bots, or "SAI-BOTs," that will serve families and customers over a period of years.
"An SAI-BOT infected with malware or manipulated could take advantage of the trust relationships built up over years of service with a perfect track record, only to suddenly become a trojan horse that knows every way to take advantage of the family it was designed to serve faithfully," he said.
The potential for damage, Greenspan said, would be immense.
"Imagine a hack shifting the software into the ultimate phishing, hacking and emotionally manipulating machine you can imagine," he said. "It has the brain power of Albert Einstein installed plus years of private data, audio recordings of your entire family’s voices and with its speech synthesis standard software."
DataGrade Founder Joe Toscano said that the coming of AI is not an apocalyptic event.
"I believe that we should be optimistic about the future," Toscano said. "It's not going to be the end of the world. I don't believe these machines are going to come take over the government, take over buildings or assault us."
"I think what we need to help people do is learn how to work with these tools instead of being used by them," he added.
The time he saves by using AI in his daily life, Toscano explained, allows him to "spend more time with his family" and "focus on the most meaningful and human aspects of his life."
NASA Jet Propulsion Laboratory Chief Information Officer Chris Mattmann warned about platforms like ChatGPT becoming "deepfake generators."
"Any one of them becomes someone with the capacity, like a media company, to push out a vast amount of content that does not truly exist," Mattman said.
Multiple experts agreed that more "guardrails" were needed to protect Americans from AI.
"We have to be serious about implementing some guardrails on this technology or else we're frankly going to lose the next generation," Frederick, a former executive at Facebook, told Fox News Digital.
"When [AI] gets out of the box," Prof. McFowland said, "it's going to be hard to put back in. We need to protect ourselves from when it goes awry."
As to what exactly those guardrails should be, Prof. McFowland, who has personal experience testing out ChatGPT, said he was not entirely sure.
But the first step, McFowland said, was to have a "level-headed and temperate debate" on the "pros and cons" of potentially allowing AI into our lives.