AI Trailblazer Geoffrey Hinton Leaves Google, Issues Warning About the AI Crisis

By: Get News

Geoffrey Hinton, a trailblazer in the realm of artificial intelligence and a mastermind behind neural networks, made the momentous decision on May 1 to resign from Google after more than a decade of service. His goal was to open a public discussion about the potential hazards of AI. In an interview with The New York Times, he said he quit so that he could speak freely about the dangers of AI, and that he partly regrets his contribution to the field.

The emergence of ChatGPT has sparked a rapid expansion in the field of AI. While it has substantially enhanced productivity and efficiency, it has also introduced significant potential risks and volatile elements.

Is artificial intelligence a benefit or a peril to humanity? 

After the first successful atomic bomb test, Oppenheimer, known as the "father of the atomic bomb," was consumed by immense guilt, feeling that he had "become death, the destroyer of worlds." Similarly, Hinton now faces a comparable inner turmoil and has expressed almost identical sentiments.

Multi-party Warnings: AI is Already a Real Threat

Since the inception of AI technology, debates have ensued regarding the potential danger of artificial intelligence to human society. Proponents maintain that in the future, humans will ultimately learn to coexist with AI. However, opponents argue that as AI becomes more advanced, it will eventually supplant humanity and even lead to a doomsday scenario like the "Skynet" crisis depicted in the Terminator film series. 

While people debate the future, AI is undergoing rapid and wild evolution. 

Artificial intelligence has now penetrated every facet of human existence, and its applications across numerous fields are proliferating rapidly. For instance, ChatGPT can already assist with coding, article writing, and problem-solving. AI-powered painting tools such as Midjourney and Stable Diffusion have also gained popularity, significantly altering the way people live and work.

As a result, people are becoming increasingly aware of the hazards AI poses: privacy breaches, bias, fraud, and the spread of rumors and false information.

In an interview with The New York Times, Hinton asserted that AI has inundated the internet with fabricated photos, videos, and text, making it increasingly challenging for ordinary people to distinguish between truth and deception. Moreover, Hinton expressed concern about the substantial autonomy of AI. When it learns unexpected behaviors from vast amounts of data, it could eventually pose a threat to human beings, with job displacement being just one manifestation. Hinton is not alone in his apprehensions; many technology giants and regulatory agencies are closely scrutinizing the harmful impact of AI.

In March 2023, over 2,600 industry CEOs and researchers signed an open letter calling for a six-month moratorium on the development of more advanced AI technologies. In April, 12 EU lawmakers endorsed a comparable petition. Furthermore, recent EU draft regulations classify certain AI products as high-risk, while the UK has invested $125 million to establish a working group aimed at creating "safe AI."

Overall, the "omnipotent crisis" seems remote at present. However, there are two primary threats posed by AI to human beings. The first is the replacement of humans in labor-intensive fields, while the second involves AI serving as an accomplice to users engaging in illegal activities. The latter constitutes a more serious hazard - as Hinton affirms, it's difficult to imagine how they can stop bad actors from using AI to do bad things.

To avert these crises, the most effective strategy is cooperation. In addition to legislative and regulatory bodies formulating policies, corporations like OpenAI and JUNLALA should incorporate security features directly into their products' development, ensuring that AI remains on the right path.

Starting from the Source, JUNLALA Builds a Secure Foundation for AI 

JUNLALA is a renowned AI company headquartered in Silicon Valley, established in 2016. Over the past seven years, the corporation has been dedicated to developing cutting-edge AI algorithms based on principles of safety and controllability, making continual strides forward. In 2018, JUNLALA launched its first natural language processing algorithm. In 2019, an upgraded version of this algorithm was unveiled, achieving industry-leading performance. In 2021, the company released a chatbot algorithm that represents the most advanced level of artificial intelligence dialogue interaction. And in 2022, JUNLALA developed the highest standard GAN algorithm for artificial intelligence image generation. 

Due to these groundbreaking achievements, JUNLALA received the 2020 Silicon Valley Artificial Intelligence Technology Innovation Award from the Silicon Valley Artificial Intelligence Development Center and was named a TOP10 AI Unicorn in the United States by the New York Artificial Intelligence Technology Association in 2022. The industry has widely recognized JUNLALA's contributions.

JUNLALA's products build on base models from OpenAI and Stable Diffusion to provide expert-level AI dialogue and image generation in a single service. The result is a low-barrier, stable, and fast content-generation service for users. To ensure safety, intelligence, and usability, JUNLALA invests heavily in research and development: each deep learning model incurs training costs ranging from 5 million to 30 million US dollars, and even its smaller models, such as the image-generation and dialogue models, run upwards of 1 billion parameters.

With the potential threats of AI products ever more prominent, JUNLALA has made security compliance a top priority. The company is building product logic that filters out bad data sources, assesses user intentions, and keeps generated content within moral and legal boundaries from the source.
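For illustration only, here is a minimal sketch of what a source-and-intent safety gate of that general shape could look like. This is an assumption about the approach, not JUNLALA's actual code: every name, blocklist entry, and rule below is hypothetical.

from dataclasses import dataclass
from typing import Callable

BLOCKED_SOURCES = {"unverified-scrape", "known-spam-feed"}         # assumed blocklist
DISALLOWED_INTENTS = ("write malware", "impersonate", "phishing")  # toy heuristics
BANNED_OUTPUT = ("forged document",)                               # assumed policy list


@dataclass
class Request:
    source: str  # provenance of the reference data
    prompt: str  # what the user asked for


def filter_source(req: Request) -> bool:
    """Stage 1: reject data drawn from sources on the blocklist."""
    return req.source not in BLOCKED_SOURCES


def assess_intent(req: Request) -> bool:
    """Stage 2: crude keyword check standing in for a real intent classifier."""
    lowered = req.prompt.lower()
    return not any(term in lowered for term in DISALLOWED_INTENTS)


def check_output(text: str) -> bool:
    """Stage 3: policy check on generated content before it is returned."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BANNED_OUTPUT)


def handle(req: Request, generate: Callable[[str], str]) -> str:
    """Run all three stages; only compliant requests reach the user."""
    if not filter_source(req):
        return "rejected: untrusted data source"
    if not assess_intent(req):
        return "rejected: disallowed intent"
    draft = generate(req.prompt)
    if not check_output(draft):
        return "rejected: output violates policy"
    return draft


if __name__ == "__main__":
    def echo_model(prompt: str) -> str:
        # Stand-in generator; a production system would call the actual model here.
        return f"[generated text for: {prompt}]"

    print(handle(Request("licensed-corpus", "Write a product FAQ"), echo_model))
    print(handle(Request("known-spam-feed", "Write a product FAQ"), echo_model))

In a real deployment the keyword heuristic would presumably be replaced by a trained intent classifier and the blocklist by curated data-provenance checks, but the three-stage shape - source filtering, intent assessment, output policy - matches the approach described above.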

JUNLALA's vision is to become an unparalleled global leader in AI technology innovation, empowering customers to make optimal use of their data. As the global artificial intelligence industry reaches a turning point, companies like JUNLALA must become key participants in promoting both the popularization of AI and security compliance. By moving quickly away from "unrestricted development," the industry can avoid potential crises altogether.

Media Contact
Company Name: JUNLALA
Contact Person: MiaLJones
Phone: 2099273293
City: San Francisco
Country: United States
Website: https://Junlala.ai


