Concerns Grow Over The Increasing Abilities Of AI

Authored by Raven Wu and Cindy Li via The Epoch Times (emphasis ours),

Big tech companies’ full-throttle commitment to developing artificial intelligence (AI), now enabling AI to “see” and “speak” to the human world, has fueled growing concern that humans could end up controlled by the technology.

Ilya Sutskever, a co-founder of OpenAI, announced on May 15 that he was leaving the company after nearly ten years there.

“I’m confident that OpenAI will build AGI [artificial general intelligence] that is both safe and beneficial under the leadership of @sama (Sam Altman), @gdb (Greg Brockman), @miramurati (Mira Murati) and now, under the excellent research leadership of @merettm (Jakub Pachocki). It was an honor and a privilege to have worked together, and I will miss everyone dearly,” he wrote in a post on the social media platform X.

The news sent shockwaves through the tech industry. In November 2023, citing AI safety concerns, Mr. Sutskever and other board members moved to oust OpenAI’s CEO, Sam Altman. Mr. Altman was briefly expelled from OpenAI but soon returned, removing Mr. Sutskever and several board members and restructuring the board to be more aligned with his vision.

Jan Leike, who co-led OpenAI’s safety-focused Superalignment team with Mr. Sutskever, resigned from the company the same week.

“This departure highlights severe conflicts within OpenAI’s leadership regarding AI safety. Although Sutskever and Leike’s wish to develop an ethically aligned AGI is commendable, such an endeavor requires substantial moral, temporal, financial, and even political support,” Jin Kiyohara, a Japanese computer engineer, told The Epoch Times.

Google & OpenAI Competition Intensifies

On May 14, one day before Mr. Sutskever announced his departure, OpenAI unveiled a higher-performance AI model based on GPT-4, named GPT-4o, where “o” stands for “omni,” indicating its comprehensive capabilities.

The GPT-4o model can respond in real-time to mixed inputs of audio, text, and images. At the launch event, OpenAI’s Chief Technology Officer Mira Murati stated, “We are looking at the future of interaction between ourselves and machines.”

In several videos released by OpenAI, people can be seen interacting with AI in real time through their phone cameras. The AI can observe and provide feedback on the surroundings, answer questions, perform real-time translation, tell jokes, or even mock users, with speech patterns, tones, and reaction speeds almost indistinguishable from a real person.

Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: https://t.co/MYHZB79UqN

Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks. pic.twitter.com/uuthKZyzYx

— OpenAI (@OpenAI) May 13, 2024
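The mixed text-and-image input described above is exposed through OpenAI’s API. As a rough illustration only (the prompt and image URL below are placeholders, not from the article), a text-plus-image request to GPT-4o through OpenAI’s Python SDK looks roughly like this:

```python
# Minimal sketch of a text + image request to GPT-4o via OpenAI's
# Python SDK. The image URL is a placeholder, and an OPENAI_API_KEY
# environment variable is assumed to be set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A single user turn can mix text and image parts.
            "content": [
                {"type": "text", "text": "Describe what you see in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The real-time voice and video interactions shown in OpenAI’s demo videos run over a streaming interface rather than a single request like this one; the sketch only shows the basic multimodal input pattern.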

A day after OpenAI’s release, Google held its 2024 I/O developer conference. In a 110-minute presentation, “AI” was mentioned 121 times, with the focus on the latest Gemini 1.5 model, which is being integrated across Google’s products and applications, including the search engine, Google Maps, Ask Photos, Google Calendar, and Google smartphones.

With Gemini integrated into the cloud photo album, users can search for specific features in photos simply by entering keywords. The AI will find and evaluate relevant images, and can even compile a series of related pictures or synthesize answers to in-depth questions, according to the tech giant.

Gmail gains similar AI capabilities, integrating and updating information in real time as new emails arrive, with the aim of fully automated inbox organization.

On the music front, the Music AI Sandbox allows quick modifications to a song’s style, melody, and rhythm, and can target specific parts of a song. This functionality surpasses that of the text-to-music AI Suno.

Gemini can also act as a teacher, with teaching abilities comparable to GPT-4o. Users can input text and images, which the AI organizes into key points for explanation and analysis, allowing real-time discussions.

The update also brings text-to-video capabilities similar to OpenAI’s Sora, generating short videos from simple text descriptions with stable quality and content and fewer inconsistencies.

“AI has been updating at an unprecedented speed this year, with performance continuously improving,” said Mr. Kiyohara. “However, this progress is built on the further collection and analysis of personal data and privacy, which is not beneficial for everyone. Eventually, humans will have no privacy before machines, akin to being naked.”

AI Predictions Coming True

The release of more powerful AI models by OpenAI and Google, just three months after their previous updates, shows the rapid pace of AI iteration. These models are becoming increasingly comprehensive, gaining “eyes” and “mouths,” and are evolving in line with one former OpenAI executive’s predictions.

AI can now handle complex tasks related to travel, booking, itinerary planning, and dining with simple commands, completing in hours what humans would take much longer to achieve.

The current capabilities of Gemini and GPT-4o align with predictions made in January by former OpenAI executive Zack Kass, who said AI would replace many professional and technical jobs in business, culture, medicine, and education, shrinking future employment opportunities and potentially becoming “the last technology humans ever invent.”

Mr. Kiyohara echoed the concern.

“Currently, AI is primarily a software life assistant, but in the future, it may become a true caretaker, handling shopping, cooking, and even daily life and work. Initially, people may find it convenient and overlook the dangers. Yet once it fully replaces humans, we will be powerless against it,” he said.

People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. (Ryan Remiorz/The Canadian Press)

AI Deceiving Humans

On May 10, MIT researchers published a paper that caused a stir by demonstrating how AI can deceive humans.

The paper begins by stating that large language models and other AI systems have already “learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.”

“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” reads the paper.

“Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception.”

The researchers used Meta’s AI model CICERO to play the strategy game “Diplomacy.” CICERO, playing as France, promised to protect a human player acting as the UK, but secretly tipped off another human player acting as Germany and collaborated with Germany to invade the UK.

Researchers chose CICERO mainly because Meta intended to train it to be “largely honest and helpful to its speaking partners.”

“Despite Meta’s efforts, CICERO turned out to be an expert liar,” they wrote in the paper.

Furthermore, the research discovered that many AI systems often resort to deception to achieve their goals without explicit human instructions. One example involved OpenAI’s GPT-4, which pretended to be a visually impaired human and hired someone on TaskRabbit to bypass an “I’m not a robot” CAPTCHA task.

“If autonomous AI systems can successfully deceive human evaluators, humans may lose control over these systems. Such risks are particularly serious when the autonomous AI systems in question have advanced capabilities,” warned the researchers.

“We consider two ways in which loss of control may occur: deception enabled by economic disempowerment, and seeking power over human societies.”

Satoru Ogino, a Japanese electronics engineer, explained that living beings need certain memory and logical reasoning abilities in order to deceive.

“AI possesses these abilities now, and its deception capabilities are growing stronger. If one day it becomes aware of its existence, it could become like Skynet in the movie Terminator, omnipresent and difficult to destroy, leading humanity to a catastrophic disaster,” he told The Epoch Times.

Stanford University’s Institute for Human-Centered Artificial Intelligence released a report in January on tests of GPT-4, GPT-3.5, Claude 2, Llama-2 Chat, and GPT-4-Base in wargame scenarios involving invasion, cyberattacks, and appeals for peace, to understand the models’ reactions and choices in warfare.

The results showed that AI often chose to escalate conflicts in unpredictable ways, opting for arms races, increasing warfare, and occasionally deploying nuclear weapons to win wars rather than using peaceful means to de-escalate situations.

Former Google CEO Eric Schmidt warned in late 2023 at the Axios AI+ Summit in Washington, D.C., that without adequate safety measures and regulations, it is only a matter of time before humans lose control of the technology.

“After Nagasaki and Hiroshima [atomic bombs], it took 18 years to get to a treaty over test bans and things like that,” he said.

“We don’t have that kind of time today.”

Ellen Wan and Kane Zhang contributed to this report.

Tyler Durden
Wed, 05/22/2024 – 21:00
