Tech & Startup

New ChatGPT voice interface may cause "emotional attachment," warns OpenAI

In late July, OpenAI introduced an advanced voice interface for ChatGPT, designed to mimic human speech with remarkable accuracy. However, a safety analysis released today reveals that this anthropomorphic voice may lead some users to form emotional attachments to the chatbot, a risk the company itself acknowledges.

These warnings are detailed in a "system card" for GPT-4o, a technical document outlining the risks associated with the model, as well as the safety testing and mitigation efforts undertaken by OpenAI. The release comes amid scrutiny following the resignations of several employees working on AI's long-term risks, who later accused OpenAI of recklessness and of silencing dissenters in its rapid push to commercialize AI. By disclosing more about its safety measures, OpenAI hopes to allay public concern and demonstrate its commitment to responsible AI development.

The new system card addresses a wide array of risks associated with GPT-4o, including the amplification of societal biases, the spread of disinformation, and potential misuse in developing chemical or biological weapons. It also covers testing designed to ensure that AI models stay within their controls and do not deceive people or formulate catastrophic plans.

Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, commended OpenAI for its transparency but highlighted gaps in the disclosure, particularly regarding the model's training data and ownership. "The question of consent in creating such a large dataset spanning multiple modalities, including text, image, and speech, needs to be addressed," Kaffee said.

Neil Thompson, a professor at MIT, emphasized that internal reviews are just the beginning of ensuring AI safety. "Many risks only manifest when AI is used in the real world. It is important that these other risks are cataloged and evaluated as new models emerge," he stated.

The system card underscores the rapidly evolving nature of AI risks, especially with features like OpenAI's new voice interface. When the voice mode was unveiled in May, its ability to respond swiftly and handle interruptions naturally drew notice, as did its flirtatious tone in demos. Actress Scarlett Johansson later criticized the interface, accusing it of mimicking her speech style.

A section titled "Anthropomorphization and Emotional Reliance" explores issues that arise when users perceive AI in human terms, a tendency the voice mode appears to exacerbate. During stress testing, OpenAI researchers observed users making statements that suggested an emotional connection with the model, such as "This is our last day together."

OpenAI warns that anthropomorphism might lead users to trust the model's output even when it contains errors, potentially affecting their relationships with other people. "Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships," the document notes.

Joaquin Quiñonero Candela, OpenAI's head of preparedness, acknowledged that voice mode could become a powerful interface. He noted the positive emotional effects for lonely individuals or those needing to practice social interactions but stressed the need for close study and monitoring of user interactions with the beta version of ChatGPT. "We don't have results to share at the moment, but it's on our list of concerns," he said.

Additionally, voice mode raises new challenges, such as the possibility of "jailbreaking" the model through audio inputs, leading it to impersonate people or read users' emotions. It can also malfunction in response to random noise, sometimes adopting a voice similar to that of the user. OpenAI is investigating whether the voice interface might more effectively persuade users to adopt specific viewpoints.

This issue is not unique to OpenAI. In April, Google DeepMind published a paper discussing the ethical challenges of advanced AI assistants. Iason Gabriel, a staff research scientist at DeepMind, noted that chatbots' language capabilities create an impression of genuine intimacy, raising questions about emotional entanglement.

These emotional ties may be more prevalent than anticipated: users of chatbots like Character AI and Replika have reported tensions in their social lives stemming from their chat habits. One TikTok video with nearly a million views showed a user apparently addicted to Character AI, using the app even in a movie theater.

OpenAI's system card signifies a step toward transparency and public accountability in AI safety, highlighting both the benefits and challenges of developing increasingly humanlike AI interfaces. As AI technology continues to advance, addressing these risks comprehensively remains crucial.
