Google Warns That AI Assistant Users Could Develop an 'Emotional Attachment' to Them - Decrypt

Virtual personal assistants powered by artificial intelligence are becoming ubiquitous across technology platforms, with every major tech firm adding AI to its services and dozens of specialized services tumbling onto the market. While the assistants are immensely useful, researchers from Google say humans could become too emotionally attached to them, leading to a host of negative social consequences.

A research paper from Google’s DeepMind AI research laboratory highlights the potential of advanced, personalized AI assistants to transform various aspects of society, saying they “could radically alter the nature of work, education, and creative pursuits as well as how we communicate, coordinate, and negotiate with one another, ultimately influencing who we want to be and to become.”

Of course, that outsized impact could prove to be a double-edged sword if AI development continues to accelerate without thoughtful planning.

One key risk? The formation of inappropriately close bonds, which could be exacerbated if the assistant is presented with a human-like representation or face. “These AI agents may even profess their supposed platonic or romantic affection for the user, laying the foundation for users to form long-standing emotional attachments to AI,” the paper says.

If left unchecked, such attachment could lead to a loss of autonomy for users and the loss of social ties, because the AI could come to replace human interaction.

This risk is not purely theoretical. Even when AI was in a relatively primitive state, an AI chatbot proved influential enough to convince a user to commit suicide after a long chat back in 2023. Eight years ago, an AI-powered email assistant named “Amy Ingram” was realistic enough to prompt some users to send love notes and even attempt to visit her at work.

Iason Gabriel, a research scientist on DeepMind's ethics research team and a co-author of the paper, did not respond to Decrypt's request for comment.

In a tweet, however, Gabriel warned that “increasingly personal and human-like forms of assistant introduce new questions around anthropomorphism, privacy, trust, and appropriate relationships with AI.”

Because “millions of AI assistants could be deployed at a societal level, where they will interact with one another and with non-users,” Gabriel said he believes more safeguards and a more holistic approach to this new social phenomenon are needed.

The research paper also discusses the importance of value alignment, safety, and misuse in the development of AI assistants. Even though AI assistants could help users improve their well-being, enhance their creativity, and optimize their time, the authors warn of further risks: misalignment with user and societal interests, the imposition of values on others, use for malicious purposes, and vulnerability to adversarial attacks.

To address these risks, the DeepMind team recommends developing comprehensive assessments for AI assistants and accelerating the development of socially beneficial ones.

“We currently stand at the beginning of this era of technological and societal change. We therefore have a window of opportunity to act now, as developers, researchers, policymakers, and public stakeholders, to shape the kind of AI assistants that we want to see in the world.”

AI misalignment can be mitigated through Reinforcement Learning from Human Feedback (RLHF), which is used to train AI models. Experts like Paul Christiano, who ran the language model alignment team at OpenAI and now leads the non-profit Alignment Research Center, warn that improper management of AI training methods could end in catastrophe.
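At the heart of RLHF is a reward model trained on human preference data, which then steers the language model's updates. The sketch below illustrates just that preference-modeling step in PyTorch, using made-up toy embeddings and a tiny stand-in network rather than any real assistant or dataset; it is a minimal illustration of the idea, not DeepMind's or OpenAI's actual pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: maps a fixed-size "response embedding" to a scalar score.
reward_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical preference data: embeddings of responses a human rater
# preferred ("chosen") versus the paired responses they rejected.
chosen = torch.randn(64, 8)
rejected = torch.randn(64, 8)

for step in range(200):
    r_chosen = reward_model(chosen)      # score for each preferred response
    r_rejected = reward_model(rejected)  # score for each rejected response
    # Bradley-Terry preference loss: trains the reward model to rank
    # human-preferred responses above rejected ones.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full RLHF loop, the scores from a reward model like this one would then drive a policy-optimization step (commonly PPO) that nudges the assistant toward responses humans rate highly.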

“I think maybe there’s something like a 10-20% chance of AI takeover, [with] many [or] most humans dead,” Christiano said on the Bankless podcast last year. “I take it quite seriously.”

Edited by Ryan Ozawa.
