Classic English Curio

In Classic English Curio (《經典多寶格》), the teachers and consultants of Classic American English (經典美語) share information and experience on study-abroad exams (GRE, GMAT, TOEFL, IELTS, SAT, ACT), studying abroad, English learning, and domestic English tests in Taiwan.

BBC 6 Minute English: Can AI have a mind of its own?

As artificial-intelligence research advances, chatbots that appear intelligent often fool their human users into believing that the party they are talking to can think. But is that really the case? Listen to BBC 6 Minute English discuss this topic.

Can AI have a mind of its own?

The BBC 6 Minute English episode broadcast on 26 January 2023 discusses artificial-intelligence programs.

Can artificial intelligence become conscious? Presenters Neil and Sam discuss the question and hear from an expert who argues that AI is not as intelligent as we sometimes think. As usual, the presenters also teach listeners some related vocabulary and expressions.

This week's question

What happened to Google engineer Blake Lemoine is strangely similar to the 2013 Hollywood movie Her, starring Joaquin Phoenix as a lonely writer who talks with his computer, voiced by Scarlett Johansson. But what happens at the end of the movie? Is it:
a) the computer comes to life?
b) the computer dreams about the writer? or,
c) the writer falls in love with the computer?

Vocabulary

chatbot 聊天機器人
a computer program designed to have conversations with humans over the internet

cognitive 認知的
connected with the mental processes of thinking, knowing, learning and understanding

wishful thinking 一廂情願的想法
thinking that something which is very unlikely to happen might happen one day in the future

anthropomorphise 擬人化
to treat an animal or object as if it were human

blindsided 傻眼
unpleasantly surprised

get/be taken in (by) someone 被某個人所迷惑
to be deceived or tricked by someone

Transcript (English and Chinese)

BBC 6 minute English – Can AI have a mind of its own?

Click here for the English transcript

Sam
Hello. This is 6 Minute English from BBC Learning English. I’m Sam.

Neil
And I’m Neil.

Sam
In the autumn of 2022, something strange happened at the Google headquarters in California’s Silicon Valley. A software engineer called Blake Lemoine was working on the artificial intelligence project ‘Language Models for Dialogue Applications’, or LaMDA for short. LaMDA is a chatbot – a computer programme designed to have conversations with humans over the internet.

Neil
After months talking with LaMDA on topics ranging from movies to the meaning of life, Blake came to a surprising conclusion: the chatbot was an intelligent person with wishes and rights that should be respected. For Blake, LaMDA was a Google employee, not a machine. He also called it his ‘friend’.

Sam
Google quickly reassigned Blake from the project, announcing that his ideas were not supported by the evidence. But what exactly was going on? 

Neil
In this programme, we’ll be discussing whether artificial intelligence is capable of consciousness. We’ll hear from one expert who thinks AI is not as intelligent as we sometimes think, and as usual, we’ll be learning some new vocabulary as well.

Sam
But before that, I have a question for you, Neil. What happened to Blake Lemoine is strangely similar to the 2013 Hollywood movie, Her, starring Joaquin Phoenix as a lonely writer who talks with his computer, voiced by Scarlett Johansson. But what happens at the end of the movie? Is it:
a)    the computer comes to life?
b)    the computer dreams about the writer?  or,
c)    the writer falls in love with the computer? 

Neil
… c) the writer falls in love with the computer.

Sam
OK, Neil, I’ll reveal the answer at the end of the programme. Although Hollywood is full of movies about robots coming to life, Emily Bender, professor of linguistics and computing at the University of Washington, thinks AI isn’t that smart. She thinks the words we use to talk about technology, phrases like ‘machine learning’, give a false impression about what computers can and can’t do.

Neil
Here is Professor Bender discussing another misleading phrase, ‘speech recognition’, with BBC World Service programme, The Inquiry:

Professor Emily Bender
If you talk about ‘automatic speech recognition’, the term ‘recognition’ suggests that there's something cognitive going on, where I think a better term would be automatic transcription. That just describes the input-output relation, and not any theory or wishful thinking about what the computer is doing to be able to achieve that.

Sam
Using words like ‘recognition’ in relation to computers gives the idea that something cognitive is happening – something related to the mental processes of thinking, knowing, learning and understanding.

Neil
But thinking and knowing are human, not machine, activities. Professor Bender says that talking about them in connection with computers is wishful thinking – something which is unlikely to happen.

Sam
The problem with using words in this way is that it reinforces what Professor Bender calls technical bias – the assumption that the computer is always right. When we encounter language that sounds natural, but is coming from a computer, humans can’t help but imagine a mind behind the language, even when there isn’t one.

Neil
In other words, we anthropomorphise computers – we treat them as if they were human. Here’s Professor Bender again, discussing this idea with Charmaine Cozier, presenter of the BBC World Service’s The Inquiry.

Professor Emily Bender
So ‘ism’ means system, ‘anthro’ or ‘anthropo’ means human, and ‘morph’ means shape… And so this is a system that puts the shape of a human on something, and in this case the something is a computer. We anthropomorphise animals all the time, but we also anthropomorphise action figures, or dolls, or companies when we talk about companies having intentions and so on. We very much are in the habit of seeing ourselves in the world around us. 

Charmaine Cozier
And while we’re busy seeing ourselves by assigning human traits to things that are not, we risk being blindsided.

Emily Bender
The more fluent that text is, the more different topics it can converse on, the more chances there are to get taken in.

Sam
If we treat computers as if they could think, we might get blindsided, or unpleasantly surprised. Artificial intelligence works by finding patterns in massive amounts of data, so it can seem like we’re talking with a human, instead of a machine doing data analysis. As a result, we get taken in – we’re tricked or deceived into thinking we’re dealing with a human, or with something intelligent.

Neil
Powerful AI can make machines appear conscious, but even tech giants like Google are years away from building computers that can dream or fall in love. Speaking of which, Sam, what was the answer to your question?

Sam
I asked what happened in the 2013 movie, Her. Neil thought that the main character falls in love with his computer, which was the correct answer!

Neil
OK. Right, it’s time to recap the vocabulary we’ve learned from this programme about AI, including chatbots - computer programmes designed to interact with humans over the internet.

Sam
The adjective cognitive describes anything connected with the mental processes of knowing, learning and understanding. 

Neil
Wishful thinking 
means thinking that something which is very unlikely to happen might happen one day in the future.

Sam
To anthropomorphise an object means to treat it as if it were human, even though it’s not.

Neil
When you’re blindsided, you’re surprised in a negative way.

Sam
And finally, to get taken in by someone means to be deceived or tricked by them. My computer tells me that our six minutes are up! Join us again soon, for now it’s goodbye from us.

Neil
Bye!
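As an editorial aside: Sam's point that artificial intelligence "works by finding patterns in massive amounts of data" can be shown with a toy sketch. The short Python program below is purely illustrative (the tiny corpus and all names are invented here, and have nothing to do with LaMDA). It learns only which word tends to follow which, yet can still produce text that superficially sounds fluent: exactly the kind of surface fluency that can take us in.

```python
import random

# A toy bigram "language model": it records which word follows which
# in a tiny corpus, then generates text purely from those statistics.
corpus = (
    "the chatbot talks with humans and the chatbot finds patterns "
    "in data and the chatbot sounds like a person"
).split()

# For every word, collect the words observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=8, seed=0):
    """Produce up to n more words by sampling a recorded follower each step."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:  # no recorded follower: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Scaled up from a dozen words to billions, this same statistical idea is what makes chatbot output sound human, with no thinking, knowing or understanding behind it.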

The Chinese translation of the broadcast transcript serves two purposes. First, it helps readers who struggle with listening to grasp the original quickly. More importantly, it provides practice material for readers training their spoken English.

Because everyone's knowledge covers different ground, topics outside one's own expertise often leave a speaker tongue-tied with nothing to say, and this is a very hard obstacle to overcome when practising English expression. Working from the 6 Minute English dialogue gives you apt, relevant material for self-practice while also teaching some idiomatic expressions along the way: several benefits at once.

In practice, after listening to the original broadcast once or twice, try to say the content fluently and accurately in English while reading the Chinese transcript. With repeated practice, you will naturally be able to talk freely when the same topic comes up in conversation.

BBC 6 minute English – Can AI have a mind of its own?

Click here for the Chinese translation

薩姆
你好。這裡是 BBC 學習英語的 6 分鐘英語。我是薩姆。

尼爾
我是尼爾。

薩姆
2022 年秋天,在加州矽谷的谷歌總部發生了一件奇怪的事情。一位名叫布萊克.里蒙的軟體工程師,正在從事人工智慧項目「對話應用的語言模型」,簡稱 LaMDA。LaMDA 是一個聊天機器人—一個旨在通過網際網路與人類進行對話的計算機程序。

尼爾
在與 LaMDA 就從電影到生命的意義等話題交談數月後,布萊克得出了一個令人驚訝的結論:聊天機器人是一個有意願和權利的智能人,應該得到尊重。對布萊克來說,LaMDA 是一名谷歌員工,而不是一台機器。他還稱它為自己的 「朋友」。

薩姆
谷歌很快將布萊克調離了該計畫,並宣佈他的想法沒有證據支持。但究竟發生了什麼事?

尼爾
在這個節目中,我們將討論人工智慧是否有意識的能力。我們將聽取一位專家的意見,他認為人工智慧並不像我們有時認為的那樣聰明,而且像往常一樣,我們也將學習一些新的詞彙。

薩姆
但在此之前,我有一個問題要問你,尼爾。布萊克.里蒙的遭遇與 2013 年的好萊塢電影《雲端情人》奇怪地相似,這部電影由瓦昆.菲尼克斯(Joaquin Phoenix)主演,他是一位與電腦對話的孤獨作家,由史嘉蕾.喬韓森(Scarlett Johansson)配音。但在電影的結尾處發生了什麼?是
a) 電腦活過來了?
b) 電腦夢見了作家?或者是
c) 作家愛上了電腦?

尼爾
...c) 作家愛上了電腦。

薩姆
好的,尼爾,我會在節目的最後揭曉答案的。雖然好萊塢充滿了關於機器人復活的電影,但華盛頓大學語言學和計算機教授艾米麗.本德認為人工智慧並沒有那麼聰明。她認為,我們用來談論技術的詞語,如「機器學習」等用語,給人一種錯誤的印象,即計算機能做什麼和不能做什麼。

尼爾
以下是本德教授在 BBC 世界服務節目《調查》中討論另一個誤導性的用語「語音識別」。

艾米麗.本德教授
如果你談論「自動語音識別」,「識別」一詞表明有一些認知上的事情發生,而我認為更好的說法是自動轉錄。這只是描述了輸入-輸出的關係,而不是任何理論或一廂情願地認為計算機正在做什麼來實現這一目標。

薩姆
在計算機方面使用「識別」這樣的詞,讓人覺得有一些認知的事情正在發生—與思考、認識、學習和理解的心理過程有關的事情。

尼爾
但是思考和認知是人類的活動,而不是機器的活動。本德教授說,在計算機方面談論它們是一廂情願的想法—不太可能發生的事情。

薩姆
以這種方式用詞的問題是,它加強了本德教授所說的,技術偏見—假設計算機總是正確的。當我們遇到聽起來很自然,但卻來自計算機的語言時,人類會不由自主地想像語言背後有一個思想,即使實際上並沒有。

尼爾
換句話說,我們把計算機擬人化了—我們把它們當成了人類。下面是本德教授再次與 BBC 世界服務的《調查》節目主持人夏曼.科茲爾討論這個觀點。

艾米麗.本德教授
因此,「ism」意味著系統,「anthro」或「anthropo」意味著人類,而「morph」意味著形狀......因此這是一個將人類的形狀加在某物上的系統,在這種情況下,這個東西是一台電腦。我們總是把動物擬人化,但我們也把可動人偶、洋娃娃或公司擬人化,例如談論公司有什麼意圖等等。我們非常習慣於在周圍的世界中看到自己。

夏曼.科茲爾
當我們忙著通過給不屬於人類的事物賦予人類的特徵來看待自己時,我們有可能被蒙蔽。

艾米麗.本德
文字越流暢,它可以討論的不同話題越多,被欺騙的機會就越多。

薩姆
如果我們把計算機當作可以思考的東西來對待,我們可能會被蒙蔽,或者是不愉快的驚訝。人工智慧的工作方式是在大量的數據中尋找模式,所以它看起來就像我們在與人交談,而不是與做數據分析的機器交談。結果是,我們被騙了—我們被欺騙了,以為我們是在和人打交道,或者和有智能的東西打交道。

尼爾
強大的人工智慧可以讓機器看起來有意識,但即使是像谷歌這樣的科技巨頭,距離製造出能做夢或戀愛的計算機也還有好幾年的時間。說到這裡,薩姆,你的問題的答案是什麼?

薩姆
我問的是 2013 年的電影《雲端情人》中發生了什麼。尼爾認為主角愛上了他的電腦,這正是正確的答案。

尼爾
好的。對了,現在是時候回顧一下我們從這個節目中學到的關於人工智慧的詞彙了,包括聊天機器人—旨在通過互聯網與人類互動的計算機程序。

薩姆
認知性這個形容詞描述了任何與認識、學習和理解等心理過程有關的東西。

尼爾
一廂情願是指認為不太可能發生的事情可能在未來的某一天發生。

薩姆
擬人化的意思是把一個物體當作人類來對待,儘管它不是。

尼爾
當你傻眼時,表示你以一種負面的方式感到驚訝。

薩姆
最後,被人騙了是指被某人欺騙或耍了。我的電腦告訴我,我們的六分鐘時間到了!請下次再加入我們,現在先說再見了。

尼爾
再見!
