Cambridge IELTS 16 Reading Passage T4P3: Attitudes towards Artificial Intelligence


The third reading passage of Cambridge IELTS 16 Test 4 examines the difficulties artificial intelligence currently faces in earning human trust, and how those difficulties might be overcome so that AI's strengths can become a genuine help to people.

The passage is divided into six main sections, A-F (to fit the questions, some sections contain two or three paragraphs). It first describes how AI has yet to win people's full trust, citing IBM's well-known case as evidence, then draws on researchers' analyses to identify where the problems lie and how to address them, and closes by looking ahead to more promising development.

The English text of the passage is set out below, section by section. When practising the questions, go back and read carefully any part whose meaning is unclear, to build up your reading comprehension.

Attitudes towards Artificial Intelligence

  1. AI has yet to win human trust

    Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

    Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

    If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

  2. The Watson for Oncology dilemma

    Take the case of Watson for Oncology, one of technology giant IBM’s supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.

    On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

  3. Unfamiliarity breeds distrust

    This is just one example of people’s lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to comprehend. And interacting with something we don’t understand can cause anxiety and give us a sense that we’re losing control.

    Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.

  4. Divided views

    Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.

    This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as “confirmation bias”. As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

  5. Ways to improve trust

    Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.

    Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.

  6. Looking ahead

    Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.

    We don’t need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.
