
Generative AI is sowing the seeds of doubt in serious science

Researchers have already developed a tool that could help tell the difference between synthetic and human-generated text

The writer is a science commentator

Large language models like ChatGPT are purveyors of plausibility. The chatbots, many based on so-called generative AI, are trained on vast quantities of text scraped from the internet and assemble coherent answers to user questions, churning out convincing student essays, authoritative legal documents and believable news stories.

But, because publicly available data contains misinformation and disinformation, some machine-generated texts might not be accurate or true. That has triggered a scramble to develop tools to identify whether text has been drafted by human or machine. Science is also struggling to adjust to this new era, with live discussions over whether chatbots should be allowed to write scientific papers or even generate new hypotheses.

The importance of distinguishing artificial from human intelligence is growing by the day. This month, UBS analysts revealed ChatGPT was the fastest-growing web app in history, garnering 100mn monthly active users in January. Some sectors have decided there is no point bolting the stable door: on Monday, the International Baccalaureate said pupils would be allowed to use ChatGPT to write essays, provided they referenced it.  

In fairness, the tech’s creator is upfront about its limitations. Sam Altman, OpenAI’s chief executive, warned in December that ChatGPT was “good enough at some things to create a misleading impression of greatness . . . we have lots of work to do on robustness and truthfulness.” The company is developing a cryptographic watermark for its output, a secret machine-readable sequence of punctuation, spellings and word order; and is honing a “classifier” to tell the difference between synthetic and human-generated text, using examples of both to train it.
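OpenAI has not disclosed how its watermark works, but the general shape of statistical watermarking can be sketched. The toy Python below follows the published "green list" idea of Kirchenbauer et al rather than OpenAI's own secret design: a key deterministically marks roughly half of all words as "green" in any given context, a generator that favours green words leaves a statistical fingerprint, and anyone holding the key can detect it by counting. The key, vocabulary and expected scores here are illustrative assumptions, not any real system's parameters.

```python
# Toy statistical watermark in the spirit of Kirchenbauer et al (2023),
# NOT OpenAI's actual (undisclosed) scheme. A secret key marks ~half the
# vocabulary "green" in each context; biased generation leaves a trace.
import hashlib
import random

SECRET_KEY = b"demo-key"  # hypothetical shared secret
VOCAB = ["alpha", "beta", "gamma", "delta", "omega", "sigma", "kappa", "theta"]

def is_green(prev: str, word: str) -> bool:
    """Keyed hash marks roughly half of all words 'green' after `prev`."""
    h = hashlib.sha256(SECRET_KEY + prev.encode() + word.encode()).digest()
    return h[0] % 2 == 0

def generate(n: int) -> str:
    """Toy 'model' that always prefers green words, embedding the watermark."""
    out = ["alpha"]
    for _ in range(n):
        greens = [w for w in VOCAB if is_green(out[-1], w)]
        out.append(random.choice(greens or VOCAB))
    return " ".join(out)

def green_fraction(text: str) -> float:
    """Detector: fraction of green word pairs. ~0.5 for unwatermarked
    text, close to 1.0 for output of the biased generator above."""
    words = text.split()
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

print("watermarked:", green_fraction(generate(200)))                           # ~1.0
print("random:     ", green_fraction(" ".join(random.choices(VOCAB, k=200))))  # ~0.5
```

Because the detector only counts, the trace survives moderate editing; without the key, the bias is statistically invisible.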

Eric Mitchell, a graduate student at Stanford University, figured a classifier would need a lot of training data. Along with colleagues, he came up with DetectGPT, a “zero-shot” approach to spotting the difference, meaning the method requires no prior learning. Instead, it turns a chatbot on itself to sniff out its own output.

It works like this: DetectGPT asks a chatbot how much it “likes” a sample text, with the “liking” a shorthand for how similar the sample is to its own creations. DetectGPT then goes one step further — it “perturbs” the text, slightly altering the wording. The assumption is that a chatbot is more variable in its “likes” of altered human-generated text than altered machine text. In early tests, the researchers claim, the method correctly distinguished between human and machine authorship 95 per cent of the time.
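The intuition is that a model's own output tends to sit near a local peak of its log-probability, so perturbing machine text almost always lowers the score, while perturbing human text moves it either way. Below is a minimal sketch of that statistic, assuming GPT-2 as the scoring model and crude random word-dropping as the perturbation; the real method rewrites masked spans with a T5 model, and the drop rate and perturbation count here are illustrative choices.

```python
# Simplified sketch of DetectGPT's perturbation-discrepancy statistic.
# Assumptions: GPT-2 as the scoring model; word-dropping as a stand-in
# for the paper's T5 mask-and-refill perturbations.
import random

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Mean log-probability the model assigns to the text's tokens."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()  # loss is mean negative log-likelihood

def perturb(text: str, drop_rate: float = 0.15) -> str:
    """Crude perturbation: randomly drop ~15% of words."""
    words = text.split()
    kept = [w for w in words if random.random() > drop_rate]
    return " ".join(kept) if kept else text

def perturbation_discrepancy(text: str, n_perturbations: int = 20) -> float:
    """log p(x) minus the mean log p over perturbed copies. Machine text
    sits near a local maximum of log p, so its discrepancy skews large
    and positive; human text scores nearer zero."""
    base = avg_log_likelihood(text)
    perturbed = [avg_log_likelihood(perturb(text)) for _ in range(n_perturbations)]
    return base - sum(perturbed) / len(perturbed)

sample = "Large language models like ChatGPT are purveyors of plausibility."
print(f"discrepancy: {perturbation_discrepancy(sample):.3f}")
```

A threshold on this discrepancy then classifies the sample; the 95 per cent figure quoted above comes from the fuller method, not this simplification.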

There are caveats: the results are not yet peer-reviewed, and the method, while better than random guessing, did not work equally reliably across all generative AI models. DetectGPT could also be fooled by making human tweaks to the synthetic text.

What does all this mean for science? Scientific publishing is the lifeblood of research, injecting ideas, hypotheses, arguments and evidence into the global scientific canon. Some have been quick to alight on ChatGPT as a research assistant, with a handful of papers controversially listing the AI as a co-author.

Meta even launched a science-specific text generator called Galactica. It was withdrawn three days later. Among the howlers it produced was a fictitious history of bears travelling in space.

Professor Michael Black of the Max Planck Institute for Intelligent Systems in Tübingen tweeted at the time that he was “troubled” by Galactica’s answers to multiple inquiries about his own research field, including attributing bogus papers to real researchers. “In all cases, [Galactica] was wrong or biased but sounded right and authoritative. I think it’s dangerous.” 

The peril comes from plausible text slipping into real scientific submissions, peppering the literature with fake citations and forever distorting the canon. The journal Science now bans generated text outright; Nature permits its use if declared but forbids crediting it as co-author.  

Then again, most people don’t consult high-end journals to guide their scientific thinking. Should the devious be so inclined, these chatbots can spew an on-demand stream of citation-heavy pseudoscience on why vaccination doesn’t work, or why global warming is a hoax. That misleading material, posted online, can then be swallowed by future generative AI to produce a new iteration of falsehoods that further pollutes public discourse.

The merchants of doubt must be rubbing their hands.
