
Beware ‘death by GPT syndrome’

Generative AI has uses for the legal and health professions but is also a trap for the unwary

Next week, a veteran New York lawyer of 30 years’ standing will face a disciplinary hearing over a novel kind of misdemeanour: including bogus AI-generated content in a legal brief.

Steven Schwartz, from the firm Levidow, Levidow & Oberman, had submitted a 10-page document to a New York court as part of a personal injury claim against Avianca airlines. The trouble was, as the judge discovered on closer reading, the submission contained entirely fictional judicial decisions and citations that the generative AI model ChatGPT had “hallucinated”.

In an affidavit, the mortified Schwartz admitted he had used OpenAI’s chatbot to help research the case. The generative AI model had even reassured him the legal precedents it cited were real. But he acknowledged that ChatGPT had proved to be an unreliable source. Greatly regretting his over-reliance on the computer-generated content, he added that he would never use it again “without absolute verification of its authenticity”. One only hopes we can all profit from his “learning experience” — as teachers nowadays call mistakes.
