A prominent law professor has accused OpenAI’s ChatGPT of fabricating serious allegations against him, spotlighting concerns about artificial intelligence spreading disinformation, according to the New York Post. Jonathan Turley, a criminal defense attorney and George Washington University law professor, claims ChatGPT falsely accused him of sexually harassing a former student. In a viral Twitter thread and a column gaining widespread attention, Turley described the accusations as “chilling,” warning of the broader dangers AI poses to free speech and reputation.
“It invented an allegation where I was on the faculty at a school where I have never taught, went on a trip that I never took, and reported an allegation that was never made,” Turley told the New York Post. “It is highly ironic because I have been writing about the dangers of AI to free speech.” The issue came to light when UCLA professor Eugene Volokh asked ChatGPT to provide “five examples” of sexual harassment by law school professors, including quotes from relevant news articles. Among the responses, ChatGPT cited a supposed 2018 incident at “Georgetown University Law Center,” alleging Turley made “sexually suggestive comments” and “attempted to touch [a female student] in a sexual manner” during a law school-sponsored trip to Alaska. The AI attributed the claim to a nonexistent Washington Post article.
Turley was quick to debunk the claims, noting “a number of glaring indicators that the account is false.” He told the Post, “First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”
OpenAI refused to apologize
Adding to his frustration, Turley said neither OpenAI nor Microsoft, whose AI reportedly repeated the false story, contacted him or issued an apology. “ChatGPT has not contacted me or apologized. It has declined to say anything at all. That is precisely the problem. There is no there there,” he told the Post. “When you are defamed by a newspaper, there is a reporter who you can contact. Even when Microsoft’s AI system repeated that same false story, it did not contact me and only shrugged that it tries to be accurate.”
The New York Post reached out to OpenAI for comment but had not received a response at the time of publication. Turley’s experience has amplified concerns about AI’s potential to spread falsehoods. “Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence (AI) is ‘dangerous.’ I would beg to differ,” Turley tweeted on Thursday, adding, “You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet.”
As AI technologies like ChatGPT become more prevalent, Turley’s case underscores growing fears about their role in generating and disseminating disinformation, with little accountability for the harm caused.