We need a new Turing test to assess AI’s real-world knowledge


A fresh set of benchmarks could help specialists better understand artificial intelligence.

Artificial intelligence (AI) models can perform as well as humans on law exams when answering multiple-choice, short-answer, and essay questions (A. Preprint at SSRN, 2025), but they struggle to perform real-world legal tasks.

Some lawyers have learned that the hard way: they have been fined for filing AI-generated court briefs that misrepresented principles of law and cited non-existent cases.

Chaudhri is a principal scientist at Knowledge Systems Research in Sunnyvale, California.





Nature — 2025-10-30