Everlaw Launches AI-based Clustering to Open a New World of Ediscovery Insights to Legal Teams

Everlaw, the cloud-native investigation and litigation platform, today unveiled its Clustering feature, an AI advance in scale, visualization, ease of use, and the ability to conduct true discovery.

While Technology Assisted Review (TAR) has been sanctioned for legal teams conducting discovery searches for digital evidence for about a decade, concept clustering has fallen short of its promise. It is often too hard to use, can't scale to meet today's video, audio, and text demands, and is restricted to a wheel interface that can't drill down to individual documents.

Everlaw Clustering’s new technical breakthroughs deliver on the promise of AI, allowing legal teams to sort through and understand millions of documents for full review or early case assessment (ECA). Everlaw Clustering presents findings in an intuitive visual format that encompasses both a 30,000-foot snapshot and a granular, down-to-the-document view. It uses unsupervised machine learning to group documents by conceptual similarity and generates insights without requiring any user input. Read More
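The idea of grouping documents by similarity without user input can be illustrated with a minimal, hypothetical sketch. This is not Everlaw's implementation (which is not public); it uses simple bag-of-words vectors, cosine similarity, and a greedy single-pass grouping rule, with the document texts and the `0.3` threshold chosen purely for illustration:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, threshold=0.3):
    """Greedy single-pass clustering: each document joins the first
    cluster whose seed document it resembles, or starts a new one."""
    clusters = []  # list of (seed_vector, [doc indices])
    for i, doc in enumerate(docs):
        v = vectorize(doc)
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [members for _, members in clusters]

docs = [
    "contract renewal terms and payment schedule",
    "payment schedule for the renewal contract",
    "quarterly earnings call transcript",
]
print(cluster(docs))  # [[0, 1], [2]] — the two contract emails group together
```

Production systems would use richer representations (e.g. learned embeddings) and a proper clustering algorithm, but the unsupervised principle is the same: no labels or user queries are needed to surface the groups.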

#artificial-intelligence

Meta open-sources advanced AI text translation system with 50B+ parameters

Meta Platforms Inc. today released the code for NLLB-200, an internally developed artificial intelligence system capable of translating text across 200 languages.

The company is also releasing a set of tools designed to help researchers more easily apply NLLB-200 in software projects. 

Many of the 200 languages that NLLB-200 understands are not supported well by other AI translation systems, according to Meta. The company says fewer than 25 African languages are currently supported by widely used translation tools. NLLB-200 supports 55 African languages. Read More

#nlp, #big7

Yann LeCun has a bold new vision for the future of AI

LeCun, who is chief scientist at Meta’s AI lab and one of the most influential AI researchers in the world, had been trying to give machines a basic grasp of how the world works—a kind of common sense—by training neural networks to predict what was going to happen next in video clips of everyday events. But guessing future frames of a video pixel by pixel was just too complex. He hit a wall.

Now, after months of figuring out what was missing, he has a bold new vision for the next generation of AI. In a draft document shared with MIT Technology Review, LeCun sketches out an approach that he thinks will one day give machines the common sense they need to navigate the world. (Update: LeCun has since posted the document online.) Read More

#artificial-intelligence

Minerva: Solving Quantitative Reasoning Problems with Language Models

Language models have demonstrated remarkable performance on a variety of natural language tasks — indeed, a general lesson from many works, including BERT, GPT-3, Gopher, and PaLM, has been that neural networks trained on diverse data at large scale in an unsupervised way can perform well on a variety of tasks.

Quantitative reasoning is one area in which language models still fall far short of human-level performance. Solving mathematical and scientific questions requires a combination of skills, including correctly parsing a question with natural language and mathematical notation, recalling relevant formulas and constants, and generating step-by-step solutions involving numerical calculations and symbolic manipulation. Due to these challenges, it is often believed that solving quantitative reasoning problems with machine learning will require significant advances in model architecture and training techniques, access to external tools such as Python interpreters, or perhaps a more profound paradigm shift. Read More
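The "external tools" option mentioned above can be sketched concretely. Rather than asking a language model to carry out arithmetic token by token — where it often errs — the final numeric step of a generated solution can be delegated to a real interpreter. Below is a minimal, hypothetical sketch (not Minerva's method, which generates full solutions without tool calls) of a safe arithmetic evaluator; the expression stands in for a model's final step:

```python
import ast
import operator

# Supported arithmetic operators, mapped from AST node types.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr):
    """Evaluate a pure-arithmetic expression via its AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Final arithmetic step of a hypothetical model-generated solution:
print(safe_eval("(3**4 - 17) / 8"))  # 8.0
```

Walking the AST rather than calling `eval()` restricts the tool to pure arithmetic, so a generated expression cannot execute arbitrary code.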

#nlp