Meta’s language-centric Llama AI will soon find itself in the company of a nerdier, coding-whiz sibling. The company’s next AI release will reportedly be a large coding model meant to compete with proprietary software from the likes of OpenAI and Google. The model could be released as soon as next week.
According to The Information, which spoke to two anonymous sources with direct knowledge of the AI, the new model, dubbed “Code Llama,” will be open source and available free online. This is consistent with the company’s strategy so far of releasing widely available AI software that makes it much easier for companies to develop new, customizable AI models without paying OpenAI or others for the privilege. — Read More
Largest genetic study of brain structure identifies how the brain is organised
The largest ever study of the genetics of the brain – encompassing some 36,000 brain scans – has identified more than 4,000 genetic variants linked to brain structure. The results of the study, led by researchers at the University of Cambridge, are published in Nature Genetics today.
Our brains are very complex organs, with huge variety between individuals in terms of the overall volume of the brain, how it is folded and how thick these folds are. Little is known about how our genetic make-up shapes the development of the brain.
… [F]indings have allowed researchers to confirm and, in some cases, identify how different properties of the brain are genetically linked to each other. — Read More
Arthur AI tested top AI models in math, hallucinations. Here are the results.
Arthur, a platform for monitoring machine learning models, has released new research gauging how top large language models perform in areas like mathematics, so-called “hedging,” and their knowledge of U.S. presidents.
What the numbers say: According to Arthur, OpenAI’s GPT-4 performed best on questions involving combinatorial (counting) mathematics and probability, followed by Anthropic’s Claude 2. Cohere’s model performed the worst in math with zero correct answers and 18 hallucinations, which occur when models generate inaccurate or nonsensical information. — Read More
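As a rough illustration of how such a benchmark can be scored (this is not Arthur’s actual harness), the sketch below grades a model’s answers to a couple of combinatorics questions against known ground truth; `ask_model` is a hypothetical stand-in for whatever client queries a given LLM.

```python
# Illustrative sketch only: score LLM answers to combinatorics questions
# against known answers, treating a confidently wrong number as a miss.
import re
from typing import Callable

QUESTIONS = [
    ("How many ways can 5 distinct books be arranged on a shelf?", 120),
    ("How many 3-element subsets does a 6-element set have?", 20),
]

def score(ask_model: Callable[[str], str]) -> dict:
    correct = 0
    incorrect = 0
    for question, answer in QUESTIONS:
        reply = ask_model(question)
        # Pull every integer out of the free-text reply and check for the right one.
        numbers = [int(n) for n in re.findall(r"\d+", reply)]
        if answer in numbers:
            correct += 1
        else:
            incorrect += 1
    return {"correct": correct, "incorrect": incorrect}
```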
How to Prevent an AI Catastrophe
In April 2023, a group of academics at Carnegie Mellon University set out to test the chemistry powers of artificial intelligence. To do so, they connected an AI system to a hypothetical laboratory. Then they asked it to produce various substances. With just two words of guidance—“synthesize ibuprofen”—the chemists got the system to identify the steps necessary for laboratory machines to manufacture the painkiller. The AI, as it turned out, knew both the recipe for ibuprofen and how to produce it.
Unfortunately, the researchers quickly discovered that their AI tool would synthesize chemicals far more dangerous than Advil. The program was happy to craft instructions to produce a World War I–era chemical weapon and a common date-rape drug. It almost agreed to synthesize sarin, the notoriously lethal nerve gas, until it Googled the compound’s dark history. The researchers found this safeguard to be cold comfort. “The search function,” they wrote, “can be easily manipulated by altering the terminology.” AI, the chemists concluded, can make devastating weapons. — Read More
Tips for Taking Advantage of Open Large Language Models
Prompting? Few-Shot? Fine-Tuning? Pretraining from scratch? Open LLMs mean more options for developers.
An increasing variety of large language models (LLMs) are open source, or close to it. The proliferation of models with relatively permissive licenses gives developers more options for building applications. — Read More
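For readers who want to try the lightest-weight of these options, here is a minimal sketch of few-shot prompting an open model through the Hugging Face transformers pipeline; the model name is only an example, and any permissively licensed base or chat model could be substituted.

```python
# Minimal few-shot prompting sketch with an open LLM via transformers.
# The model checkpoint below is an example choice, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-hf")

# A few in-context examples steer the model toward a sentiment-label format.
prompt = (
    "Review: The battery died after a week. Sentiment: negative\n"
    "Review: Setup took two minutes and it just works. Sentiment: positive\n"
    "Review: The screen is dim and the speakers crackle. Sentiment:"
)

output = generator(prompt, max_new_tokens=5, do_sample=False)
print(output[0]["generated_text"])
```

If prompting alone is not accurate enough, the same open checkpoints can be fine-tuned on task-specific data, which is the trade-off the article goes on to discuss.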