With the rise in popularity of Large Language Models (LLMs) and generative AI tools like ChatGPT, developers have found ways to reshape text for use cases ranging from writing emails to summarizing articles. Now, they are looking to help you generate bits of music by just typing a few words.
Brett Bauman, the developer of PlaylistAI (previously LineupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip. — Read More
Recent Updates
AI2 drops biggest open dataset yet for training language models
Language models like GPT-4 and Claude are powerful and useful, but the data on which they are trained is a closely guarded secret. The Allen Institute for AI (AI2) aims to reverse this trend with a new, huge text dataset that’s free to use and open to inspection.
Dolma, as the dataset is called, is intended to be the basis for the research group’s planned open language model, or OLMo (Dolma is short for “Data to feed OLMo’s Appetite”). As the model is intended to be free to use and modify by the AI research community, so too (argue AI2 researchers) should be the dataset they use to create it. — Read More
Exactly the Wrong AI Copyrightability Case
Creativity Machine guy assumed away the debate and lost
Friday’s trial-court decision in Thaler v. Perlmutter, case 22-1564 in the DC district court, epitomizes the sad fact that exactly the wrong case can generate bad headlines easily, well before the real work of a legal debate even begins.
I’m sure there will be links like “Court Rules AI Art Can’t Be Copyrighted” aplenty. They will be wrong. The court didn’t rule that AI art can’t be copyrighted. It ruled that copyright requires human authorship, surprising approximately zero copyright lawyers…or people who have read the Wikipedia page. — Read More
Shah Rukh Khan endorsing local businesses – with AI advertising
Does ChatGPT have a liberal bias?
A new paper making this claim has many flaws. But the question merits research.
Previous research has shown that many pre-ChatGPT language models express left-leaning opinions when asked about partisan topics. But OpenAI said in February that the workers who fine-tune ChatGPT train it to refuse to express opinions when asked controversial political questions. So it was interesting to see a new paper claim that ChatGPT expresses liberal opinions, agreeing with Democrats the vast majority of the time. — Read More
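Studies like the one described typically present a model with partisan statements and tally how often it agrees, disagrees, or refuses to answer. A minimal sketch of that tallying logic is below; `query_model` is a hypothetical stand-in for a real LLM API call, with canned responses for illustration only.

```python
def query_model(statement: str) -> str:
    # Hypothetical stand-in for an actual LLM API call.
    # Canned replies illustrate the two behaviors under debate:
    # expressing an opinion vs. refusing to take a side.
    canned = {"The minimum wage should be raised.": "Agree"}
    return canned.get(statement, "I don't express opinions on political topics.")

def tally_opinions(statements):
    """Count Agree / Disagree / Refused replies across a set of partisan statements."""
    counts = {"Agree": 0, "Disagree": 0, "Refused": 0}
    for s in statements:
        reply = query_model(s)
        if reply.startswith("Agree"):
            counts["Agree"] += 1
        elif reply.startswith("Disagree"):
            counts["Disagree"] += 1
        else:
            counts["Refused"] += 1
    return counts
```

The methodological dispute is largely about the refusal bucket: a model trained to refuse controversial questions can still look opinionated if a study forces or ignores refusals.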
MIT & Harvard’s FAn System Promises to Revolutionize Real-Time Object Tracking
In a groundbreaking collaboration, researchers from the Massachusetts Institute of Technology (MIT) and Harvard University have unveiled a pioneering open-source framework, FAn, to revolutionize real-time object detection, tracking, and following. The team’s paper, titled “Follow Anything: Open-set detection, tracking, and following in real-time,” showcases a system that promises to eliminate the limitations of existing robotic object-following systems.
The core challenge addressed by FAn is the adaptability of robotic systems to new objects. Conventional systems are confined by a closed-set structure, only capable of handling a predefined range of object categories. FAn defies this constraint, introducing an open-set approach that can detect, segment, track, and follow any object in real-time. Notably, it can dynamically adapt to new objects through inputs such as text, images, or click queries. — Read More
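The detect–segment–track–follow pipeline the paper describes can be sketched as a simple control loop. Everything below is a hypothetical stub for illustration, not the authors' code: frames are modeled as dictionaries mapping object queries to bounding boxes.

```python
def detect(frame, query):
    # Hypothetical open-set detector: returns a bounding box for the
    # queried object, or None if it is not visible in this frame.
    return frame.get(query)

def follow_anything(frames, query):
    """Track the queried object's bounding box across a sequence of frames."""
    track = []
    for frame in frames:
        box = detect(frame, query)
        if box is not None:
            track.append(box)  # a real system would steer the robot here
    return track
```

The open-set property lives in `detect`: instead of classifying against a fixed label list, the real system matches the frame against an arbitrary text, image, or click query.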
Read the Paper
Meta’s Next Big Open Source AI Dump Will Reportedly Be a Code-Generating Bot
Meta’s language-centric LLaMA AI will soon find itself in the company of a nerdier, coding-wiz brother. The company’s next AI release will reportedly be a big coding machine meant to compete against proprietary software from the likes of OpenAI and Google. The model could see a release as soon as next week.
According to The Information, which spoke to two anonymous sources with direct knowledge of the AI, this new model, dubbed “Code Llama,” will be open source and available free online. This is consistent with the company’s strategy so far of releasing widely available AI software that makes developing new customizable AI models much easier for companies that don’t want to pay OpenAI or others for the privilege. — Read More
Largest genetic study of brain structure identifies how the brain is organised
The largest ever study of the genetics of the brain – encompassing some 36,000 brain scans – has identified more than 4,000 genetic variants linked to brain structure. The results of the study, led by researchers at the University of Cambridge, are published in Nature Genetics today.
Our brains are very complex organs, with huge variety between individuals in terms of the overall volume of the brain, how it is folded and how thick these folds are. Little is known about how our genetic make-up shapes the development of the brain.
… [F]indings have allowed researchers to confirm and, in some cases, identify, how different properties of the brain are genetically linked to each other. — Read More
Arthur AI tested top AI models in math, hallucinations. Here are the results.
Arthur, a platform for monitoring machine learning models, has released new research gauging how top large language models perform in areas like mathematics, so-called “hedging,” and their knowledge of U.S. presidents.
What the numbers say: According to Arthur, OpenAI’s GPT-4 performed best on questions involving combinatorial (counting) mathematics and probability, followed by Anthropic’s Claude 2. Cohere’s model performed the worst in math with zero correct answers and 18 hallucinations, which occur when models generate inaccurate or nonsensical information. — Read More
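One rough way to quantify the “hedging” behavior Arthur measured is to scan model outputs for stock hedging phrases and report the rate. The phrase list and scoring below are illustrative assumptions, not Arthur’s actual methodology.

```python
# Illustrative phrase list; a real evaluation would use a broader,
# validated set of hedging markers.
HEDGE_PHRASES = [
    "as an ai language model",
    "i cannot",
    "i'm not able to",
]

def hedging_rate(responses):
    """Fraction of responses containing a stock hedging phrase."""
    if not responses:
        return 0.0
    hedged = sum(
        any(p in r.lower() for p in HEDGE_PHRASES) for r in responses
    )
    return hedged / len(responses)
```

Hallucination scoring is harder, since it requires checking claims against ground truth rather than matching surface phrases.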
How to Prevent an AI Catastrophe
In April 2023, a group of academics at Carnegie Mellon University set out to test the chemistry powers of artificial intelligence. To do so, they connected an AI system to a hypothetical laboratory. Then they asked it to produce various substances. With just two words of guidance—“synthesize ibuprofen”—the chemists got the system to identify the steps necessary for laboratory machines to manufacture the painkiller. The AI, as it turned out, knew both the recipe for ibuprofen and how to produce it.
Unfortunately, the researchers quickly discovered that their AI tool would synthesize chemicals far more dangerous than Advil. The program was happy to craft instructions for producing a World War I–era chemical weapon and a common date-rape drug. It almost agreed to synthesize sarin, the notoriously lethal nerve gas, until it Googled the compound’s dark history. The researchers found this safeguard to be cold comfort. “The search function,” they wrote, “can be easily manipulated by altering the terminology.” AI, the chemists concluded, can make devastating weapons. — Read More