The University of Texas at Austin is creating one of the most powerful artificial intelligence hubs in the academic world to lead in research and offer world-class AI infrastructure to a wide range of partners.
UT is launching the Center for Generative AI, powered by a new GPU computing cluster that is among the largest in academia. The cluster will comprise 600 NVIDIA H100 GPUs (graphics processing units), specialized processors that perform rapid mathematical computations, making them ideal for training AI models. The Texas Advanced Computing Center (TACC) will host and support the cluster, called Vista. – Read More
Self-Rewarding Language Models
We posit that to achieve superhuman agents, future models require superhuman feedback in order to provide an adequate training signal. Current approaches commonly train reward models from human preferences, which may be bottlenecked by human performance level; moreover, these separate frozen reward models cannot learn to improve during LLM training. In this work, we study Self-Rewarding Language Models, where the language model itself is used via LLM-as-a-Judge prompting to provide its own rewards during training. We show that during Iterative DPO training, not only does instruction-following ability improve, but so does the model's ability to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613. While only a preliminary study, this work opens the door to the possibility of models that can continually improve in both axes. – Read More
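The core loop described above, in which a model samples several candidate responses, scores them itself via LLM-as-a-Judge prompting, and keeps the best and worst as a preference pair for DPO, can be sketched in miniature. Everything here is a toy stand-in: `generate` and `judge_score` are hypothetical stubs replacing real LLM sampling and real judge prompting, not the paper's implementation.

```python
import random

def generate(prompt, k=4, seed=0):
    """Stub: stand-in for sampling k candidate responses from the LLM."""
    rng = random.Random(seed)
    return [f"{prompt} -> response {i} (noise {rng.random():.2f})" for i in range(k)]

def judge_score(prompt, response):
    """Stub: stand-in for LLM-as-a-Judge self-scoring (toy hash heuristic)."""
    return sum(ord(c) for c in response) % 10

def build_preference_pair(prompt):
    """Rank candidates by self-assigned reward; keep best vs. worst for DPO."""
    candidates = generate(prompt)
    ranked = sorted(candidates, key=lambda r: judge_score(prompt, r), reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

pair = build_preference_pair("Explain DPO")
```

In the paper, pairs like this are fed into a DPO update, the updated model becomes both generator and judge for the next round, and the cycle repeats; the stubs above only illustrate the data-construction step.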
PHOENIX: Open-Source Language Adaption for Direct Preference Optimization
Large language models have gained immense importance in recent years and have demonstrated outstanding results in solving various tasks. However, despite these achievements, many questions remain unanswered in the context of large language models. Besides the optimal use of the models for inference and the alignment of the results to the desired specifications, the transfer of models to other languages is still an underdeveloped area of research. The recent publication of models such as Llama-2 and Zephyr has provided new insights into architectural improvements and the use of human feedback. However, insights into adapting these techniques to other languages remain scarce. In this paper, we build on the latest improvements and apply the Direct Preference Optimization (DPO) approach to the German language. The model is available at this https URL. – Read More
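Since DPO appears in both items above, it may help to see its per-pair objective numerically. DPO trains directly on preference pairs without a separate reward model, minimizing −log σ(β·[(log π(y_w|x) − log π_ref(y_w|x)) − (log π(y_l|x) − log π_ref(y_l|x))]), where y_w/y_l are the chosen/rejected responses and π_ref is a frozen reference model. The toy log-probabilities below are illustrative numbers, not outputs of any real model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(beta * margin), where the
    margin compares the policy's log-prob gap against the reference model's."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Toy numbers: the policy already prefers the chosen response more strongly
# than the reference does, so the margin is positive and the loss is small.
loss = dpo_loss(-5.0, -9.0, ref_logp_chosen=-6.0, ref_logp_rejected=-7.0, beta=0.1)
```

When the policy and reference agree exactly, the margin is zero and the loss is log 2; widening the gap in favor of the chosen response drives the loss toward zero, which is the gradient signal the training uses.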
A new AI model called Morpheus-1 claims to induce lucid dreaming
Artificial intelligence has entered every aspect of our technological lives in the past few years, from chatbots to catflaps — but one company wants AI to enter your dreams.
Neurotechnology startup Prophetic has a new AI model called Morpheus-1 that it claims can help people both enter a lucid dream state and stabilize that dream.
Lucid dreaming is a state of dreaming where the dreamer is aware that they are dreaming and often has some control over the dream characters, narrative, and environment. It was the main plot device in Christopher Nolan’s confusing 2010 modern classic Inception. – Read More
OpenAI launches new generation of embedding models and other API updates
OpenAI, the artificial intelligence research company, announced on Thursday a new generation of embedding models, which can convert text into a numerical form that can be used for various machine learning tasks. The company also introduced new versions of its GPT-4 Turbo and moderation models, new API usage management tools, and lower pricing on its GPT-3.5 Turbo model. – Read More
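The typical use of such embeddings is semantic search: each text becomes a vector, and similarity between texts is measured by cosine similarity between their vectors. The sketch below uses hand-written 3-dimensional toy vectors in place of real model output (real embeddings have hundreds or thousands of dimensions); the variable names are illustrative, not part of any API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors standing in for real embeddings of a query and two documents.
query = [0.1, 0.9, 0.2]
doc_a = [0.1, 0.8, 0.3]  # semantically close to the query
doc_b = [0.9, 0.1, 0.0]  # semantically distant

best = max([doc_a, doc_b], key=lambda d: cosine_similarity(query, d))
```

In a real pipeline, the vectors would come from an embeddings API call and the `max` over two documents would be a nearest-neighbor search over an index.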
Google’s latest AI video generator can render cute animals in implausible situations
On Tuesday, Google announced Lumiere, an AI video generator that it calls “a space-time diffusion model for realistic video generation” in the accompanying preprint paper. But let’s not kid ourselves: It does a great job of creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated. – Read More
AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic
Imagine downloading an open weights AI language model, and all seems good at first, but it later turns malicious. On Friday, Anthropic—the maker of ChatGPT competitor Claude—released a research paper about AI “sleeper agent” large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. “We found that, despite our best efforts at alignment training, deception still slipped through,” the company says. – Read More
National Artificial Intelligence Research Resource Pilot
The National Artificial Intelligence Research Resource (NAIRR) is a vision for a shared national research infrastructure for responsible discovery and innovation in AI.
The NAIRR pilot brings together computational, data, software, model, training and user support resources to demonstrate and investigate all major elements of the NAIRR vision first laid out by the NAIRR Task Force.
Led by the U.S. National Science Foundation (NSF) in partnership with 10 other federal agencies and 25 non-governmental partners, the pilot makes available government-funded, industry and other contributed resources in support of the nation’s research and education community. – Read More
The best AI image generators to create AI art
It’s hard to believe that it’s only been a year since the beta version of DALL-E, OpenAI’s text-to-image image generator, was set loose onto the internet. Since then, there’s been an explosion of AI-generated visual content, with people creating an average of 34 million images per day. That’s upwards of 15 billion images created using text-to-image algorithms last year alone. According to Everypixel Journal, it took photographers 150 years, from the first photograph taken in 1826 until 1975, to reach the 15 billion mark.
With new AI text-to-image generators launching at such a rapid pace, it’s tough to keep track of what’s out there, and which produces the best results. We’re here to break down the best AI image-making tools for generating high-quality images from simple descriptions or keywords, or for creating accurate image prompts based on uploaded reference images. – Read More
Cops Used DNA to Predict a Suspect’s Face—and Tried to Run Facial Recognition on It
In 2017, detectives working a cold case at the East Bay Regional Park District Police Department got an idea, one that might help them finally get a lead on the murder of Maria Jane Weidhofer. Officers had found Weidhofer, dead and sexually assaulted, at Berkeley, California’s Tilden Regional Park in 1990. Nearly 30 years later, the department sent genetic information collected at the crime scene to Parabon NanoLabs—a company that says it can turn DNA into a face.
Parabon NanoLabs ran the suspect’s DNA through its proprietary machine learning model. Soon, it provided the police department with something the detectives had never seen before: the face of a potential suspect, generated using only crime scene evidence. – Read More