Imagine downloading an open weights AI language model, and all seems good at first, but it later turns malicious. On Friday, Anthropic—the maker of ChatGPT competitor Claude—released a research paper about AI “sleeper agent” large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. “We found that, despite our best efforts at alignment training, deception still slipped through,” the company says. – Read More
Monthly Archives: January 2024
National Artificial Intelligence Research Resource Pilot
The National Artificial Intelligence Research Resource (NAIRR) is a vision for a shared national research infrastructure for responsible discovery and innovation in AI.
The NAIRR pilot brings together computational, data, software, model, training and user support resources to demonstrate and investigate all major elements of the NAIRR vision first laid out by the NAIRR Task Force.
Led by the U.S. National Science Foundation (NSF) in partnership with 10 other federal agencies and 25 non-governmental partners, the pilot makes available government-funded, industry and other contributed resources in support of the nation’s research and education community. – Read More
The best AI image generators to create AI art
It’s hard to believe that it’s only been a year since the beta version of DALL-E, OpenAI’s text-to-image image generator, was set loose onto the internet. Since then, there’s been an explosion of AI-generated visual content, with people creating an average of 34 million images per day. That’s upwards of 15 billion images created using text-to-image algorithms last year alone. According to Everypixel Journal, it took photographers 150 years, from the first photograph taken in 1826 until 1975, to reach the 15 billion mark.
With new AI text-to-image generators launching at such a rapid pace, it’s tough to keep track of what’s out there, and which produces the best results. We’re here to break down the best AI image-making tools for generating high-quality images from simple descriptions or keywords, or for creating accurate image prompts based on uploaded reference images. – Read More
Cops Used DNA to Predict a Suspect’s Face—and Tried to Run Facial Recognition on It
In 2017, detectives working a cold case at the East Bay Regional Park District Police Department got an idea, one that might help them finally get a lead on the murder of Maria Jane Weidhofer. Officers had found Weidhofer, dead and sexually assaulted, at Berkeley, California’s Tilden Regional Park in 1990. Nearly 30 years later, the department sent genetic information collected at the crime scene to Parabon NanoLabs—a company that says it can turn DNA into a face.
Parabon NanoLabs ran the suspect’s DNA through its proprietary machine learning model. Soon, it provided the police department with something the detectives had never seen before: the face of a potential suspect, generated using only crime scene evidence. – Read More
Beyond AI Exposure:Which Tasks are Cost-Effective to Automate withComputer Vision?
The faster AI automation spreads through the economy, the more profound its potential impacts, both positive (improved productivity) and negative (worker displacement). The previous literature on “AI Exposure” cannot predict this pace of automation since it attempts to measure an overall potential for AI to affect an area, not the technical feasibility and economic attractiveness of building such systems. In this article, we present a new type of AI task automation model that is end-to-end, estimating: the level of technical performance needed to do a task, the characteristics of an AI system capable of that performance, and the economic choice of whether to build and deploy such a system. The result is a first estimate of which tasks are technically feasible and economically attractive to automate – and which are not. We focus on computer vision, where cost modeling is more developed. We find that at today’s costs U.S. businesses would choose not to automate most vision tasks that have “AI Exposure,” and that only 23% of worker wages being paid for vision tasks would be attractive to automate. This slower roll-out of AI can be accelerated if costs falls rapidly or if it is deployed via AI-as-a-service platforms that have greater scale than individual firms, both of which we quantify. >Overall, our findings suggest that AI job displacement will be substantial, but also gradual – and therefore there is room for policy and retraining to mitigate unemployment impacts. – Read More
#strategyMIT Professor on AI’s future in a post-Moore’s Law era: Part 1
Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use
It’s here: months after it was first announced, Nightshade, a new, free software tool allowing artists to “poison” AI models seeking to train on their works, is now available for artists to download and use on any artworks they see fit.
Developed by computer scientists on the Glaze Project at the University of Chicago under Professor Ben Zhao, the tool essentially works by turning AI against AI. It makes use of the popular open-source machine learning framework PyTorch to identify what’s in a given image, then applies a tag that subtly alters the image at the pixel level so other AI programs see something totally different than what’s actually there. – Read More
LEGO:Language Enhanced Multi-modal Grounding Model
Multi-modal large language models have demonstrated impressive performance across various tasks in different modalities. However, existing multi-modal models primarily emphasize capturing global information within each modality while neglecting the importance of perceiving local information across modalities. Consequently, these models lack the ability to effectively understand the fine-grained details of input data, limiting their performance in tasks that require a more nuanced understanding. To address this limitation, there is a compelling need to develop models that enable fine-grained understanding across multiple modalities, thereby enhancing their applicability to a wide range of tasks. In this paper, we propose LEGO, a language enhanced multi-modal grounding model. Beyond capturing global information like other multi-modal models, our proposed model excels at tasks demanding a detailed understanding of local information within the input. It demonstrates precise identification and localization of specific regions in images or moments in videos. To achieve this objective, we design a diversified dataset construction pipeline, resulting in a multi-modal, multi-granularity dataset for model training. The code, dataset, and demo of our model can be found at https: //github.com/lzw-lzw/LEGO. – Read More
#nlp, #multi-modalTikTok can generate AI songs, but it probably shouldn’t
TikTok has launched many songs that have gone viral over the years, but now it’s testing a feature that lets more people exercise their songwriting skills… with some help from AI.
AI Song generates songs from text prompts with help from the large language model Bloom. Users can write out lyrics on the text field when making a post. TikTok will then recommend AI Song to add sounds to the post, and they can toggle the song’s genre. – Read More
Can this AI Tool Predict Your Death? Maybe, But Don’t Panic
It may sound like fantasy or fiction, but people predict the future all the time. Real-world fortune tellers—we call them actuaries and meteorologists—have successfully used computer models for years. And today’s accelerating advances in machine learning are quickly upgrading their digital crystal balls. Now a new artificial intelligence system that treats human lives like language may be able to competently guess whether you’ll die within a certain period, among other life details, according to a recent study in Nature Computational Science.
The study team developed a machine-learning model called life2vec that can make general predictions about the details and course of people’s life, such as forecasts related to death, international moves and personality traits. The model draws from data on millions of residents of Denmark, including details about birth dates, sex, employment, location and use of the country’s universal health care system. The study metrics found the new model to be more than 78 percent accurate at predicting mortality in the research population over a four-year period, and it significantly outperformed other predictive methods such as an actuarial table and various machine-learning tools. – Read More
Read the Paper