…and it will look a lot like AGI
In 2022, large language models (LLMs) finally got good. Specifically, Google and OpenAI led the way in creating foundation models that respond to instructions more usefully. For OpenAI, this came in the form of InstructGPT (OpenAI blogpost), while for Google it was reflected in the FLAN training method (Wei et al. 2022, arXiv).
… But the best is yet to come. The really exciting applications will be action-driven, where the model acts like an agent choosing actions. And although academics can argue all day about the true definition of AGI, an action-driven LLM is going to look a lot like AGI. Read More
Monthly Archives: November 2022
Intel Introduces Real-Time Deepfake Detector
Intel’s deepfake detector analyzes ‘blood flow’ in video pixels to return results in milliseconds with 96% accuracy.
As part of Intel’s Responsible AI work, the company has productized FakeCatcher, a technology that can detect fake videos with a 96% accuracy rate. Intel’s deepfake detection platform is the world’s first real-time deepfake detector that returns results in milliseconds.
Intel’s real-time platform uses FakeCatcher, a detector designed by Intel’s Ilke Demir in collaboration with Umur Ciftci from the State University of New York at Binghamton. Using Intel hardware and software, it runs on a server and interfaces through a web-based platform. Read More
Why Meta’s latest large language model survived only three days online
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
On November 15 Meta unveiled a new large language model called Galactica, designed to assist scientists. But instead of landing with the big bang Meta hoped for, Galactica has died with a whimper after three days of intense criticism. Yesterday the company took down the public demo that it had encouraged everyone to try out.
Meta’s misstep—and its hubris—show once again that Big Tech has a blind spot about the severe limitations of large language models. There is a large body of research that highlights the flaws of this technology, including its tendencies to reproduce prejudice and assert falsehoods as facts. Read More
Stanford debuts first AI benchmark to help understand Large Language Models
In the world of artificial intelligence (AI) and machine learning (ML), 2022 has arguably been the year of foundation models, or AI models trained at massive scale. From GPT-3 to DALL-E, from BLOOM to Imagen — another day, it seems, another large language model (LLM) or text-to-image model. But until now, there have been no AI benchmarks to provide a standardized way to evaluate these models, which have developed at a rapidly accelerating pace over the past couple of years.
LLMs have particularly captivated the AI community, but according to the Stanford Institute for Human-Centered AI (HAI)’s Center for Research on Foundation Models, the absence of an evaluation standard has compromised the community’s ability to understand these models, as well as their capabilities and risks.
To that end, today the CRFM announced the Holistic Evaluation of Language Models (HELM), which it says is the first benchmarking project aimed at improving the transparency of language models and the broader category of foundation models. Read More
A Complete Guide to Natural Language Processing
Natural Language Processing (NLP) is one of the hottest areas of artificial intelligence (AI) thanks to applications like text generators that compose coherent essays, chatbots that fool people into thinking they’re sentient, and text-to-image programs that produce photorealistic images of anything you can describe. Recent years have brought a revolution in the ability of computers to understand human languages, programming languages, and even biological and chemical sequences, such as DNA and protein structures, that resemble language. The latest AI models are unlocking these areas to analyze the meanings of input text and generate meaningful, expressive output. Read More
AI Can Now Make Fake Selfies For Your Tinder Profile
Get ready to swipe right on some AI-generated profile pics.
The AI image-generating craze has entered its next phase of absurdity: creating fake profile pics that make you look good on dating apps and social media.
For $19, a service called PhotoAI will use 12-20 of your mediocre, poorly-lit selfies to generate a batch of fake photos specially tailored to the style or platform of your choosing. The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire, while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses. Read More
Notion’s latest feature is an AI that can write blog posts, to-do lists and more
Before you ask, Notion AI did not write this article.
Notion, the company behind the popular note-taking app of the same name, has started testing a new feature called Notion AI that uses a generative AI to write notes and other content. The Verge got a chance to use the software before today’s announcement. The interface is straightforward. You first select the type of writing you want help with from a list that includes options like “blog post,” “marketing email” and “to-do list.” You then provide the software with a suitable prompt, hit the blue “Generate” button and then watch as it creates text in real time.
Judging from some of the writing the tool produced for The Verge, it benefits, like other generative AIs, from the user being as specific as possible about what they want. For instance, when the outlet asked Notion AI to write a blog post about the state of the smartwatch industry, the resulting draft mentioned the Apple Watch 4, Samsung Galaxy Watch and Tizen. In other words, it wrote about the state of the market in 2018, not as it exists today. Read More
When AI can make art – what does it mean for creativity?
Image-generators such as Dall-E 2 can produce pictures on any theme you wish for in seconds. Some creatives are alarmed but others are sceptical of the hype
When the concept artist and illustrator RJ Palmer first witnessed the fine-tuned photorealism of compositions produced by the AI image generator Dall-E 2, his feeling was one of unease. The tool, released by the AI research company OpenAI, showed a marked improvement on 2021’s Dall-E, and was quickly followed by rivals such as Stable Diffusion and Midjourney. Type in any surreal prompt, from Kermit the frog in the style of Edvard Munch, to Gollum from The Lord of the Rings feasting on a slice of watermelon, and these tools will return a startlingly accurate depiction moments later.
The internet revelled in the meme-making opportunities, with a Twitter account documenting “weird Dall-E generations” racking up more than a million followers. Cosmopolitan trumpeted the world’s first AI-generated magazine cover, and technology investors fell over themselves to wave in the new era of “generative AI”. The image-generation capabilities have already spread to video, with the release of Google’s Imagen Video and Meta’s Make-A-Video.
But AI’s new artistic prowess wasn’t received so ecstatically by some creatives. “The main concern for me is what this does to the future of not just my industry, but creative human industries in general,” says Palmer. Read More
DeviantArt Has a Plan to Keep Its Users’ Art Somewhat Safe From AI Image Generators
The art hosting site is releasing its own AI art system called DreamUp, and users can decide if they want to let their work be picked up by the system.
The year of our lord 2022 could be accurately described as the rise of AI. Instead of Skynet raining fire on our heads, we have AI image generators creating a different kind of apocalypse, especially for artists who promote their work online. So far, few have tried to answer how creators can actually respond to systems that scrape their work from the internet, using art to create new works without offering them any credit.
On Friday, DeviantArt released its new DreamUp AI art generator. Based on the existing Stable Diffusion AI model, this new system will automatically tag the images it produces as AI-generated, and will even credit which creators’ work was used to create an image when it is published on the DeviantArt site. Read More