Synthesia’s hyperrealistic deepfakes will soon have full bodies

Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.

The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness, says Victor Riparbelli, the company’s CEO. Synthesia intends to launch the new avatars toward the end of the year.  — Read More

#fake

She Built an AI Product Manager Bringing in Six Figures—As A Side Hustle

How Claire Vo created ChatPRD while working a demanding job

Claire Vo built ChatPRD—an on-demand chief product officer powered by AI. It’s now used by over 10,000 product managers and is pulling in six figures in revenue. 

The best part?

Claire has a demanding day job as the chief product officer at LaunchDarkly. So she built all of ChatPRD herself—over the weekend—with AI. — Read More

#podcasts, #strategy

Anthropic just dropped Claude 3.5 Sonnet with better vision and a sense of humor

Claude 3.5 Sonnet is the latest artificial intelligence model from Anthropic, one of the leading AI labs in the world. The company promises it is faster than its predecessor, has a better understanding of humor and can even read your handwriting.

Claude 3 Opus was already impressive; it was the model I dubbed the “most human-like” of any of the AI chatbots. I had a quick play with 3.5 Sonnet, and it does seem more natural, with a better understanding of sarcasm. Claude is also listed as the best alternative to ChatGPT in my guide to chatbots. — Read More

#nlp

AI Discovers That Not Every Fingerprint Is Unique

Columbia engineers have built a new AI that shatters a long-held belief in forensics: that fingerprints from different fingers of the same person are unique. It turns out they are similar; we’ve just been comparing fingerprints the wrong way!

… It’s a well-accepted fact in the forensics community that fingerprints from different fingers of the same person (“intra-person fingerprints”) are unique, and therefore unmatchable.

A team led by Columbia Engineering undergraduate senior Gabe Guo challenged this widely held presumption. Guo, who had no prior knowledge of forensics, found a public U.S. government database of some 60,000 fingerprints and fed them in pairs into an artificial intelligence-based system known as a deep contrastive network. Sometimes the pairs belonged to the same person (but different fingers), and sometimes they belonged to different people.  — Read More
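The pairing scheme described above — label a pair positive if both prints come from the same person (different fingers), negative otherwise — is the standard setup for contrastive learning. A minimal sketch, using toy feature vectors and plain Euclidean distance as illustrative stand-ins for the paper's actual fingerprint features and learned embedding:

```python
import itertools
import math

# Toy data: each person contributes prints from two different fingers.
# These 2-D feature vectors are invented for illustration only.
prints = {
    "alice": [[0.1, 0.9], [0.2, 0.8]],
    "bob":   [[0.9, 0.1], [0.8, 0.2]],
}

def make_pairs(prints):
    """Build labeled pairs: 1 = same person (different fingers), 0 = different people."""
    pairs = []
    for fps in prints.values():
        for a, b in itertools.combinations(fps, 2):
            pairs.append((a, b, 1))          # intra-person pair
    for p1, p2 in itertools.combinations(prints, 2):
        for a in prints[p1]:
            for b in prints[p2]:
                pairs.append((a, b, 0))      # inter-person pair
    return pairs

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

pairs = make_pairs(prints)
# A deep contrastive network would learn an embedding that pulls
# intra-person pairs together; with this toy data the raw distances
# already separate the two classes.
intra = [euclidean(a, b) for a, b, y in pairs if y == 1]
inter = [euclidean(a, b) for a, b, y in pairs if y == 0]
print(max(intra) < min(inter))  # → True for this toy data
```

The finding in the article is essentially that, with the right features, intra-person distances turn out to be measurably smaller than inter-person ones — contrary to the forensics assumption.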

Read the Paper

#legal

Mixture-of-Agents Enhances Large Language Model Capabilities

Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and generation tasks. With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction. Toward this goal, we propose a new approach that leverages the collective strengths of multiple LLMs through a Mixture-of-Agents (MoA) methodology. In our approach, we construct a layered MoA architecture wherein each layer comprises multiple LLM agents. Each agent takes all the outputs from agents in the previous layer as auxiliary information in generating its response. MoA models achieve state-of-the-art performance on AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. For example, our MoA using only open-source LLMs leads AlpacaEval 2.0 by a substantial gap, achieving a score of 65.1% compared to 57.5% for GPT-4 Omni. — Read More
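The layered flow the abstract describes can be sketched in a few lines. The stand-in agent functions, layer sizes, and string-joining aggregation below are illustrative assumptions; in the real system each agent is a separate LLM call that receives the previous layer's responses as auxiliary context:

```python
# Minimal sketch of a layered Mixture-of-Agents (MoA) pipeline.
# Each "agent" here is a stub; a real agent would query an LLM,
# prepending the prior layer's responses to its prompt.

def make_agent(name):
    def agent(prompt, prior_responses):
        # Stub: record how much auxiliary context this agent saw.
        return f"{name}({prompt}; saw {len(prior_responses)} prior)"
    return agent

def mixture_of_agents(prompt, layers):
    """Run the prompt through each layer; every agent in a layer
    sees all responses produced by the previous layer."""
    responses = []  # outputs of the previous layer (empty for layer 1)
    for layer in layers:
        responses = [agent(prompt, responses) for agent in layer]
    # The last layer acts as the aggregator; join its responses.
    return " || ".join(responses)

layers = [
    [make_agent("A1"), make_agent("A2"), make_agent("A3")],  # proposers
    [make_agent("B1"), make_agent("B2")],                    # middle layer
    [make_agent("Agg")],                                     # aggregator
]
print(mixture_of_agents("Explain MoA", layers))
```

The key design choice is that agents never talk to each other within a layer; information flows strictly forward, layer by layer, which lets heterogeneous open-source models be composed without any joint training.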

#performance

Safe Superintelligence Inc. launches: Here’s what it means

Three well-known generative AI pioneers have formed Safe Superintelligence Inc., a startup that will focus on safe superintelligence (SSI).

In a post, former OpenAI leaders Ilya Sutskever and Daniel Levy, along with Daniel Gross, a former Y Combinator partner, announced the company’s role and mission. Sutskever was OpenAI’s chief scientist, and Levy was an OpenAI engineer.

Here’s the Safe Superintelligence Inc. mission in a nutshell. The three founders wrote:

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.” — Read More

#singularity

More New Open Models

A trio of powerful open and semi-open models give developers new options for both text and image generation. Nvidia and Alibaba released high-performance large language models (LLMs), while Stability AI released a slimmed-down version of its flagship text-to-image generator.

… Nvidia offers the Nemotron-4 340B family of language models, which includes a 340-billion-parameter base model as well as versions fine-tuned to follow instructions and to serve as a reward model in reinforcement learning from human feedback. … Alibaba introduced the Qwen2 family of language models. Qwen2 includes base and instruction-tuned versions of five models that range in size from 500 million to 72 billion parameters and process context lengths between 32,000 and 128,000 tokens. … Stability AI launched the Stable Diffusion 3 Medium text-to-image generator, a 2-billion-parameter model based on the technology that underpins Stable Diffusion 3. — Read More

#strategy

OpenDevin, an autonomous AI software engineer

Read More

#devops, #videos

How Meta trains large language models at scale

As we continue to focus our AI research and development on solving increasingly complex problems, one of the most significant and challenging shifts we’ve experienced is the sheer scale of computation required to train large language models (LLMs).

Traditionally, our AI model training has involved training a massive number of models, each requiring a comparatively small number of GPUs. This was the case for our recommendation models (e.g., our feed and ranking models), which would ingest vast amounts of information to make the accurate recommendations that power most of our products.

With the advent of generative AI (GenAI), we’ve seen a shift towards fewer jobs, but incredibly large ones. Supporting GenAI at scale has meant rethinking how our software, hardware, and network infrastructure come together. — Read More

#big7

What happened when 20 comedians got AI to write their routines

AI is good at lots of things: spotting patterns in data, creating fantastical images, and condensing thousands of words into just a few paragraphs. But can it be a useful tool for writing comedy?

New research suggests that it can, but only to a very limited extent. It’s an intriguing finding that hints at the ways AI can—and cannot—assist with creative endeavors more generally.  — Read More

#humor