An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary

I’m stressed and running late, because what do you wear for the rest of eternity? 

This makes it sound like I’m dying, but it’s the opposite. I am, in a way, about to live forever, thanks to the AI video startup Synthesia. For the past several years, the company has produced AI-generated avatars, but today it launches a new generation, its first to take advantage of the latest advancements in generative AI, and they are more realistic and expressive than anything I’ve ever seen. While today’s release means almost anyone will now be able to make a digital double, on this early April afternoon, before the technology goes public, they’ve agreed to make one of me. — Read More

#fake

OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework

The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2× fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at \url{this https URL}. Additionally, OpenELM models can be found on HuggingFace at: \url{this https URL}. — Read More
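
The layer-wise scaling the abstract mentions can be sketched roughly as follows: rather than giving every transformer layer the same width, the number of attention heads and the FFN expansion ratio are interpolated from the first layer to the last, shifting more of the parameter budget toward deeper layers. The function name and the min/max values below are illustrative assumptions, not the paper's actual configuration:

```python
# Illustrative sketch of layer-wise scaling (names and values are
# hypothetical): interpolate per-layer attention-head counts and FFN
# expansion ratios linearly across the depth of the network.

def layerwise_scaling(num_layers, min_heads, max_heads, min_ratio, max_ratio):
    """Return a (heads, ffn_ratio) pair for each layer, interpolated linearly."""
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)  # 0.0 at the first layer, 1.0 at the last
        heads = int(round(min_heads + t * (max_heads - min_heads)))
        ratio = min_ratio + t * (max_ratio - min_ratio)
        configs.append((heads, ratio))
    return configs

if __name__ == "__main__":
    for layer, (heads, ratio) in enumerate(layerwise_scaling(8, 4, 16, 1.0, 4.0)):
        print(f"layer {layer}: {heads} heads, FFN ratio {ratio:.2f}")
```

Under a fixed total parameter budget, this non-uniform allocation is what the abstract credits for the accuracy gain over a uniformly sized model.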

#devops, #nlp

Evaluating language models on a wide range of open source legal reasoning tasks

There has been a considerable effort to measure language model performance in academic tasks and chatbot settings, but these high-level benchmarks are not applicable to specific industry use cases. Here we start to remedy this by reporting our application-specific findings and live leaderboard results on LegalBench, a large crowd-sourced collection of legal reasoning tasks. — Read More

#legal

How to make music with AI using Udio

There’s something quite alluring about trying to create art in a form you’re less familiar with. AI music is the latest canvas in this space.

While we can easily sketch a drawing with a pen and piece of paper at home, not all of us have instruments lying around or the skills to use them.

Generative AI removes those hurdles: tools like Udio, Stable Audio, Cassette AI and Suno let us dip our toes into music production, no prior experience required. Furthermore, Udio seems to be on to something: it combines a simple user experience with pretty decent results. — Read More

#audio

Introducing more enterprise-grade features for API customers

We [OpenAI] have introduced Private Link, a new way for customers to ensure direct communication between Azure and OpenAI while minimizing exposure to the open internet. We’ve also released native Multi-Factor Authentication (MFA) to help ensure compliance with increasing access control requirements. These are new additions to our existing stack of enterprise security features including SOC 2 Type II certification, single sign-on (SSO), data encryption at rest using AES-256 and in transit using TLS 1.2, and role-based access controls. We also offer Business Associate Agreements for healthcare companies that require HIPAA compliance and a zero data retention policy for API customers with a qualifying use case. — Read More

#cyber

Microsoft launches Phi-3, its smallest AI model yet

Microsoft launched the next version of its lightweight AI model Phi-3 Mini, the first of three small models the company plans to release. 

Phi-3 Mini has 3.8 billion parameters and is trained on a smaller dataset than large language models like GPT-4. It is now available on Azure, Hugging Face, and Ollama. Microsoft plans to follow with Phi-3 Small (7B parameters) and Phi-3 Medium (14B parameters). Parameters are the internal weights a model learns from its training data; in general, more parameters mean greater capacity, at the cost of more compute. — Read More
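
To see roughly where a figure like "3.8 billion parameters" comes from, a common back-of-the-envelope approximation for a decoder-only transformer is about 12 · layers · hidden_dim² non-embedding parameters, plus the embedding table. The layer count, hidden size, and vocabulary size below are Phi-3 Mini's reported configuration, used here as assumptions for the estimate:

```python
# Back-of-the-envelope parameter count for a decoder-only transformer,
# using the common ~12 * layers * hidden_dim**2 approximation for
# non-embedding parameters (4*d^2 for attention, 8*d^2 for a 4x FFN).
# The configuration values are assumptions based on Phi-3 Mini's
# reported specs, used only to illustrate the arithmetic.

def approx_transformer_params(num_layers, hidden_dim, vocab_size):
    attention = 4 * hidden_dim**2          # Q, K, V, and output projections
    ffn = 8 * hidden_dim**2                # two FFN matrices at 4x expansion
    embeddings = vocab_size * hidden_dim   # token embedding table
    return num_layers * (attention + ffn) + embeddings

if __name__ == "__main__":
    total = approx_transformer_params(num_layers=32, hidden_dim=3072, vocab_size=32064)
    print(f"~{total / 1e9:.2f}B parameters")  # lands in the same ballpark as 3.8B
```

The estimate deliberately ignores details like biases, normalization weights, and grouped-query attention, so it only shows the order of magnitude, not the exact published count.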

#nlp

AI Is Turning into Something Totally New — Mustafa Suleyman — TED

— Read More

#videos

Drake Uses AI Tupac and Snoop Dogg Vocals on ‘Taylor Made Freestyle’

The beef between Drake and what continues to be a strong segment of the hip-hop community grows deeper. On Friday night (April 19), the rapper released a song on his social media entitled “Taylor Made Freestyle,” which uses AI vocals from Tupac Shakur and Snoop Dogg as a stopgap between diss records while he awaits Kendrick Lamar’s reply to his freshly released “Push Ups.” — Read More

#audio

How Meta is paving the way for synthetic social networks

On Thursday, the AI hype train rolled through Meta’s family of apps. The company’s Meta AI assistant, a ChatGPT-like bot that can answer a wide range of questions, is beginning to roll out broadly across Facebook, Messenger, Instagram and WhatsApp.

Powering the bot is Llama 3, the latest and most capable version of Meta’s large language model. As with its predecessors — and in contrast to models from OpenAI, Google, and Anthropic — Llama 3 is open source. Today Meta made it available in two sizes: one with 8 billion parameters, and one with 70 billion parameters. (Parameters are the variables inside a large language model; in general, the more parameters a model contains, the smarter and more sophisticated its output.) — Read More

#big7, #devops

Maybe I don’t want a Rosey the Robot after all

Boston Dynamics’ latest — deliberately creepy? — humanoid robot has me rethinking my smart home robot dreams.

As a child of the 1980s, my perception of the smart home has been dominated by the idea that one day, we will all have Rosey the Robot-style robots roaming our homes — dusting the mantelpiece, preparing dinner, and unloading the dishwasher. (That last one is a must; we were smart enough to come up with a robot to wash our dishes; can’t we please come up with one that can also unload them?)

However, after seeing Boston Dynamics’ latest droid, Atlas, unveiled this week, my childhood dreams are fast turning into a smart home nightmare. While The Jetsons’ robot housekeeper had a steely charm, accentuated by its frilly apron, the closer we come to having humanoid robots in our home, the more terrifying it appears they will be. Not so much because of how they look — I could see Atlas in an apron — but more because of what they represent.  — Read More

#robotics