Inflection debuts its own foundation AI model to rival Google and OpenAI LLMs

Inflection, a well-funded AI startup aiming to create “personal AI for everyone,” has taken the wraps off the large language model powering its Pi conversational agent. It’s hard to evaluate the quality of these models in any way, let alone objectively and systematically, but a little competition is a good thing.

Inflection-1, as the model is called, is of roughly GPT-3.5 (a.k.a. ChatGPT) size and capability, as measured by the computing power used to train them. The company claims that it is competitive with, or superior to, other models in this tier, backing that up with a “technical memo” describing benchmarks it ran on its own model, GPT-3.5, LLaMA, Chinchilla, and PaLM-540B. — Read More

#chatbots

Do Foundation Model Providers Comply with the Draft EU AI Act?

Foundation models like ChatGPT are transforming society with their remarkable capabilities, serious risks, rapid deployment, unprecedented adoption, and unending controversy. Simultaneously, the European Union (EU) is finalizing its AI Act as the world’s first comprehensive regulation to govern AI, and just yesterday the European Parliament adopted a draft of the Act by a vote of 499 in favor, 28 against, and 93 abstentions. The Act includes explicit obligations for foundation model providers like OpenAI and Google.

In this post, we evaluate whether major foundation model providers currently comply with these draft requirements and find that they largely do not. — Read More

#legal

AI-generated images are everywhere. Here’s how to spot them

Amid debates about how artificial intelligence will affect jobs, the economy, politics and our shared reality, one thing is clear: AI-generated content is here.

Chances are you’ve already encountered content created by generative AI software, which can produce realistic-seeming text, images, audio and video.

So what do you need to know about sorting fact from AI fiction? And how can we think about using AI responsibly? — Read More

#fake

Orca: Progressive Learning from Complex Explanation Traces of GPT-4

Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues affect the quality of these models, ranging from limited imitation signals from shallow LFM outputs and small-scale, homogeneous training data to, most notably, a lack of rigorous evaluation that overestimates the small model’s capability, since such models tend to imitate the style, but not the reasoning process, of LFMs. To address these challenges, we develop Orca (we are working with our legal team to publicly release a diff of the model weights in accordance with LLaMA’s release policy, to be published at this https URL), a 13-billion-parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% on complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and by 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (a 4-point gap with an optimized system message) on professional and academic examinations such as the SAT, LSAT, GRE, and GMAT, in zero-shot settings without CoT, while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether generated by humans or by more advanced AI models, is a promising direction for improving model capabilities and skills. — Read More
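
As a rough sketch of the data-collection idea described above (this is not the Orca authors’ pipeline; the teacher model name, system prompt, and file layout are illustrative assumptions), one could gather explanation traces by asking a teacher model to reason step by step and saving the instruction/trace pairs for later fine-tuning:

```python
# Minimal sketch of collecting explanation traces from a teacher model.
# Not the Orca authors' code; the system prompt and model name are
# illustrative assumptions. Requires OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = ("You are a helpful assistant. Think step by step and justify "
          "each step of your reasoning before giving the final answer.")

instructions = [
    "If a train travels 120 km in 1.5 hours, what is its average speed?",
    "Explain why the sum of two odd integers is always even.",
]

with open("traces.jsonl", "w") as f:
    for task in instructions:
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": task}],
        )
        # Store (system prompt, instruction, explanation trace) for tuning.
        f.write(json.dumps({"system": SYSTEM,
                            "instruction": task,
                            "trace": reply.choices[0].message.content}) + "\n")
```

The paper’s “judicious sampling and selection” step, which filters and balances this imitation data before training, is not shown here.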

#explainability

Apple Is an AI Company Now

After more than a decade, autocorrect “fails” could be on their way out. Apple’s much-maligned spelling software is getting upgraded by artificial intelligence: Using sophisticated language models, the new autocorrect won’t just check words against a dictionary, but will be able to consider the context of the word in a sentence. In theory, it won’t suggest consolation when you mean consolidation, because it’ll know that those words aren’t interchangeable.
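
A toy illustration of the idea (an assumption for illustration, not Apple’s implementation): a masked language model scores candidate words by how well they fit the surrounding sentence, which is the mechanism that would let an autocorrect prefer consolidation over consolation in a business context.

```python
# Toy sketch of context-aware correction with a masked language model.
# Illustration only: this says nothing about Apple's actual implementation.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Rank candidate words for the blank purely by sentence context.
sentence = "The board approved the [MASK] of the two regional divisions."
for pred in fill(sentence, top_k=5):
    print(f"{pred['token_str']:>15}  {pred['score']:.4f}")

# An autocorrect built on this idea would score each candidate correction
# the same way and keep the word that best fits the context.
```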

The next generation of autocorrect was one of several small updates to the iPhone experience that Apple announced earlier this month. The Photos app will be able to differentiate between your dog and other dogs, automatically recognizing your pup the same way it recognizes people who frequently appear in your pictures. And AirPods will get smarter about adjusting to background noise based on your listening over time.

All of these features are powered by AI—even if you might not know it from how Apple talks about them. Its conference unveiling the updates included zero mentions of AI, now a buzzword for tech companies of all stripes. Instead, Apple used more technical language such as machine learning or transformer language model. Apple has been quiet about the technology—so quiet that it has been accused of falling behind. Indeed, whereas ChatGPT can write halfway-decent business proposals, Siri can set your morning alarm and not much else. But Apple is pushing forward with AI in small ways, an incrementalist approach that might nonetheless be where this technology is headed. — Read More

#big7

Grammys CEO Breaks Down Rules Around AI Recordings: “This Is Something We Have to Pay Attention To”

As the world continues to grapple with the AI takeover, so do the Grammys.

The Recording Academy made headlines last week when it announced its rules for music created with artificial intelligence. Some feel those songs should be banned; others say they are creative and innovative.

The Grammys are listening to both sides — but don’t expect them to award a robot. — Read More

#audio, #vfx

High-resolution image reconstruction with latent diffusion models from human brain activity

Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity remains a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from human brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion, which reduces the computational cost of DMs while preserving their high generative performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the image latent vector Z, the conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that our proposed method can reconstruct high-resolution images with high fidelity in a straightforward fashion, without any additional training or fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of different LDM components from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from human brain activity, and provides a new framework for understanding DMs. Please check out our webpage at https://sites.google.com/view/stablediffusion-with-brain/. — Read More
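
A minimal sketch of the recipe as the abstract describes it: fit a linear map from fMRI features to the LDM’s conditioning embedding C, then decode with a frozen Stable Diffusion pipeline. The array shapes and random data below are placeholders, not the paper’s dataset, and this is not the authors’ released code.

```python
# Hedged sketch of the core idea: a linear map from fMRI voxels to
# Stable Diffusion's conditioning embedding C, decoded with the frozen LDM.
# Random arrays stand in for real fMRI data; shapes are illustrative only.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from diffusers import StableDiffusionPipeline

n_scans, n_voxels = 500, 2000        # placeholder dataset size
seq_len, hidden = 77, 768            # text-conditioning shape in SD v1

X_train = np.random.randn(n_scans, n_voxels)          # fMRI features
C_train = np.random.randn(n_scans, seq_len * hidden)  # flattened targets

# Ridge regression as a stand-in for the paper's linear decoding models.
reg = Ridge(alpha=1e4).fit(X_train, C_train)

x_test = np.random.randn(1, n_voxels)                 # one held-out scan
c_pred = reg.predict(x_test).reshape(1, seq_len, hidden)

# Feed the predicted embedding to the frozen pipeline in place of a prompt.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(prompt_embeds=torch.tensor(c_pred, dtype=torch.float32)).images[0]
image.save("reconstruction.png")
```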

#human

Parrot — Stenographic Transcription and Digital Reporting for Depositions & EUOs

The all-in-one platform offering speech-to-text for remote depositions, stenographic transcriptions and reporting for lawyers. — Read More

#legal

OpenAI Lobbied the E.U. to Water Down AI Regulation

The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation.

But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company…

… In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems—including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2—to be “high risk,” a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight. — Read More

#legal

Inside China’s underground market for high-end Nvidia AI chips

Psst! Where can a Chinese buyer purchase top-end Nvidia (NVDA.O) AI chips in the wake of U.S. sanctions?

Visiting the famed Huaqiangbei electronics area in the southern Chinese city of Shenzhen is a good bet – in particular, the SEG Plaza skyscraper whose first 10 floors are crammed with shops selling everything from camera parts to drones. The chips are not advertised but asking discreetly works. — Read More

#china-vs-us, #nvidia