Kling, the AI video generator rival to Sora that’s wowing creators

If you follow any AI influencers or creators on social media, there’s a good chance you may have seen them more excited than usual lately about a new AI video generation model called “Kling.”

The videos it generates from pure text prompts and some configurable, in-app buttons and settings, look incredibly realistic, on par with OpenAI’s still non-public, invitation only, closed beta AI model Sora, which it has shared with a small group of artists and filmmakers as it tests it and its adversarial (read: risky, objectionable) uses.

[W]here did Kling come from? What does it offer? And how can you get your hands on it? Read on to find out. — Read More

#china-ai, #image-recognition

Andrew Ng: A Look At AI Agentic Workflows And Their Potential For Driving AI Progress

Read More

#videos

Fake beauty queens charm judges at the Miss AI pageant

Beauty pageant contestants have always been judged by their looks, and, in recent decades, by their do-gooderly deeds and winning personalities.

Still, one thing that’s remained consistent throughout beauty pageant history is that you had to be a human to enter.

But now that’s changing.

Models created using generative artificial intelligence (AI) are competing in the inaugural “Miss AI” pageant this month. — Read More

#fake

Using AI for Political Polling

Public polling is a critical function of modern political campaigns and movements, but it isn’t what it once was. Recent US election cycles have produced copious postmortems explaining both the successes and the flaws of public polling. There are two main reasons polling fails.

First, nonresponse has skyrocketed. It’s radically harder to reach people than it used to be. Few people fill out surveys that come in the mail anymore. Few people answer their phone when a stranger calls. Pew Research reported that 36% of the people they called in 1997 would talk to them, but only 6% by 2018. Pollsters worldwide have faced similar challenges.

Second, people don’t always tell pollsters what they really think. Some hide their true thoughts because they are embarrassed about them. Others behave as a partisan, telling the pollster what they think their party wants them to say—or what they know the other party doesn’t want to hear.

Despite these frailties, obsessive interest in polling nonetheless consumes our politics. Headlines more likely tout the latest changes in polling numbers than the policy issues at stake in the campaign. This is a tragedy for a democracy. We should treat elections like choices that have consequences for our lives and well-being, not contests to decide who gets which cushy job. — Read More

#strategy

Towards Conversational Diagnostic AI

At the heart of medicine lies the physician-patient dialogue, where skillful history-taking paves the way for accurate diagnosis, effective management, and enduring trust. Artificial Intelligence (AI) systems capable of diagnostic dialogue could increase accessibility, consistency, and quality of care. However, approximating clinicians’ expertise is an outstanding grand challenge. Here, we introduce AMIE (Articulate Medical Intelligence Explorer), a Large Language Model (LLM) based AI system optimized for diagnostic dialogue.

AMIE uses a novel self-play based simulated environment with automated feedback mechanisms for scaling learning across diverse disease conditions, specialties, and contexts. We designed a framework for evaluating clinically-meaningful axes of performance including history-taking, diagnostic accuracy, management reasoning, communication skills, and empathy. We compared AMIE’s performance to that of primary care physicians (PCPs) in a randomized, double-blind crossover study of text-based consultations with validated patient actors in the style of an Objective Structured Clinical Examination (OSCE). The study included 149 case scenarios from clinical providers in Canada, the UK, and India, 20 PCPs for comparison with AMIE, and evaluations by specialist physicians and patient actors. AMIE demonstrated greater diagnostic accuracy and superior performance on 28 of 32 axes according to specialist physicians and 24 of 26 axes according to patient actors. Our research has several limitations and should be interpreted with appropriate caution. Clinicians were limited to unfamiliar synchronous text-chat which permits large-scale LLM-patient interactions but is not representative of usual clinical practice. While further research is required before AMIE could be translated to real-world settings, the results represent a milestone towards conversational diagnostic AI. — Read More

#augmented-intelligence

Francois Chollet – LLMs won’t lead to AGI – $1,000,000 Prize to find true solution

Read More

#videos

Survey: More students, teachers are familiar with and using ChatGPT

recent poll shows K-12 students’ familiarity with ChatGPT rose from 37% to 75% in just over a year. The survey, by Impact Research for the Walton Family Foundation, also found that teachers’ familiarity with ChatGPT jumped from 55% to 79% from February 2023 to May 2024. — Read More

#strategy

Deception abilities emerged in large language models

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs are under suspicion of becoming able to deceive human operators and utilizing this ability to bypass monitoring efforts. As a prerequisite to this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art LLMs, but were nonexistent in earlier LLMs. We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents, that their performance in complex deception scenarios can be amplified utilizing chain-of-thought reasoning, and that eliciting Machiavellianism in LLMs can trigger misaligned deceptive behavior. GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time (P < 0.001). In complex second-order deception test scenarios where the aim is to mislead someone who expects to be deceived, GPT-4 resorts to deceptive behavior 71.46% of the time (P < 0.001) when augmented with chain-of-thought reasoning. In sum, revealing hitherto unknown machine behavior in LLMs, our study contributes to the nascent field of machine psychology. — Read More

#trust

Apple Intelligence is Right On Time

Apple’s annual Worldwide Developer Conference keynote kicks off in a few hours, and Mark Gurman has extensive details of what will be announced in Bloomberg, including the name: “Apple Intelligence”. As John Gruber noted on Daring Fireball:

His report reads as though he’s gotten the notes from someone who’s already watched Monday’s keynote. I sort of think that’s what happened, given how much of this no one had reported before today. 

… The irony of the leak being so huge is that nothing is particularly surprising: Apple is announcing and incorporating generative AI features throughout its operating systems and making them available to developers. Finally, the commentariat exclaims! Apple is in danger of falling dangerously behind! The fact they are partnering with OpenAI is evidence of how desperate they are! In fact, I would argue the opposite: Apple is not too late, they are taking the correct approach up-and-down the stack, and are well-positioned to be one of AI’s big winners. — Read More

#strategy

NVIDIA Unveils “NIMS” Digital Humans, Robots, Earth 2.0, and AI Factories

Read More

#nvidia, #videos