Apple today announced new Apple Intelligence features that elevate the user experience across iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro. Apple Intelligence unlocks new ways for users to communicate with features like Live Translation; do more with what’s on their screen with updates to visual intelligence; and express themselves with enhancements to Image Playground and Genmoji. Additionally, Shortcuts can now tap into Apple Intelligence directly, and developers will be able to access the on-device large language model at the core of Apple Intelligence, giving them direct access to intelligence that is powerful, fast, built with privacy, and available even when users are offline. These Apple Intelligence features are available for testing starting today, and will be available to users with supported devices set to a supported language this fall. — Read More
IBM now describing its first error-resistant quantum compute system
On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: one that can perform useful calculations while catching and fixing errors, and that would be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to perform 100 million operations without error on a collection of 200 logical qubits. The company expects to have it available for use in 2029.
Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.
“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.” — Read More
Duolingo’s CEO outlined his plan to become an ‘AI-first’ company. He didn’t expect the human backlash that followed
On April 28, Duolingo cofounder and CEO Luis von Ahn posted an email on LinkedIn that he had just sent to all employees at his company. In it, he outlined his vision for the language-learning app to become an “AI-first” organization, including phasing out contractors whose work AI could do and allowing a team to hire a new person only if it could not automate the work with AI.
The response was swift and scathing. “This is a disaster. I will cancel my subscription,” wrote one commenter. “AI first means people last,” wrote another. And a third summed up the general feeling of critics when they wrote: “I can’t support a company that replaces humans with AI.” — Read More
How You Can Use Few-Shot Learning In LLM Prompting To Improve Its Performance
You must’ve noticed that large language models can sometimes generate information that seems plausible but isn’t factually accurate. Providing more explicit instructions and context is one of the key ways to reduce such LLM hallucinations.
That said, have you ever struggled to get an AI model to understand precisely what you want to achieve? Perhaps you’ve provided detailed instructions only to receive outputs that fall short of the mark?
This is where the few-shot prompting technique comes in, guiding LLMs toward accurate, relevant, and properly formatted responses. With few-shot prompting, you teach the LLM by example rather than through complex explanations. Excited?! Let’s begin! — Read More
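To make the idea concrete, here is a minimal sketch of few-shot prompting, assuming an OpenAI-style chat client; the sentiment-labeling task, example pairs, and model name are illustrative assumptions, not details from the article. The key move is the same regardless of provider: prior user/assistant turns act as worked examples, and the real query arrives in exactly the same format.

```python
# Minimal few-shot prompting sketch using the OpenAI Python SDK
# (an assumption; the article does not name a provider). The task,
# example pairs, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot examples: each pair shows the model the exact input/output
# format we expect, instead of describing it in long instructions.
FEW_SHOT_EXAMPLES = [
    ("The checkout flow is confusing and slow.", "negative"),
    ("Support resolved my issue within minutes!", "positive"),
    ("The app works, but the UI feels dated.", "mixed"),
]

def classify_sentiment(review: str, model: str = "gpt-4o-mini") -> str:
    """Label a review as positive / negative / mixed, taught by example."""
    messages = [
        {"role": "system",
         "content": "You label customer reviews. Reply with one word: "
                    "positive, negative, or mixed."},
    ]
    # Teach by example: earlier user/assistant turns serve as demonstrations.
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    # The actual query comes last, mirroring the format of the examples.
    messages.append({"role": "user", "content": review})

    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_sentiment("Great lessons, but too many ads lately."))
```

The same examples could just as easily be concatenated into a single text prompt for a completion-style API; the demonstrations, not the transport format, are what steer the model toward accurate, consistently formatted answers.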