‘Mind-blowing’ IBM chip speeds up AI

IBM’s NorthPole processor sidesteps need to access external memory, boosting computing power and saving energy.

A brain-inspired computer chip that could supercharge artificial intelligence (AI) by working faster with much less power has been developed by researchers at IBM in San Jose, California. Their massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.” — Read More
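
The claimed savings come almost entirely from where the weights live: fetching a value from off-chip DRAM costs orders of magnitude more energy than reading it from on-chip memory. A back-of-envelope sketch of that arithmetic, using illustrative per-access energy figures (assumed ballpark values, not NorthPole measurements):

```python
# Back-of-envelope: why keeping weights on-chip saves energy.
# Per-access energies are illustrative ballpark figures (assumed),
# not measurements from the NorthPole paper.

PJ_DRAM_READ = 640.0  # off-chip DRAM read, pJ per 32-bit word (assumed)
PJ_SRAM_READ = 5.0    # on-chip SRAM read, pJ per 32-bit word (assumed)
PJ_MAC = 4.0          # one multiply-accumulate, pJ (assumed)

N_WEIGHTS = 25_000_000  # weights touched per inference (hypothetical)

def inference_energy_uj(weight_read_pj: float) -> float:
    """Microjoules to read every weight once and do one MAC per weight."""
    return N_WEIGHTS * (weight_read_pj + PJ_MAC) / 1e6

off_chip = inference_energy_uj(PJ_DRAM_READ)
on_chip = inference_energy_uj(PJ_SRAM_READ)
print(f"weights in DRAM: {off_chip:,.0f} uJ per inference")
print(f"weights on-chip: {on_chip:,.0f} uJ per inference")
print(f"savings: {off_chip / on_chip:.0f}x")
```

Under these assumptions the memory traffic, not the arithmetic, dominates the energy budget; that traffic is exactly what NorthPole's design removes.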

#human, #nvidia

A new chip architecture points to faster, more energy-efficient AI

We’re in the midst of a Cambrian explosion in AI. Over the last decade, AI has gone from theory and small tests to enterprise-scale use cases. But the hardware used to run AI systems, although increasingly powerful, was not designed with today’s AI in mind. As AI systems scale, the costs skyrocket. And Moore’s Law, the observation that transistor density in processors doubles roughly every two years, has slowed.

But new research out of IBM Research’s lab in Almaden, California, nearly two decades in the making, has the potential to drastically shift how we can efficiently scale up powerful AI hardware systems. — Read More

Read the Paper

#human

Meta’s Habitat 3.0 simulates real-world environments for intelligent AI robot training

Researchers from Meta Platforms Inc.’s Fundamental Artificial Intelligence Research team said today they’re releasing a more advanced version of the AI simulation environment Habitat, which is used to teach robots how to interact with the physical world.

Along with the launch of Habitat 3.0, the company announced the release of the Habitat Synthetic Scenes Dataset, an artist-authored 3D dataset that can be used to train AI navigation agents, as well as HomeRobot, an affordable robot assistant hardware and software platform for use in both simulated and real-world environments.

In a blog post, FAIR researchers explained that the new releases represent their ongoing progress in what they like to call “embodied AI.” By that, they mean AI agents that can perceive and interact with their environment, share that environment safely with human partners, and communicate with and assist those human partners in both the digital and the physical world. — Read More

#robotics

DALL·E 3 is now available in ChatGPT Plus and Enterprise

ChatGPT can now create unique images from a simple conversation—and this new feature is available to Plus and Enterprise users today. Describe your vision, and ChatGPT will bring it to life by providing a selection of visuals for you to refine and iterate upon. You can ask for revisions right in the chat. This is powered by our most capable image model, DALL·E 3.

DALL·E 3 is the culmination of several research advancements, both from within and outside of OpenAI. Compared to its predecessor, DALL·E 3 generates images that are not only more visually striking but also crisper in detail. DALL·E 3 can reliably render intricate details, including text, hands, and faces. Additionally, it is particularly good at responding to extensive, detailed prompts, and it can support both landscape and portrait aspect ratios. These capabilities were achieved by training a state-of-the-art image captioner to generate better textual descriptions for the images that we trained our models on. DALL·E 3 was then trained on these improved captions, resulting in a model that pays much closer attention to user-supplied captions. You can read more about this process in our research paper. — Read More
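
Outside ChatGPT, the same model is reachable programmatically. A minimal sketch using the openai Python SDK's images endpoint; the prompt, size, and quality settings here are illustrative, and model availability depends on your account:

```python
# Minimal sketch: generating an image with DALL·E 3 via the OpenAI API.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor of a lighthouse at dawn with a sign "
           "reading 'NORTH POINT'",  # legible text tests the new captioning
    size="1024x1792",   # portrait; 1792x1024 gives landscape
    quality="standard",
    n=1,
)

print(response.data[0].url)             # link to the generated image
print(response.data[0].revised_prompt)  # the model may rewrite your prompt
```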

#image-generation

Let me finish your sentences

We’re all stochastic parrots (or) what AI can teach us about being human.

… It turns out that a machine that can finish our sentences can, with very minor modifications, also be made to write essays and stories, to summarize and translate. It can write working code and stylized poetry, generate art in the style of the old masters, and pass the SAT, GRE, LSAT, AP, and Bar exams. It can answer philosophical questions, act as a co-pilot, tutor, and therapist, do your child’s homework, and much more. 

The emergence of such new and general capabilities wasn’t obvious or necessarily a given. Almost no one, not even the creators of ChatGPT, fully anticipated its wide spectrum of cognitive and creative abilities. Despite Moravec’s Paradox, very few predicted that skills requiring human creativity would be among the first to fall to AI. — Read More
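
The "finish our sentences" framing is literal: underneath, these models do nothing but predict the next token, over and over. A toy version of the same loop, shrunk to a word-level bigram model trained on a few sentences (nothing like a real LLM, but mechanically the same idea):

```python
# Toy next-word predictor: the "finish the sentence" loop of an LLM,
# shrunk to a word-level bigram model. Illustrative only.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Record which words follow which in the training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def finish(prompt: str, max_words: int = 10) -> str:
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # never saw this word followed by anything; stop
        words.append(random.choice(candidates))  # sample the next word
        if words[-1] == ".":
            break
    return " ".join(words)

random.seed(0)
print(finish("the cat"))  # e.g. "the cat sat on the rug ."
```

Swap the bigram counts for a transformer trained on much of the internet and you get the spectrum of abilities described above; the sampling loop barely changes.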

#nlp

Towards a Real-Time Decoding of Images from Brain Activity

At every moment of every day, our brains meticulously sculpt a wealth of sensory signals into meaningful representations of the world around us. Yet how this continuous process actually works remains poorly understood.

Today, Meta is announcing an important milestone in the pursuit of that fundamental question. Using magnetoencephalography (MEG), a non-invasive neuroimaging technique in which thousands of brain activity measurements are taken per second, we showcase an AI system capable of decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution.

This AI system can be deployed in real time to reconstruct, from brain activity, the images perceived and processed by the brain at each instant. — Read More

Read the Paper
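
The paper's system pairs a deep MEG encoder, contrastively aligned with pretrained image embeddings, with a generative image decoder. The sketch below shows only the simplest baseline behind that idea, a ridge-regression map from MEG features to an image-embedding space, with every shape, name, and data array invented for illustration:

```python
# Minimal sketch of linear brain-to-image-embedding decoding, the
# simplest baseline behind MEG decoding work. Meta's actual system uses
# a trained deep MEG encoder, contrastive alignment, and a generative
# image decoder; all shapes and data below are invented.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_trials, n_sensors, n_times = 1_000, 273, 60  # assumed MEG dimensions
emb_dim = 768                                  # assumed image-embedding size

meg = rng.standard_normal((n_trials, n_sensors * n_times))  # flattened MEG
img_emb = rng.standard_normal((n_trials, emb_dim))  # embeddings of seen images

decoder = Ridge(alpha=1e4)  # heavy regularization: few trials, many features
decoder.fit(meg[:800], img_emb[:800])
pred = decoder.predict(meg[800:])

# Retrieval-style evaluation: is the predicted embedding closest to the
# embedding of the image the subject actually saw?
def normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=1, keepdims=True)

sims = normalize(pred) @ normalize(img_emb[800:]).T
top1 = (sims.argmax(axis=1) == np.arange(len(pred))).mean()
print(f"top-1 retrieval accuracy: {top1:.1%}")  # ~chance on this random data
```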

#human

OpenAI Finally Allows ChatGPT Complete Internet Access

OpenAI’s world-famous chatbot is free to rummage through the internet’s darkest corners. The company declared Tuesday that the “Browse with Bing” feature is ready for prime time for those ChatGPT users paying for Plus or Enterprise editions. This lets ChatGPT access up-to-date information, rather than being limited to training data with a cutoff of September 2021. — Read More

#chatbots

Clearview AI and the end of privacy, with author Kashmir Hill

Today, I’m talking to Kashmir Hill, a New York Times reporter whose new book, Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It, chronicles the story of Clearview AI, a company that’s built some of the most sophisticated facial recognition and search technology that’s ever existed. As Kashmir reports, you simply plug a photo of someone into Clearview’s app, and it will find every photo of that person that’s ever been posted on the internet. It’s breathtaking and scary. 

Kashmir is a terrific reporter. At The Verge, we have been jealous of her work across Forbes, Gizmodo, and now the Times for years. She’s long been focused on covering privacy on the internet, which she is the first to describe as the dystopia beat, because the amount of tracking that occurs all over our networks every day is almost impossible to fully understand or reckon with. But people get it when the systems start tracking faces — when that last bit of anonymity goes away.

… But not everyone. Your Face Belongs to Us is the story of Clearview AI, a secretive startup that, until January 2020, was virtually unknown to the public, despite selling this state-of-the-art facial recognition system to cops and corporations. — Read More

#podcasts, #surveillance

China Chips and Moore’s Law

On Tuesday the Biden administration tightened export controls for advanced AI chips being sold to China; the primary target was Nvidia’s H800 and A800 chips, which were specifically designed to skirt controls put in place last year. The primary difference between the H800/A800 and H100/A100 is the bandwidth of their interconnects: the A100 had 600 GB/s interconnects (the H100 has 900 GB/s), which just so happened to be the limit prescribed by last year’s export controls; the A800 and H800 were limited to 400 GB/s interconnects.

The reason why interconnect speed matters is tied up with Nvidia CEO Jensen Huang’s thesis that Moore’s Law is dead. Moore’s Law, as originally formulated in 1965, held that the number of transistors in an integrated circuit would double every year. Moore revised his prediction 10 years later to a doubling every two years, which held until the last decade or so, when it slowed to a doubling about every three years. — Read More
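
Those doubling periods compound very differently. With the standard growth formula N(t) = N0 * 2^(t/T), where T is the doubling period in years, a decade looks like this:

```python
# Compounding under Moore's Law: growth after t years with doubling
# period T is 2 ** (t / T).

def growth_factor(years: float, doubling_period: float) -> float:
    return 2 ** (years / doubling_period)

for period, label in [(1, "1965 statement"),
                      (2, "1975 revision"),
                      (3, "recent pace")]:
    print(f"{label} (doubling every {period}y): "
          f"{growth_factor(10, period):,.0f}x after a decade")

# 1965 statement (doubling every 1y): 1,024x after a decade
# 1975 revision (doubling every 2y): 32x after a decade
# recent pace (doubling every 3y): 10x after a decade
```

Roughly a hundredfold gap per decade between the original pace and today's is the backdrop for Huang's thesis.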

#china-ai, #nvidia

How transparent are AI models? Stanford researchers found out.

Today Stanford University’s Center for Research on Foundation Models (CRFM) took a big swing at evaluating the transparency of a variety of large language models (which it calls foundation models). It released a new Foundation Model Transparency Index to address the fact that while AI’s societal impact is rising, the public transparency of these models is falling, even though that transparency is necessary for public accountability, scientific innovation, and effective governance. — Read More

#trust