MIT Claims New Artificial Neuron 1 Million Times Faster Than the Real Thing


Think and you’ll miss it: researchers at MIT claim to have created analog synapses one million times faster than those in the human brain.

Just as digital processors need transistors, analog ones need programmable resistors. Once put into the right configuration, these resistors can be used to create a network of analog synapses and neurons, according to a press release.
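
The release doesn’t include code, but the principle behind resistor-based analog compute is easy to sketch numerically: program each resistor’s conductance to encode a synaptic weight, drive the rows with input voltages, and Ohm’s and Kirchhoff’s laws sum the resulting currents into a matrix-vector product in a single analog step. A minimal simulation of that idea (the array size and values below are illustrative assumptions, not MIT’s device parameters):

```python
import numpy as np

# Hypothetical 4x3 crossbar: each cell's conductance G[i, j] (siemens)
# is programmed to encode one synaptic weight.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # programmable resistors

# Input activations arrive as voltages on the rows.
V = np.array([0.2, 0.5, 0.1, 0.8])  # volts

# Ohm's law (I = G * V) per cell, Kirchhoff's current law per column:
# each output current is a weighted sum, i.e. a matrix-vector product
# computed in one analog step instead of many digital multiply-adds.
I = V @ G  # column currents, in amperes

print(I)  # three "neuron" inputs, ready for an activation stage
```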

These analog synapses aren’t just ultra-fast; they’re remarkably efficient, too. That matters, because as digital neural networks grow more advanced and powerful, they require more and more energy, considerably increasing their carbon footprint.

As detailed in a new paper, the researchers hope their findings will advance analog deep learning, a burgeoning branch of artificial intelligence. Read More

#nvidia

AMD’s latest APU could revolutionize supercomputers

AMD has confirmed that a next-gen multi-chip, multi-IP Instinct accelerator, known as the Instinct MI300, is currently in development and scheduled to launch in 2023. Technically speaking, this chip is actually an APU that combines next-generation CDNA 3 GPU cores with next-generation Zen 4 CPU cores.

That’s right – this chip combines CPU and GPU cores onto a single package for data centers and AI, and the anticipated performance boost is allegedly monstrous.

The Instinct MI300 accelerator pairs a unified-memory APU architecture with new math formats, promising a 5x performance-per-watt improvement over CDNA 2 and a projected 8x improvement in AI training performance versus its spiritual predecessor, the MI250X. Read More

#nvidia

Energy-Efficient AI Hardware Technology Via a Brain-Inspired Stashing System

Researchers have proposed a novel system inspired by the brain’s neuromodulation, referred to as a ‘stashing system,’ that consumes less energy. The research group, led by Professor Kyung Min Kim of KAIST’s Department of Materials Science and Engineering, has developed a technology that efficiently handles the mathematical operations behind artificial intelligence by imitating how a neural network’s topology changes continuously with the situation. The human brain changes its neural topology in real time, learning to store or recall memories as needed. The group presented a new AI learning method that directly implements these neural coordination circuit configurations. Read More
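
The summary doesn’t spell out the mechanism, but a network that stashes and recalls parts of its topology on demand can be sketched with a simple connection mask. Everything below, the gating rule included, is an illustrative assumption rather than the authors’ implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))          # full synaptic weight matrix
mask = np.ones_like(W, dtype=bool)   # True = connection currently active

def stash(rows):
    """Deactivate (stash) the outgoing connections of the given neurons."""
    mask[rows, :] = False

def recall(rows):
    """Reactivate previously stashed connections, weights unchanged."""
    mask[rows, :] = True

def forward(x):
    # Only the currently active topology participates in the computation.
    return np.tanh(x @ (W * mask))

x = rng.normal(size=8)
stash([0, 1, 2])        # e.g., a neuromodulatory signal prunes part of the net
y_sparse = forward(x)
recall([0, 1, 2])       # the stashed connections return as they were
y_full = forward(x)
```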

#nvidia

The first IBM mainframe for AI arrives

The next-generation IBM z16 comes with an IBM Telum processor for real-time AI insights.

Mainframes and AI? Isn’t that something like a Model T Ford with a Tesla motor? Actually, no. Mainframes are as relevant in 2022 as they were in the 1960s. IBM’s new z16, with its integrated on-chip Telum AI accelerator, is ready to analyze real-time transactions at scale. That makes it perfect for mission-critical mainframe workloads such as healthcare and financial transactions.

This 21st-century Big Iron’s AI accelerator is built into its core Telum processor. With this new dual-processor 5.2 GHz chip and its 16 cores, the z16 can perform 300 billion deep-learning inferences per day at one-millisecond latency. Can you say fast? IBM can. Read More
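
For a sense of scale, IBM’s headline figure translates to roughly 3.5 million inferences per second, a quick back-of-the-envelope check:

```python
inferences_per_day = 300e9
seconds_per_day = 24 * 60 * 60          # 86,400

per_second = inferences_per_day / seconds_per_day
print(f"{per_second:,.0f} inferences/second")  # ~3,472,222
```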

#nvidia

NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

Instant NeRF is a neural rendering model that learns a high-resolution 3D scene in seconds — and can render images of that scene in a few milliseconds.

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly — making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering. Read More
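
Instant NeRF’s real speedup comes from NVIDIA’s fast training and rendering machinery, which isn’t reproduced here, but the rendering step shared by NeRF-style models is straightforward to sketch: sample points along each camera ray, query a learned field for density and color, then alpha-composite the samples into a pixel. A minimal illustration with a stand-in field function (the function names and the toy field are assumptions):

```python
import numpy as np

def field(points):
    """Stand-in for the learned neural field: returns (density, rgb)
    per 3D point. A real NeRF queries a trained network here."""
    d = np.linalg.norm(points, axis=-1)
    density = np.exp(-d)                        # denser near the origin
    rgb = np.stack([d, 1 - d, 0.5 * d], -1).clip(0, 1)
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classic NeRF volume rendering: alpha-composite samples along a ray."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction       # sample positions along the ray
    sigma, rgb = field(pts)
    delta = np.diff(t, append=far)              # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)        # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color

color = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(color)
```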

#image-recognition, #nvidia

Meta has built an AI supercomputer it says will be world’s fastest by end of 2022

Social media conglomerate Meta is the latest tech company to build an “AI supercomputer” — a high-speed computer designed specifically to train machine learning systems. The company says its new AI Research SuperCluster, or RSC, is already among the fastest machines of its type and, when complete in mid-2022, will be the world’s fastest.

… The news demonstrates the absolute centrality of AI research to companies like Meta. Rivals like Microsoft and Nvidia have already announced their own “AI supercomputers,” which are slightly different from what we think of as regular supercomputers. RSC will be used to train a range of systems across Meta’s businesses: from content moderation algorithms used to detect hate speech on Facebook and Instagram to augmented reality features that will one day be available in the company’s future AR hardware. And, yes, Meta says RSC will be used to design experiences for the metaverse — the company’s insistent branding for an interconnected series of virtual spaces, from offices to online arenas. Read More

#big7, #metaverse, #nvidia

Meta has a giant new AI supercomputer to shape the metaverse

Meta, the tech giant previously known as Facebook, revealed Monday that it’s built one of the world’s fastest supercomputers, a behemoth called the Research SuperCluster, or RSC. With 6,080 graphics processing units packaged into 760 Nvidia A100 modules, it’s the fastest machine built for AI tasks, Chief Executive Mark Zuckerberg says.

That processing power is in the same league as the Perlmutter supercomputer, which uses more than 6,000 of the same Nvidia GPUs and currently ranks as the world’s fifth fastest supercomputer. And in a second phase, Meta plans to boost performance by a factor of 2.5 with an expansion to 16,000 GPUs this year. Read More
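
The numbers are easy to sanity-check: assuming each A100 module is an eight-GPU Nvidia DGX A100 system (Nvidia’s standard configuration, not stated in the article), the module count matches the GPU total, and the planned expansion roughly matches the claimed speedup:

```python
modules = 760
gpus_per_module = 8                    # assumed: DGX A100 systems hold 8 GPUs
print(modules * gpus_per_module)       # 6,080 GPUs, matching the article

phase2_gpus = 16_000
print(phase2_gpus / 6_080)             # ~2.63x the GPUs for a ~2.5x speedup
```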

#metaverse, #nvidia

Nvidia’s upgraded AI art tool turned my obscure squiggles into a masterpiece

It’s incredible, the things we can do with AI nowadays. For artists looking to integrate artificial intelligence into their workflow, ever more advanced tools are popping up all over the net. One such tool is Nvidia Canvas, which has just been updated with the more powerful GauGAN2 model, replacing the original GauGAN, along with loads of new features.

The Nvidia Canvas software is available for free to anyone with an Nvidia RTX graphics card. That’s because the software uses the tensor cores in your GPU to let the AI do its job. Read More

#gans, #image-recognition, #nvidia

‘Paint Me a Picture’: NVIDIA Research Shows GauGAN AI Art Demo Now Responds to Words

GauGAN2 uses a deep learning model that turns a simple written phrase, or sentence, into a photorealistic masterpiece.

A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research’s wildly popular AI painting demo.

The deep learning model behind GauGAN lets anyone channel their imagination into photorealistic masterpieces, and it’s easier than ever. Simply type a phrase like “sunset at a beach” and AI generates the scene in real time. Add an adjective like “sunset at a rocky beach,” or swap “sunset” for “afternoon” or “rainy day,” and the model, based on generative adversarial networks, instantly modifies the picture.

With the press of a button, users can generate a segmentation map, a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like sky, tree, rock and river, allowing the smart paintbrush to incorporate these doodles into stunning images. Read More
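
NVIDIA’s actual API isn’t shown in this summary, so here is a purely hypothetical sketch of the workflow it describes, from text to segmentation map to image. None of these functions exist in a real library; they simply mark the stages:

```python
# Hypothetical stand-ins for the GauGAN2-style pipeline described above;
# these are not real library calls, just markers for each stage.

def text_to_segmentation(prompt):
    """Stage 1: turn a phrase like 'sunset at a rocky beach' into a
    high-level layout of labeled regions (sky, rock, water, ...)."""
    return {"sky": [...], "rock": [...], "water": [...]}

def edit_with_brush(seg_map, label, strokes):
    """Stage 2 (optional): let the user redraw regions with label brushes."""
    seg_map[label] = strokes
    return seg_map

def generate_image(seg_map, prompt):
    """Stage 3: a conditional GAN renders the layout photorealistically;
    swapping a word in the prompt re-renders the scene."""
    return f"image({prompt}, regions={sorted(seg_map)})"

seg = text_to_segmentation("sunset at a rocky beach")
seg = edit_with_brush(seg, "tree", strokes=[...])
print(generate_image(seg, "sunset at a rocky beach"))
```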

#gans, #image-recognition, #nvidia

Facebook Develops New Machine Learning Chip

Google, Amazon and Microsoft have all been hiring and spending millions of dollars to design their own computer chips from scratch, with the goal of squeezing financial savings and better performance from servers that handle and train the companies’ machine-learning models. Facebook has joined the party too, and is developing a chip that powers machine learning for tasks such as recommending content to users, according to two people familiar with the project.

Another in-house chip designed by Facebook aims to improve the quality of watching recorded and livestreamed videos for users of its apps through a process known as video transcoding, one of the people said. If successful, the efforts to develop cheaper but more powerful semiconductors could help the company reduce the carbon footprint of its ever-growing data centers in coming years while also potentially decreasing its reliance on existing chip vendors, which recently included Intel, Qualcomm and Broadcom. Read More

#big7, #nvidia