Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time

Large language models (LLMs) with hundreds of billions of parameters have sparked a new wave of exciting AI applications. However, they are computationally expensive at inference time. Sparsity is a natural approach to reduce this cost, but existing methods either require costly retraining, forgo the LLM's in-context learning ability, or fail to yield wall-clock speedups on modern hardware. We hypothesize that contextual sparsity, which consists of small, input-dependent sets of attention heads and MLP parameters that yield approximately the same output as the dense model for a given input, can address these issues. We show that contextual sparsity exists, that it can be accurately predicted, and that we can exploit it to speed up LLM inference in wall-clock time without compromising the model's quality or in-context learning ability. Based on these insights, we propose DejaVu, a system that uses a low-cost algorithm to predict contextual sparsity on the fly given the inputs to each layer, along with an asynchronous and hardware-aware implementation that speeds up LLM inference. We validate that DejaVu can reduce the inference latency of OPT-175B by over 2X compared to the state-of-the-art FasterTransformer, and by over 6X compared to the widely used Hugging Face implementation, without compromising model quality. The code is available at this https URL. — Read More
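To make the idea concrete, here is a minimal PyTorch sketch of input-dependent sparsity for the MLP part of a transformer block: a tiny low-rank predictor scores the hidden neurons for the current input, and only the top-k rows and columns of the dense weights are used. The class name, sizes, and predictor design are illustrative assumptions, not DejaVu's actual implementation, and plain tensor gathering like this does not by itself give wall-clock speedups; the paper's gains come from its asynchronous, hardware-aware kernels and from also sparsifying attention heads.

```python
import torch
import torch.nn as nn


class SparsePredictedMLP(nn.Module):
    """Toy MLP block whose active hidden neurons are chosen per input (hypothetical)."""

    def __init__(self, d_model=1024, d_hidden=4096, rank=128, k=512):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        # Low-cost predictor: a small low-rank network that estimates which
        # hidden neurons are likely to matter for this particular input.
        self.predictor = nn.Sequential(nn.Linear(d_model, rank), nn.Linear(rank, d_hidden))
        self.k = k

    def forward(self, x):                              # x: (batch, d_model)
        scores = self.predictor(x)                     # (batch, d_hidden) cheap activation estimate
        idx = scores.topk(self.k, dim=-1).indices      # (batch, k) predicted-active neurons
        w1 = self.fc1.weight[idx]                      # (batch, k, d_model) gathered rows
        b1 = self.fc1.bias[idx]                        # (batch, k)
        w2 = self.fc2.weight[:, idx].permute(1, 0, 2)  # (batch, d_model, k) gathered columns
        h = torch.relu(torch.einsum("bkd,bd->bk", w1, x) + b1)      # only k neurons computed
        return torch.einsum("bdk,bk->bd", w2, h) + self.fc2.bias    # (batch, d_model)


# Example: only k of the 4096 hidden neurons are evaluated for each input.
mlp = SparsePredictedMLP()
y = mlp(torch.randn(2, 1024))
print(y.shape)  # torch.Size([2, 1024])
```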

#performance

Ukraine Is Riddled With Land Mines. Drones and AI Can Help

Early on a June morning in 2023, my colleagues and I drove down a bumpy dirt road north of Kyiv in Ukraine. The Ukrainian Armed Forces were conducting training exercises nearby, and mortar shells arced through the sky. We arrived at a vast field for a technology demonstration set up by the United Nations. Across the 25-hectare field (about the size of 62 American football fields), the U.N. workers had scattered 50 to 100 inert mines and other ordnance. Our task was to fly our drone over the area and use our machine learning software to detect as many as possible. And we had to turn in our results within 72 hours.

The scale was daunting: The area was 10 times as large as anything we’d attempted before with our drone demining startup, Safe Pro AI. My cofounder Gabriel Steinberg and I used flight-planning software to program a drone to cover the whole area with some overlap, taking photographs the whole time. It ended up taking the drone 5 hours to complete its task, and it came away with more than 15,000 images. Then we raced back to the hotel with the data it had collected and began an all-night coding session.

We were happy to see that our custom machine learning model took only about 2 hours to crunch through all the visual data and identify potential mines and ordnance. But constructing a map for the full area that included the specific coordinates of all the detected mines in under 72 hours was simply not possible with any reasonable computational resources. The following day (which happened to coincide with the short-lived Wagner Group rebellion), we rewrote our algorithms so that our system mapped only the locations where suspected land mines were identified—a more scalable solution for our future work. — Read More
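The scalability trick is to skip building a stitched map of the whole field and instead georeference each detection directly from the single photo it appears in. The sketch below shows one common way to do that; it is not Safe Pro AI's actual pipeline, and it assumes flat terrain, a nadir-pointing camera, and hypothetical parameter names.

```python
import math


def detection_to_latlon(px, py, image_w, image_h,
                        drone_lat, drone_lon, altitude_m,
                        focal_mm, sensor_w_mm, yaw_deg):
    """Approximate lat/lon of a detected object from its pixel position in one photo."""
    # Ground sampling distance: metres of ground covered by one image pixel.
    gsd = (altitude_m * sensor_w_mm) / (focal_mm * image_w)

    # Offset of the detection from the image centre, in metres on the ground.
    dx = (px - image_w / 2) * gsd   # right of centre
    dy = (image_h / 2 - py) * gsd   # forward of centre (image y grows downward)

    # Rotate by the drone's heading to get east/north offsets.
    yaw = math.radians(yaw_deg)
    east = dx * math.cos(yaw) + dy * math.sin(yaw)
    north = dy * math.cos(yaw) - dx * math.sin(yaw)

    # Small-offset approximation: convert metres to degrees.
    lat = drone_lat + north / 111_320
    lon = drone_lon + east / (111_320 * math.cos(math.radians(drone_lat)))
    return lat, lon
```

Because each detection needs only its own image's metadata (GPS position, altitude, heading), the cost grows with the number of detections rather than with the 15,000+ photos that a full orthomosaic would require.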

#dod, #image-recognition